Sep 8 23:52:04.968381 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:08:00 -00 2025 Sep 8 23:52:04.968417 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=614c4ef85422d1b24559f161a4ad89cb626bb862dd1c761ed2d77c8a0665a1ae Sep 8 23:52:04.968430 kernel: BIOS-provided physical RAM map: Sep 8 23:52:04.968437 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 8 23:52:04.968443 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 8 23:52:04.968450 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 8 23:52:04.968458 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 8 23:52:04.968465 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 8 23:52:04.968472 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 8 23:52:04.968478 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 8 23:52:04.968485 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Sep 8 23:52:04.968494 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 8 23:52:04.968503 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 8 23:52:04.968510 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 8 23:52:04.968521 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 8 23:52:04.968528 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 8 23:52:04.968538 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable Sep 8 23:52:04.968545 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Sep 8 23:52:04.968553 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Sep 8 23:52:04.968560 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable Sep 8 23:52:04.968567 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 8 23:52:04.968574 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 8 23:52:04.968581 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 8 23:52:04.968588 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 8 23:52:04.968596 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 8 23:52:04.968603 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 8 23:52:04.968610 kernel: NX (Execute Disable) protection: active Sep 8 23:52:04.968620 kernel: APIC: Static calls initialized Sep 8 23:52:04.968627 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Sep 8 23:52:04.968634 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Sep 8 23:52:04.968641 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Sep 8 23:52:04.968648 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Sep 8 23:52:04.968655 kernel: extended physical RAM map: Sep 8 23:52:04.968662 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 8 23:52:04.968670 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000007fffff] usable Sep 8 23:52:04.968677 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 8 23:52:04.968684 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Sep 8 23:52:04.968691 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 8 23:52:04.968698 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 8 23:52:04.968716 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 8 23:52:04.968727 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable Sep 8 23:52:04.968735 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable Sep 8 23:52:04.968742 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable Sep 8 23:52:04.968750 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable Sep 8 23:52:04.968757 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable Sep 8 23:52:04.968769 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 8 23:52:04.968777 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 8 23:52:04.968784 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 8 23:52:04.968792 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 8 23:52:04.968799 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 8 23:52:04.968807 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable Sep 8 23:52:04.968814 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Sep 8 23:52:04.968822 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Sep 8 23:52:04.968829 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable Sep 8 23:52:04.968839 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 8 23:52:04.968846 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 8 23:52:04.968854 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 8 23:52:04.968861 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 8 23:52:04.968871 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 8 23:52:04.968878 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 8 23:52:04.968886 kernel: efi: EFI v2.7 by EDK II Sep 8 23:52:04.968893 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018 Sep 8 23:52:04.968901 kernel: random: crng init done Sep 8 23:52:04.968908 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Sep 8 23:52:04.968916 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Sep 8 23:52:04.968925 kernel: secureboot: Secure boot disabled Sep 8 23:52:04.968935 kernel: SMBIOS 2.8 present. 
Sep 8 23:52:04.968943 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Sep 8 23:52:04.968950 kernel: Hypervisor detected: KVM Sep 8 23:52:04.968958 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 8 23:52:04.968965 kernel: kvm-clock: using sched offset of 3509350218 cycles Sep 8 23:52:04.968973 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 8 23:52:04.968981 kernel: tsc: Detected 2794.748 MHz processor Sep 8 23:52:04.968989 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 8 23:52:04.968997 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 8 23:52:04.969018 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Sep 8 23:52:04.969030 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 8 23:52:04.969037 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 8 23:52:04.969045 kernel: Using GB pages for direct mapping Sep 8 23:52:04.969053 kernel: ACPI: Early table checksum verification disabled Sep 8 23:52:04.969060 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 8 23:52:04.969068 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 8 23:52:04.969076 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:52:04.969084 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:52:04.969091 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 8 23:52:04.969101 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:52:04.969109 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:52:04.969117 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:52:04.969125 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:52:04.969132 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 8 23:52:04.969140 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 8 23:52:04.969148 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 8 23:52:04.969155 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 8 23:52:04.969163 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 8 23:52:04.969173 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 8 23:52:04.969181 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 8 23:52:04.969188 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 8 23:52:04.969196 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 8 23:52:04.969203 kernel: No NUMA configuration found Sep 8 23:52:04.969211 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Sep 8 23:52:04.969218 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff] Sep 8 23:52:04.969226 kernel: Zone ranges: Sep 8 23:52:04.969233 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 8 23:52:04.969244 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Sep 8 23:52:04.969251 kernel: Normal empty Sep 8 23:52:04.969261 kernel: Movable zone start for each node Sep 8 23:52:04.969269 kernel: Early memory node ranges Sep 8 23:52:04.969276 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 8 23:52:04.969284 kernel: node 
0: [mem 0x0000000000100000-0x00000000007fffff] Sep 8 23:52:04.969292 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 8 23:52:04.969299 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Sep 8 23:52:04.969307 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Sep 8 23:52:04.969317 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Sep 8 23:52:04.969324 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff] Sep 8 23:52:04.969332 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff] Sep 8 23:52:04.969339 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Sep 8 23:52:04.969347 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 8 23:52:04.969355 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 8 23:52:04.969371 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 8 23:52:04.969381 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 8 23:52:04.969389 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Sep 8 23:52:04.969397 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Sep 8 23:52:04.969405 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 8 23:52:04.969415 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Sep 8 23:52:04.969425 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Sep 8 23:52:04.969433 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 8 23:52:04.969443 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 8 23:52:04.969454 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 8 23:52:04.969465 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 8 23:52:04.969480 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 8 23:52:04.969490 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 8 23:52:04.969498 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 8 23:52:04.969506 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 8 23:52:04.969514 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 8 23:52:04.969522 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 8 23:52:04.969530 kernel: TSC deadline timer available Sep 8 23:52:04.969538 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 8 23:52:04.969545 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 8 23:52:04.969556 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 8 23:52:04.969564 kernel: kvm-guest: setup PV sched yield Sep 8 23:52:04.969572 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Sep 8 23:52:04.969580 kernel: Booting paravirtualized kernel on KVM Sep 8 23:52:04.969588 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 8 23:52:04.969597 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 8 23:52:04.969608 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288 Sep 8 23:52:04.969619 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152 Sep 8 23:52:04.969629 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 8 23:52:04.969642 kernel: kvm-guest: PV spinlocks enabled Sep 8 23:52:04.969650 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 8 23:52:04.969660 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=614c4ef85422d1b24559f161a4ad89cb626bb862dd1c761ed2d77c8a0665a1ae Sep 8 23:52:04.969668 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 8 23:52:04.969676 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 8 23:52:04.969687 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 8 23:52:04.969695 kernel: Fallback order for Node 0: 0 Sep 8 23:52:04.969703 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460 Sep 8 23:52:04.969722 kernel: Policy zone: DMA32 Sep 8 23:52:04.969730 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 8 23:52:04.969739 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43504K init, 1572K bss, 177824K reserved, 0K cma-reserved) Sep 8 23:52:04.969747 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 8 23:52:04.969755 kernel: ftrace: allocating 37943 entries in 149 pages Sep 8 23:52:04.969762 kernel: ftrace: allocated 149 pages with 4 groups Sep 8 23:52:04.969770 kernel: Dynamic Preempt: voluntary Sep 8 23:52:04.969778 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 8 23:52:04.969787 kernel: rcu: RCU event tracing is enabled. Sep 8 23:52:04.969798 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 8 23:52:04.969806 kernel: Trampoline variant of Tasks RCU enabled. Sep 8 23:52:04.969814 kernel: Rude variant of Tasks RCU enabled. Sep 8 23:52:04.969822 kernel: Tracing variant of Tasks RCU enabled. Sep 8 23:52:04.969830 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 8 23:52:04.969838 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 8 23:52:04.969846 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 8 23:52:04.969854 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 8 23:52:04.969862 kernel: Console: colour dummy device 80x25 Sep 8 23:52:04.969872 kernel: printk: console [ttyS0] enabled Sep 8 23:52:04.969880 kernel: ACPI: Core revision 20230628 Sep 8 23:52:04.969888 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 8 23:52:04.969896 kernel: APIC: Switch to symmetric I/O mode setup Sep 8 23:52:04.969904 kernel: x2apic enabled Sep 8 23:52:04.969912 kernel: APIC: Switched APIC routing to: physical x2apic Sep 8 23:52:04.969922 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 8 23:52:04.969931 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 8 23:52:04.969939 kernel: kvm-guest: setup PV IPIs Sep 8 23:52:04.969949 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 8 23:52:04.969957 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 8 23:52:04.969965 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Sep 8 23:52:04.969973 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 8 23:52:04.969981 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 8 23:52:04.969989 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 8 23:52:04.969997 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 8 23:52:04.970019 kernel: Spectre V2 : Mitigation: Retpolines Sep 8 23:52:04.970028 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 8 23:52:04.970039 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 8 23:52:04.970047 kernel: active return thunk: retbleed_return_thunk Sep 8 23:52:04.970055 kernel: RETBleed: Mitigation: untrained return thunk Sep 8 23:52:04.970063 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 8 23:52:04.970071 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 8 23:52:04.970079 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 8 23:52:04.970088 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 8 23:52:04.970098 kernel: active return thunk: srso_return_thunk Sep 8 23:52:04.970106 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 8 23:52:04.970118 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 8 23:52:04.970128 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 8 23:52:04.970138 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 8 23:52:04.970148 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 8 23:52:04.970158 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 8 23:52:04.970168 kernel: Freeing SMP alternatives memory: 32K Sep 8 23:52:04.970178 kernel: pid_max: default: 32768 minimum: 301 Sep 8 23:52:04.970187 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 8 23:52:04.970197 kernel: landlock: Up and running. Sep 8 23:52:04.970210 kernel: SELinux: Initializing. Sep 8 23:52:04.970220 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 8 23:52:04.970230 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 8 23:52:04.970240 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 8 23:52:04.970250 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 8 23:52:04.970260 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 8 23:52:04.970270 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 8 23:52:04.970280 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 8 23:52:04.970294 kernel: ... version: 0 Sep 8 23:52:04.970303 kernel: ... bit width: 48 Sep 8 23:52:04.970313 kernel: ... generic registers: 6 Sep 8 23:52:04.970322 kernel: ... value mask: 0000ffffffffffff Sep 8 23:52:04.970330 kernel: ... max period: 00007fffffffffff Sep 8 23:52:04.970338 kernel: ... fixed-purpose events: 0 Sep 8 23:52:04.970345 kernel: ... 
event mask: 000000000000003f Sep 8 23:52:04.970353 kernel: signal: max sigframe size: 1776 Sep 8 23:52:04.970361 kernel: rcu: Hierarchical SRCU implementation. Sep 8 23:52:04.970369 kernel: rcu: Max phase no-delay instances is 400. Sep 8 23:52:04.970380 kernel: smp: Bringing up secondary CPUs ... Sep 8 23:52:04.970388 kernel: smpboot: x86: Booting SMP configuration: Sep 8 23:52:04.970395 kernel: .... node #0, CPUs: #1 #2 #3 Sep 8 23:52:04.970403 kernel: smp: Brought up 1 node, 4 CPUs Sep 8 23:52:04.970411 kernel: smpboot: Max logical packages: 1 Sep 8 23:52:04.970419 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 8 23:52:04.970427 kernel: devtmpfs: initialized Sep 8 23:52:04.970435 kernel: x86/mm: Memory block size: 128MB Sep 8 23:52:04.970443 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 8 23:52:04.970453 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 8 23:52:04.970462 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Sep 8 23:52:04.970470 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 8 23:52:04.970477 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes) Sep 8 23:52:04.970486 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 8 23:52:04.970494 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 8 23:52:04.970502 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 8 23:52:04.970510 kernel: pinctrl core: initialized pinctrl subsystem Sep 8 23:52:04.970520 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 8 23:52:04.970528 kernel: audit: initializing netlink subsys (disabled) Sep 8 23:52:04.970536 kernel: audit: type=2000 audit(1757375524.266:1): state=initialized audit_enabled=0 res=1 Sep 8 23:52:04.970544 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 8 23:52:04.970552 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 8 23:52:04.970560 kernel: cpuidle: using governor menu Sep 8 23:52:04.970568 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 8 23:52:04.970576 kernel: dca service started, version 1.12.1 Sep 8 23:52:04.970584 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Sep 8 23:52:04.970594 kernel: PCI: Using configuration type 1 for base access Sep 8 23:52:04.970602 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 8 23:52:04.970610 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 8 23:52:04.970618 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 8 23:52:04.970626 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 8 23:52:04.970634 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 8 23:52:04.970642 kernel: ACPI: Added _OSI(Module Device) Sep 8 23:52:04.970650 kernel: ACPI: Added _OSI(Processor Device) Sep 8 23:52:04.970658 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 8 23:52:04.970668 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 8 23:52:04.970676 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 8 23:52:04.970684 kernel: ACPI: Interpreter enabled Sep 8 23:52:04.970692 kernel: ACPI: PM: (supports S0 S3 S5) Sep 8 23:52:04.970699 kernel: ACPI: Using IOAPIC for interrupt routing Sep 8 23:52:04.970715 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 8 23:52:04.970723 kernel: PCI: Using E820 reservations for host bridge windows Sep 8 23:52:04.970731 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 8 23:52:04.970739 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 8 23:52:04.970978 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 8 23:52:04.971167 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 8 23:52:04.971302 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 8 23:52:04.971313 kernel: PCI host bridge to bus 0000:00 Sep 8 23:52:04.971461 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 8 23:52:04.971585 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 8 23:52:04.971720 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 8 23:52:04.971843 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Sep 8 23:52:04.971963 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Sep 8 23:52:04.972116 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Sep 8 23:52:04.972245 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 8 23:52:04.972420 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 8 23:52:04.972571 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 8 23:52:04.972722 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Sep 8 23:52:04.972882 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Sep 8 23:52:04.973033 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 8 23:52:04.973168 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Sep 8 23:52:04.973318 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 8 23:52:04.973487 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 8 23:52:04.973631 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Sep 8 23:52:04.973782 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Sep 8 23:52:04.973918 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] Sep 8 23:52:04.974096 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 8 23:52:04.974236 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Sep 8 23:52:04.974369 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Sep 8 23:52:04.974502 
kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] Sep 8 23:52:04.974648 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 8 23:52:04.974798 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Sep 8 23:52:04.974954 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Sep 8 23:52:04.975102 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] Sep 8 23:52:04.975263 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Sep 8 23:52:04.975428 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 8 23:52:04.975561 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 8 23:52:04.975722 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 8 23:52:04.975858 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Sep 8 23:52:04.975989 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Sep 8 23:52:04.976187 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 8 23:52:04.976323 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Sep 8 23:52:04.976335 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 8 23:52:04.976343 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 8 23:52:04.976352 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 8 23:52:04.976365 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 8 23:52:04.976372 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 8 23:52:04.976380 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 8 23:52:04.976388 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 8 23:52:04.976396 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 8 23:52:04.976404 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 8 23:52:04.976412 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 8 23:52:04.976420 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 8 23:52:04.976427 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 8 23:52:04.976438 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 8 23:52:04.976446 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 8 23:52:04.976454 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 8 23:52:04.976462 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 8 23:52:04.976470 kernel: iommu: Default domain type: Translated Sep 8 23:52:04.976477 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 8 23:52:04.976485 kernel: efivars: Registered efivars operations Sep 8 23:52:04.976493 kernel: PCI: Using ACPI for IRQ routing Sep 8 23:52:04.976501 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 8 23:52:04.976512 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 8 23:52:04.976519 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Sep 8 23:52:04.976527 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff] Sep 8 23:52:04.976535 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff] Sep 8 23:52:04.976543 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Sep 8 23:52:04.976551 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Sep 8 23:52:04.976559 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff] Sep 8 23:52:04.976567 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Sep 8 23:52:04.976703 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 8 
23:52:04.976845 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 8 23:52:04.976976 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 8 23:52:04.976987 kernel: vgaarb: loaded Sep 8 23:52:04.976996 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 8 23:52:04.977016 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 8 23:52:04.977025 kernel: clocksource: Switched to clocksource kvm-clock Sep 8 23:52:04.977033 kernel: VFS: Disk quotas dquot_6.6.0 Sep 8 23:52:04.977041 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 8 23:52:04.977053 kernel: pnp: PnP ACPI init Sep 8 23:52:04.977248 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Sep 8 23:52:04.977265 kernel: pnp: PnP ACPI: found 6 devices Sep 8 23:52:04.977275 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 8 23:52:04.977285 kernel: NET: Registered PF_INET protocol family Sep 8 23:52:04.977320 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 8 23:52:04.977334 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 8 23:52:04.977342 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 8 23:52:04.977353 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 8 23:52:04.977361 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 8 23:52:04.977370 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 8 23:52:04.977378 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 8 23:52:04.977386 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 8 23:52:04.977394 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 8 23:52:04.977405 kernel: NET: Registered PF_XDP protocol family Sep 8 23:52:04.977559 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Sep 8 23:52:04.977702 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Sep 8 23:52:04.977837 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 8 23:52:04.977960 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 8 23:52:04.978109 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 8 23:52:04.978232 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Sep 8 23:52:04.978352 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Sep 8 23:52:04.978472 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Sep 8 23:52:04.978483 kernel: PCI: CLS 0 bytes, default 64 Sep 8 23:52:04.978497 kernel: Initialise system trusted keyrings Sep 8 23:52:04.978505 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 8 23:52:04.978513 kernel: Key type asymmetric registered Sep 8 23:52:04.978521 kernel: Asymmetric key parser 'x509' registered Sep 8 23:52:04.978530 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 8 23:52:04.978538 kernel: io scheduler mq-deadline registered Sep 8 23:52:04.978546 kernel: io scheduler kyber registered Sep 8 23:52:04.978555 kernel: io scheduler bfq registered Sep 8 23:52:04.978563 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 8 23:52:04.978574 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 8 23:52:04.978583 kernel: ACPI: \_SB_.GSIH: Enabled at 
IRQ 23 Sep 8 23:52:04.978594 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 8 23:52:04.978602 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 8 23:52:04.978611 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 8 23:52:04.978619 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 8 23:52:04.978630 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 8 23:52:04.978638 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 8 23:52:04.978795 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 8 23:52:04.978925 kernel: rtc_cmos 00:04: registered as rtc0 Sep 8 23:52:04.978936 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Sep 8 23:52:04.979078 kernel: rtc_cmos 00:04: setting system clock to 2025-09-08T23:52:04 UTC (1757375524) Sep 8 23:52:04.979208 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 8 23:52:04.979226 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 8 23:52:04.979239 kernel: efifb: probing for efifb Sep 8 23:52:04.979247 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 8 23:52:04.979256 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 8 23:52:04.979281 kernel: efifb: scrolling: redraw Sep 8 23:52:04.979291 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 8 23:52:04.979308 kernel: Console: switching to colour frame buffer device 160x50 Sep 8 23:52:04.979317 kernel: fb0: EFI VGA frame buffer device Sep 8 23:52:04.979334 kernel: pstore: Using crash dump compression: deflate Sep 8 23:52:04.979343 kernel: pstore: Registered efi_pstore as persistent store backend Sep 8 23:52:04.979355 kernel: NET: Registered PF_INET6 protocol family Sep 8 23:52:04.979364 kernel: Segment Routing with IPv6 Sep 8 23:52:04.979372 kernel: In-situ OAM (IOAM) with IPv6 Sep 8 23:52:04.979380 kernel: NET: Registered PF_PACKET protocol family Sep 8 23:52:04.979388 kernel: Key type dns_resolver registered Sep 8 23:52:04.979396 kernel: IPI shorthand broadcast: enabled Sep 8 23:52:04.979404 kernel: sched_clock: Marking stable (1270003369, 153827571)->(1442371927, -18540987) Sep 8 23:52:04.979413 kernel: registered taskstats version 1 Sep 8 23:52:04.979421 kernel: Loading compiled-in X.509 certificates Sep 8 23:52:04.979433 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: c16a276a56169aed770943c7e14b6e7e5f4f7133' Sep 8 23:52:04.979441 kernel: Key type .fscrypt registered Sep 8 23:52:04.979449 kernel: Key type fscrypt-provisioning registered Sep 8 23:52:04.979457 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 8 23:52:04.979466 kernel: ima: Allocated hash algorithm: sha1 Sep 8 23:52:04.979474 kernel: ima: No architecture policies found Sep 8 23:52:04.979482 kernel: clk: Disabling unused clocks Sep 8 23:52:04.979501 kernel: Freeing unused kernel image (initmem) memory: 43504K Sep 8 23:52:04.979509 kernel: Write protecting the kernel read-only data: 38912k Sep 8 23:52:04.979521 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K Sep 8 23:52:04.979530 kernel: Run /init as init process Sep 8 23:52:04.979538 kernel: with arguments: Sep 8 23:52:04.979546 kernel: /init Sep 8 23:52:04.979554 kernel: with environment: Sep 8 23:52:04.979562 kernel: HOME=/ Sep 8 23:52:04.979570 kernel: TERM=linux Sep 8 23:52:04.979580 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 8 23:52:04.979592 systemd[1]: Successfully made /usr/ read-only. 
Sep 8 23:52:04.979607 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 8 23:52:04.979617 systemd[1]: Detected virtualization kvm. Sep 8 23:52:04.979625 systemd[1]: Detected architecture x86-64. Sep 8 23:52:04.979634 systemd[1]: Running in initrd. Sep 8 23:52:04.979643 systemd[1]: No hostname configured, using default hostname. Sep 8 23:52:04.979652 systemd[1]: Hostname set to <localhost>. Sep 8 23:52:04.979661 systemd[1]: Initializing machine ID from VM UUID. Sep 8 23:52:04.979672 systemd[1]: Queued start job for default target initrd.target. Sep 8 23:52:04.979681 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:52:04.979690 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:52:04.979700 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 8 23:52:04.979716 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 8 23:52:04.979726 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 8 23:52:04.979735 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 8 23:52:04.979748 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 8 23:52:04.979757 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 8 23:52:04.979766 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:52:04.979775 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:52:04.979784 systemd[1]: Reached target paths.target - Path Units. Sep 8 23:52:04.979793 systemd[1]: Reached target slices.target - Slice Units. Sep 8 23:52:04.979802 systemd[1]: Reached target swap.target - Swaps. Sep 8 23:52:04.979810 systemd[1]: Reached target timers.target - Timer Units. Sep 8 23:52:04.979822 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 8 23:52:04.979830 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 8 23:52:04.979839 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 8 23:52:04.979848 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 8 23:52:04.979857 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:52:04.979865 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 8 23:52:04.979874 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:52:04.979884 systemd[1]: Reached target sockets.target - Socket Units. Sep 8 23:52:04.979892 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 8 23:52:04.979904 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 8 23:52:04.979912 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 8 23:52:04.979921 systemd[1]: Starting systemd-fsck-usr.service...
Sep 8 23:52:04.979930 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 8 23:52:04.979939 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 8 23:52:04.979947 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:52:04.979956 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 8 23:52:04.979965 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:52:04.979977 systemd[1]: Finished systemd-fsck-usr.service. Sep 8 23:52:04.979986 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 8 23:52:04.980080 systemd-journald[192]: Collecting audit messages is disabled. Sep 8 23:52:04.980105 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:52:04.980115 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 8 23:52:04.980124 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 8 23:52:04.980133 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 8 23:52:04.980142 systemd-journald[192]: Journal started Sep 8 23:52:04.980166 systemd-journald[192]: Runtime Journal (/run/log/journal/99514cb6cb164feba18f4a01e8ae5289) is 6M, max 48.2M, 42.2M free. Sep 8 23:52:04.958948 systemd-modules-load[195]: Inserted module 'overlay' Sep 8 23:52:04.983107 systemd[1]: Started systemd-journald.service - Journal Service. Sep 8 23:52:04.987305 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 8 23:52:04.988028 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 8 23:52:04.990806 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:52:04.993454 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 8 23:52:04.996076 kernel: Bridge firewalling registered Sep 8 23:52:04.993836 systemd-modules-load[195]: Inserted module 'br_netfilter' Sep 8 23:52:04.994963 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 8 23:52:05.008197 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 8 23:52:05.009755 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:52:05.021106 dracut-cmdline[223]: dracut-dracut-053 Sep 8 23:52:05.022950 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:52:05.026291 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=614c4ef85422d1b24559f161a4ad89cb626bb862dd1c761ed2d77c8a0665a1ae Sep 8 23:52:05.032366 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:52:05.041264 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 8 23:52:05.093943 systemd-resolved[248]: Positive Trust Anchors: Sep 8 23:52:05.093968 systemd-resolved[248]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 8 23:52:05.093999 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 8 23:52:05.106717 systemd-resolved[248]: Defaulting to hostname 'linux'. Sep 8 23:52:05.109673 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 8 23:52:05.110373 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:52:05.133071 kernel: SCSI subsystem initialized Sep 8 23:52:05.142043 kernel: Loading iSCSI transport class v2.0-870. Sep 8 23:52:05.157060 kernel: iscsi: registered transport (tcp) Sep 8 23:52:05.180058 kernel: iscsi: registered transport (qla4xxx) Sep 8 23:52:05.180165 kernel: QLogic iSCSI HBA Driver Sep 8 23:52:05.234397 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 8 23:52:05.253172 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 8 23:52:05.279922 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 8 23:52:05.279986 kernel: device-mapper: uevent: version 1.0.3 Sep 8 23:52:05.280000 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 8 23:52:05.324062 kernel: raid6: avx2x4 gen() 30203 MB/s Sep 8 23:52:05.341039 kernel: raid6: avx2x2 gen() 30413 MB/s Sep 8 23:52:05.358074 kernel: raid6: avx2x1 gen() 25473 MB/s Sep 8 23:52:05.358117 kernel: raid6: using algorithm avx2x2 gen() 30413 MB/s Sep 8 23:52:05.376104 kernel: raid6: .... xor() 19755 MB/s, rmw enabled Sep 8 23:52:05.376159 kernel: raid6: using avx2x2 recovery algorithm Sep 8 23:52:05.397039 kernel: xor: automatically using best checksumming function avx Sep 8 23:52:05.550042 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 8 23:52:05.564407 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 8 23:52:05.581246 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:52:05.601078 systemd-udevd[415]: Using default interface naming scheme 'v255'. Sep 8 23:52:05.608198 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:52:05.621260 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 8 23:52:05.635319 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation Sep 8 23:52:05.670671 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 8 23:52:05.685211 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 8 23:52:05.756487 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:52:05.766200 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 8 23:52:05.780864 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 8 23:52:05.785157 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Sep 8 23:52:05.788175 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:52:05.789584 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 8 23:52:05.804032 kernel: cryptd: max_cpu_qlen set to 1000 Sep 8 23:52:05.808467 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 8 23:52:05.808189 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 8 23:52:05.814300 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 8 23:52:05.825438 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 8 23:52:05.828359 kernel: libata version 3.00 loaded. Sep 8 23:52:05.830082 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 8 23:52:05.830110 kernel: GPT:9289727 != 19775487 Sep 8 23:52:05.830132 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 8 23:52:05.831107 kernel: GPT:9289727 != 19775487 Sep 8 23:52:05.831133 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 8 23:52:05.833470 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:52:05.833495 kernel: AVX2 version of gcm_enc/dec engaged. Sep 8 23:52:05.833507 kernel: AES CTR mode by8 optimization enabled Sep 8 23:52:05.839088 kernel: ahci 0000:00:1f.2: version 3.0 Sep 8 23:52:05.843155 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 8 23:52:05.843192 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 8 23:52:05.843435 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 8 23:52:05.850049 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 8 23:52:05.850281 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:52:05.851262 kernel: scsi host0: ahci Sep 8 23:52:05.853025 kernel: scsi host1: ahci Sep 8 23:52:05.855636 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 8 23:52:05.857477 kernel: scsi host2: ahci Sep 8 23:52:05.857659 kernel: scsi host3: ahci Sep 8 23:52:05.865181 kernel: scsi host4: ahci Sep 8 23:52:05.865385 kernel: scsi host5: ahci Sep 8 23:52:05.860417 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 8 23:52:05.873669 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Sep 8 23:52:05.873697 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Sep 8 23:52:05.873709 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Sep 8 23:52:05.873720 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Sep 8 23:52:05.873734 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Sep 8 23:52:05.873744 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Sep 8 23:52:05.873755 kernel: BTRFS: device fsid 49c9ae6f-f48b-4b7d-8773-9ddfd8ce7dbf devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (463) Sep 8 23:52:05.860637 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:52:05.878427 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (474) Sep 8 23:52:05.862558 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:52:05.874817 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:52:05.895807 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Sep 8 23:52:05.931447 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 8 23:52:05.940394 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 8 23:52:05.941734 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 8 23:52:05.954538 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 8 23:52:05.974158 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 8 23:52:05.975269 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 8 23:52:05.975330 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:52:05.977598 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:52:05.980534 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:52:05.982537 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 8 23:52:05.986265 disk-uuid[557]: Primary Header is updated. Sep 8 23:52:05.986265 disk-uuid[557]: Secondary Entries is updated. Sep 8 23:52:05.986265 disk-uuid[557]: Secondary Header is updated. Sep 8 23:52:05.991045 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:52:05.996044 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:52:06.000099 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:52:06.010325 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 8 23:52:06.060619 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:52:06.183037 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 8 23:52:06.183112 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 8 23:52:06.184038 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 8 23:52:06.185037 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 8 23:52:06.185060 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 8 23:52:06.186030 kernel: ata3.00: applying bridge limits Sep 8 23:52:06.187034 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 8 23:52:06.187064 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 8 23:52:06.188032 kernel: ata3.00: configured for UDMA/100 Sep 8 23:52:06.190035 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 8 23:52:06.239517 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 8 23:52:06.239798 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 8 23:52:06.252032 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 8 23:52:06.996049 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:52:06.997371 disk-uuid[560]: The operation has completed successfully. Sep 8 23:52:07.036367 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 8 23:52:07.036509 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 8 23:52:07.084202 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 8 23:52:07.087689 sh[598]: Success Sep 8 23:52:07.103036 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 8 23:52:07.141707 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Sep 8 23:52:07.160403 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 8 23:52:07.163744 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 8 23:52:07.181499 kernel: BTRFS info (device dm-0): first mount of filesystem 49c9ae6f-f48b-4b7d-8773-9ddfd8ce7dbf Sep 8 23:52:07.181587 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 8 23:52:07.181601 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 8 23:52:07.182591 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 8 23:52:07.183323 kernel: BTRFS info (device dm-0): using free space tree Sep 8 23:52:07.189460 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 8 23:52:07.190450 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 8 23:52:07.205261 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 8 23:52:07.207465 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 8 23:52:07.229427 kernel: BTRFS info (device vda6): first mount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 8 23:52:07.229523 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 8 23:52:07.229551 kernel: BTRFS info (device vda6): using free space tree Sep 8 23:52:07.234036 kernel: BTRFS info (device vda6): auto enabling async discard Sep 8 23:52:07.240042 kernel: BTRFS info (device vda6): last unmount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 8 23:52:07.246274 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 8 23:52:07.252245 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 8 23:52:07.405514 ignition[681]: Ignition 2.20.0 Sep 8 23:52:07.405536 ignition[681]: Stage: fetch-offline Sep 8 23:52:07.405615 ignition[681]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:52:07.405630 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:52:07.405824 ignition[681]: parsed url from cmdline: "" Sep 8 23:52:07.405828 ignition[681]: no config URL provided Sep 8 23:52:07.405840 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Sep 8 23:52:07.405868 ignition[681]: no config at "/usr/lib/ignition/user.ign" Sep 8 23:52:07.405917 ignition[681]: op(1): [started] loading QEMU firmware config module Sep 8 23:52:07.405922 ignition[681]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 8 23:52:07.415593 ignition[681]: op(1): [finished] loading QEMU firmware config module Sep 8 23:52:07.430673 ignition[681]: parsing config with SHA512: 25d1c895ad6b74f0e81a5ef19dcc0f46509fbf94d1fe524824dd77f40e134e6d7f23ec2a6f805cbf384c4b461b90cd77a5a4b706ccb76dde2bfdbabcfc98eba2 Sep 8 23:52:07.436673 unknown[681]: fetched base config from "system" Sep 8 23:52:07.436686 unknown[681]: fetched user config from "qemu" Sep 8 23:52:07.438141 ignition[681]: fetch-offline: fetch-offline passed Sep 8 23:52:07.437305 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 8 23:52:07.438337 ignition[681]: Ignition finished successfully Sep 8 23:52:07.450147 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 8 23:52:07.451350 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Sep 8 23:52:07.479159 systemd-networkd[784]: lo: Link UP Sep 8 23:52:07.479169 systemd-networkd[784]: lo: Gained carrier Sep 8 23:52:07.481096 systemd-networkd[784]: Enumeration completed Sep 8 23:52:07.481186 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 8 23:52:07.481495 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:52:07.481499 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 8 23:52:07.482460 systemd-networkd[784]: eth0: Link UP Sep 8 23:52:07.482464 systemd-networkd[784]: eth0: Gained carrier Sep 8 23:52:07.482471 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:52:07.483674 systemd[1]: Reached target network.target - Network. Sep 8 23:52:07.485991 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 8 23:52:07.502063 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 8 23:52:07.502159 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 8 23:52:07.635815 ignition[788]: Ignition 2.20.0 Sep 8 23:52:07.635832 ignition[788]: Stage: kargs Sep 8 23:52:07.636101 ignition[788]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:52:07.636117 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:52:07.637244 ignition[788]: kargs: kargs passed Sep 8 23:52:07.637307 ignition[788]: Ignition finished successfully Sep 8 23:52:07.641146 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 8 23:52:07.657323 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 8 23:52:07.672748 ignition[798]: Ignition 2.20.0 Sep 8 23:52:07.672774 ignition[798]: Stage: disks Sep 8 23:52:07.673072 ignition[798]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:52:07.677484 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 8 23:52:07.673087 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:52:07.678955 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 8 23:52:07.674188 ignition[798]: disks: disks passed Sep 8 23:52:07.681165 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 8 23:52:07.674243 ignition[798]: Ignition finished successfully Sep 8 23:52:07.681698 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 8 23:52:07.682321 systemd[1]: Reached target sysinit.target - System Initialization. Sep 8 23:52:07.682711 systemd[1]: Reached target basic.target - Basic System. Sep 8 23:52:07.699194 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 8 23:52:07.713262 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 8 23:52:07.720880 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 8 23:52:07.732178 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 8 23:52:07.743461 systemd-resolved[248]: Detected conflict on linux IN A 10.0.0.53 Sep 8 23:52:07.743484 systemd-resolved[248]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. 
Sep 8 23:52:07.828037 kernel: EXT4-fs (vda9): mounted filesystem 4436772e-5166-41e3-9cb5-50bbb91cbcf6 r/w with ordered data mode. Quota mode: none. Sep 8 23:52:07.828494 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 8 23:52:07.830557 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 8 23:52:07.844172 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 8 23:52:07.846741 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 8 23:52:07.847574 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 8 23:52:07.847623 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 8 23:52:07.847668 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 8 23:52:07.858057 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (818) Sep 8 23:52:07.858084 kernel: BTRFS info (device vda6): first mount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 8 23:52:07.855549 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 8 23:52:07.862854 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 8 23:52:07.862877 kernel: BTRFS info (device vda6): using free space tree Sep 8 23:52:07.859149 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 8 23:52:07.866052 kernel: BTRFS info (device vda6): auto enabling async discard Sep 8 23:52:07.868977 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 8 23:52:07.912846 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory Sep 8 23:52:07.917807 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Sep 8 23:52:07.923216 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Sep 8 23:52:07.928525 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Sep 8 23:52:08.037364 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 8 23:52:08.057258 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 8 23:52:08.060085 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 8 23:52:08.084088 kernel: BTRFS info (device vda6): last unmount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 8 23:52:08.121230 ignition[931]: INFO : Ignition 2.20.0 Sep 8 23:52:08.122576 ignition[931]: INFO : Stage: mount Sep 8 23:52:08.123772 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:52:08.123772 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:52:08.125367 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 8 23:52:08.128716 ignition[931]: INFO : mount: mount passed Sep 8 23:52:08.129472 ignition[931]: INFO : Ignition finished successfully Sep 8 23:52:08.131929 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 8 23:52:08.146257 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 8 23:52:08.180888 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 8 23:52:08.194295 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 8 23:52:08.202995 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (945) Sep 8 23:52:08.203077 kernel: BTRFS info (device vda6): first mount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 8 23:52:08.203093 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 8 23:52:08.204427 kernel: BTRFS info (device vda6): using free space tree Sep 8 23:52:08.208025 kernel: BTRFS info (device vda6): auto enabling async discard Sep 8 23:52:08.210546 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 8 23:52:08.234153 ignition[962]: INFO : Ignition 2.20.0 Sep 8 23:52:08.235406 ignition[962]: INFO : Stage: files Sep 8 23:52:08.235406 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:52:08.235406 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:52:08.238587 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Sep 8 23:52:08.238587 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 8 23:52:08.238587 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 8 23:52:08.242650 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 8 23:52:08.242650 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 8 23:52:08.242650 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 8 23:52:08.240450 unknown[962]: wrote ssh authorized keys file for user: core Sep 8 23:52:08.248024 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 8 23:52:08.248024 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 8 23:52:08.371352 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 8 23:52:08.592864 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 8 23:52:08.594848 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 8 23:52:08.596466 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 8 23:52:08.598086 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 8 23:52:08.599849 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 8 23:52:08.601569 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 8 23:52:08.603375 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 8 23:52:08.605058 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 8 23:52:08.606913 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 8 23:52:08.608933 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing 
file "/sysroot/etc/flatcar/update.conf" Sep 8 23:52:08.610888 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 8 23:52:08.612603 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 8 23:52:08.615182 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 8 23:52:08.617572 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 8 23:52:08.619684 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 8 23:52:08.992503 systemd-networkd[784]: eth0: Gained IPv6LL Sep 8 23:52:09.115640 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 8 23:52:12.287057 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 8 23:52:12.287057 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 8 23:52:12.292355 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 8 23:52:12.292355 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 8 23:52:12.292355 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 8 23:52:12.292355 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 8 23:52:12.292355 ignition[962]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 8 23:52:12.292355 ignition[962]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 8 23:52:12.292355 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 8 23:52:12.292355 ignition[962]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 8 23:52:12.328038 ignition[962]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 8 23:52:12.333523 ignition[962]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 8 23:52:12.335422 ignition[962]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 8 23:52:12.335422 ignition[962]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 8 23:52:12.335422 ignition[962]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 8 23:52:12.335422 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 8 23:52:12.335422 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 8 
23:52:12.335422 ignition[962]: INFO : files: files passed Sep 8 23:52:12.335422 ignition[962]: INFO : Ignition finished successfully Sep 8 23:52:12.338094 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 8 23:52:12.354251 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 8 23:52:12.357748 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 8 23:52:12.359916 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 8 23:52:12.360063 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 8 23:52:12.369380 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory Sep 8 23:52:12.372201 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:52:12.372201 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:52:12.377439 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:52:12.374741 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 8 23:52:12.378262 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 8 23:52:12.385204 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 8 23:52:12.413377 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 8 23:52:12.413553 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 8 23:52:12.416409 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 8 23:52:12.417943 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 8 23:52:12.419947 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 8 23:52:12.435328 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 8 23:52:12.452388 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 8 23:52:12.467310 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 8 23:52:12.478701 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:52:12.480239 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:52:12.482903 systemd[1]: Stopped target timers.target - Timer Units. Sep 8 23:52:12.485174 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 8 23:52:12.485339 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 8 23:52:12.487882 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 8 23:52:12.489873 systemd[1]: Stopped target basic.target - Basic System. Sep 8 23:52:12.492260 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 8 23:52:12.494932 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 8 23:52:12.497209 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 8 23:52:12.499715 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 8 23:52:12.502154 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 8 23:52:12.504876 systemd[1]: Stopped target sysinit.target - System Initialization. 
Sep 8 23:52:12.507220 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 8 23:52:12.510462 systemd[1]: Stopped target swap.target - Swaps. Sep 8 23:52:12.512582 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 8 23:52:12.512759 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 8 23:52:12.515210 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:52:12.517079 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:52:12.519559 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 8 23:52:12.519688 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:52:12.522116 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 8 23:52:12.522286 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 8 23:52:12.524823 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 8 23:52:12.524968 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 8 23:52:12.527228 systemd[1]: Stopped target paths.target - Path Units. Sep 8 23:52:12.529300 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 8 23:52:12.531085 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:52:12.533518 systemd[1]: Stopped target slices.target - Slice Units. Sep 8 23:52:12.535692 systemd[1]: Stopped target sockets.target - Socket Units. Sep 8 23:52:12.537939 systemd[1]: iscsid.socket: Deactivated successfully. Sep 8 23:52:12.538074 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 8 23:52:12.540404 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 8 23:52:12.540532 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 8 23:52:12.542643 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 8 23:52:12.542777 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 8 23:52:12.545247 systemd[1]: ignition-files.service: Deactivated successfully. Sep 8 23:52:12.545361 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 8 23:52:12.559265 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 8 23:52:12.560813 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 8 23:52:12.560994 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:52:12.564777 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 8 23:52:12.566933 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 8 23:52:12.567204 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:52:12.569698 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 8 23:52:12.569966 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 8 23:52:12.577106 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 8 23:52:12.577261 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Sep 8 23:52:12.582182 ignition[1017]: INFO : Ignition 2.20.0 Sep 8 23:52:12.582182 ignition[1017]: INFO : Stage: umount Sep 8 23:52:12.582182 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:52:12.582182 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:52:12.582182 ignition[1017]: INFO : umount: umount passed Sep 8 23:52:12.582182 ignition[1017]: INFO : Ignition finished successfully Sep 8 23:52:12.588491 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 8 23:52:12.589530 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 8 23:52:12.592450 systemd[1]: Stopped target network.target - Network. Sep 8 23:52:12.594289 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 8 23:52:12.595309 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 8 23:52:12.597680 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 8 23:52:12.598754 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 8 23:52:12.600877 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 8 23:52:12.601943 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 8 23:52:12.604073 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 8 23:52:12.605212 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 8 23:52:12.607469 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 8 23:52:12.609769 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 8 23:52:12.613421 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 8 23:52:12.615202 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 8 23:52:12.616320 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 8 23:52:12.618666 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 8 23:52:12.619826 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 8 23:52:12.624720 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 8 23:52:12.626220 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 8 23:52:12.627313 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 8 23:52:12.630614 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 8 23:52:12.633642 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 8 23:52:12.634618 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:52:12.636990 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 8 23:52:12.637947 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 8 23:52:12.651140 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 8 23:52:12.651633 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 8 23:52:12.651710 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 8 23:52:12.652062 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 8 23:52:12.652133 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:52:12.656529 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 8 23:52:12.656587 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 8 23:52:12.656869 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Sep 8 23:52:12.656916 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:52:12.661696 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:52:12.665107 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 8 23:52:12.665197 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 8 23:52:12.673149 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 8 23:52:12.673315 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 8 23:52:12.678933 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 8 23:52:12.680033 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:52:12.682862 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 8 23:52:12.682920 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 8 23:52:12.685977 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 8 23:52:12.686037 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:52:12.688919 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 8 23:52:12.688986 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 8 23:52:12.692092 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 8 23:52:12.692152 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 8 23:52:12.694939 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 8 23:52:12.694999 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:52:12.712293 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 8 23:52:12.714538 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 8 23:52:12.714619 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 8 23:52:12.718064 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 8 23:52:12.718124 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:52:12.722191 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 8 23:52:12.722281 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 8 23:52:12.725419 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 8 23:52:12.726737 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 8 23:52:12.729598 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 8 23:52:12.751431 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 8 23:52:12.761188 systemd[1]: Switching root. Sep 8 23:52:12.803461 systemd-journald[192]: Journal stopped Sep 8 23:52:15.201537 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). 
Sep 8 23:52:15.202976 kernel: SELinux: policy capability network_peer_controls=1 Sep 8 23:52:15.203028 kernel: SELinux: policy capability open_perms=1 Sep 8 23:52:15.203049 kernel: SELinux: policy capability extended_socket_class=1 Sep 8 23:52:15.203066 kernel: SELinux: policy capability always_check_network=0 Sep 8 23:52:15.203092 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 8 23:52:15.203108 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 8 23:52:15.203128 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 8 23:52:15.203144 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 8 23:52:15.203160 kernel: audit: type=1403 audit(1757375533.766:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 8 23:52:15.203184 systemd[1]: Successfully loaded SELinux policy in 48.555ms. Sep 8 23:52:15.203221 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 20.825ms. Sep 8 23:52:15.203241 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 8 23:52:15.203260 systemd[1]: Detected virtualization kvm. Sep 8 23:52:15.203278 systemd[1]: Detected architecture x86-64. Sep 8 23:52:15.203296 systemd[1]: Detected first boot. Sep 8 23:52:15.203312 systemd[1]: Initializing machine ID from VM UUID. Sep 8 23:52:15.203329 zram_generator::config[1063]: No configuration found. Sep 8 23:52:15.203353 kernel: Guest personality initialized and is inactive Sep 8 23:52:15.203384 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 8 23:52:15.203400 kernel: Initialized host personality Sep 8 23:52:15.203425 kernel: NET: Registered PF_VSOCK protocol family Sep 8 23:52:15.203462 systemd[1]: Populated /etc with preset unit settings. Sep 8 23:52:15.203487 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 8 23:52:15.203515 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 8 23:52:15.203537 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 8 23:52:15.203555 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 8 23:52:15.203579 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 8 23:52:15.203601 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 8 23:52:15.203626 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 8 23:52:15.203644 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 8 23:52:15.203661 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 8 23:52:15.203679 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 8 23:52:15.203696 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 8 23:52:15.203712 systemd[1]: Created slice user.slice - User and Session Slice. Sep 8 23:52:15.203742 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:52:15.203763 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 8 23:52:15.203789 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 8 23:52:15.203807 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 8 23:52:15.203832 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 8 23:52:15.203852 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 8 23:52:15.203880 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 8 23:52:15.203902 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:52:15.203929 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 8 23:52:15.203947 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 8 23:52:15.203994 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 8 23:52:15.204047 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 8 23:52:15.204066 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:52:15.204091 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 8 23:52:15.204109 systemd[1]: Reached target slices.target - Slice Units. Sep 8 23:52:15.204126 systemd[1]: Reached target swap.target - Swaps. Sep 8 23:52:15.204143 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 8 23:52:15.204166 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 8 23:52:15.204184 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 8 23:52:15.204202 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:52:15.204219 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 8 23:52:15.204236 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:52:15.204253 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 8 23:52:15.204271 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 8 23:52:15.204294 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 8 23:52:15.204312 systemd[1]: Mounting media.mount - External Media Directory... Sep 8 23:52:15.204336 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:52:15.204354 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 8 23:52:15.204370 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 8 23:52:15.204387 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 8 23:52:15.204406 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 8 23:52:15.204423 systemd[1]: Reached target machines.target - Containers. Sep 8 23:52:15.204453 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 8 23:52:15.204474 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:52:15.204498 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 8 23:52:15.204516 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Sep 8 23:52:15.204533 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:52:15.204550 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 8 23:52:15.204568 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:52:15.204586 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 8 23:52:15.204604 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:52:15.204622 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 8 23:52:15.204641 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 8 23:52:15.204672 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 8 23:52:15.204708 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 8 23:52:15.204738 systemd[1]: Stopped systemd-fsck-usr.service. Sep 8 23:52:15.204759 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:52:15.204777 kernel: loop: module loaded Sep 8 23:52:15.204795 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 8 23:52:15.204820 kernel: fuse: init (API version 7.39) Sep 8 23:52:15.204840 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 8 23:52:15.204858 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 8 23:52:15.204899 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 8 23:52:15.204918 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 8 23:52:15.204934 kernel: ACPI: bus type drm_connector registered Sep 8 23:52:15.204950 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 8 23:52:15.204971 systemd[1]: verity-setup.service: Deactivated successfully. Sep 8 23:52:15.204988 systemd[1]: Stopped verity-setup.service. Sep 8 23:52:15.205023 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:52:15.205044 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 8 23:52:15.205066 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 8 23:52:15.205083 systemd[1]: Mounted media.mount - External Media Directory. Sep 8 23:52:15.205100 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 8 23:52:15.205119 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 8 23:52:15.205155 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 8 23:52:15.205201 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 8 23:52:15.205221 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:52:15.205277 systemd-journald[1141]: Collecting audit messages is disabled. Sep 8 23:52:15.205320 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 8 23:52:15.205346 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 8 23:52:15.205364 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 8 23:52:15.205382 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:52:15.205399 systemd-journald[1141]: Journal started Sep 8 23:52:15.206744 systemd-journald[1141]: Runtime Journal (/run/log/journal/99514cb6cb164feba18f4a01e8ae5289) is 6M, max 48.2M, 42.2M free. Sep 8 23:52:14.798890 systemd[1]: Queued start job for default target multi-user.target. Sep 8 23:52:14.814973 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 8 23:52:14.815655 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 8 23:52:15.208914 systemd[1]: Started systemd-journald.service - Journal Service. Sep 8 23:52:15.209745 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 8 23:52:15.210059 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 8 23:52:15.211992 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:52:15.212316 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:52:15.214204 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 8 23:52:15.214453 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 8 23:52:15.216347 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:52:15.216657 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:52:15.218409 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 8 23:52:15.220102 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 8 23:52:15.222261 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 8 23:52:15.224471 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 8 23:52:15.242594 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 8 23:52:15.254210 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 8 23:52:15.257321 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 8 23:52:15.258634 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 8 23:52:15.258682 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 8 23:52:15.261035 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 8 23:52:15.264225 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 8 23:52:15.270232 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 8 23:52:15.271942 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:52:15.273717 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 8 23:52:15.280477 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 8 23:52:15.282896 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 8 23:52:15.285213 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 8 23:52:15.292462 systemd-journald[1141]: Time spent on flushing to /var/log/journal/99514cb6cb164feba18f4a01e8ae5289 is 15.027ms for 1055 entries. 
Sep 8 23:52:15.292462 systemd-journald[1141]: System Journal (/var/log/journal/99514cb6cb164feba18f4a01e8ae5289) is 8M, max 195.6M, 187.6M free. Sep 8 23:52:15.408564 systemd-journald[1141]: Received client request to flush runtime journal. Sep 8 23:52:15.408666 kernel: loop0: detected capacity change from 0 to 221472 Sep 8 23:52:15.289277 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 8 23:52:15.298392 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:52:15.301752 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 8 23:52:15.307699 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 8 23:52:15.373092 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:52:15.376289 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 8 23:52:15.378108 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 8 23:52:15.380711 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 8 23:52:15.385152 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 8 23:52:15.390691 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 8 23:52:15.400316 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 8 23:52:15.403107 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 8 23:52:15.405638 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:52:15.410943 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 8 23:52:15.432977 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 8 23:52:15.441197 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 8 23:52:15.443985 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 8 23:52:15.454243 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 8 23:52:15.580269 udevadm[1194]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 8 23:52:15.596046 kernel: loop1: detected capacity change from 0 to 138176 Sep 8 23:52:15.609484 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Sep 8 23:52:15.609508 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Sep 8 23:52:15.621809 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 8 23:52:15.692050 kernel: loop2: detected capacity change from 0 to 147912 Sep 8 23:52:15.741266 kernel: loop3: detected capacity change from 0 to 221472 Sep 8 23:52:15.786069 kernel: loop4: detected capacity change from 0 to 138176 Sep 8 23:52:15.805074 kernel: loop5: detected capacity change from 0 to 147912 Sep 8 23:52:15.818202 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 8 23:52:15.821839 (sd-merge)[1209]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 8 23:52:15.824766 (sd-merge)[1209]: Merged extensions into '/usr'. Sep 8 23:52:15.830481 systemd[1]: Reload requested from client PID 1183 ('systemd-sysext') (unit systemd-sysext.service)... Sep 8 23:52:15.830498 systemd[1]: Reloading... 
Sep 8 23:52:15.997038 zram_generator::config[1234]: No configuration found. Sep 8 23:52:16.334598 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:52:16.434065 ldconfig[1178]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 8 23:52:16.446998 systemd[1]: Reloading finished in 615 ms. Sep 8 23:52:16.522432 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 8 23:52:16.524682 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 8 23:52:16.545030 systemd[1]: Starting ensure-sysext.service... Sep 8 23:52:16.548611 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 8 23:52:16.572101 systemd[1]: Reload requested from client PID 1274 ('systemctl') (unit ensure-sysext.service)... Sep 8 23:52:16.572124 systemd[1]: Reloading... Sep 8 23:52:16.614552 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 8 23:52:16.614925 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 8 23:52:16.617349 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 8 23:52:16.617782 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. Sep 8 23:52:16.617891 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. Sep 8 23:52:16.625291 systemd-tmpfiles[1275]: Detected autofs mount point /boot during canonicalization of boot. Sep 8 23:52:16.625317 systemd-tmpfiles[1275]: Skipping /boot Sep 8 23:52:16.650654 systemd-tmpfiles[1275]: Detected autofs mount point /boot during canonicalization of boot. Sep 8 23:52:16.650677 systemd-tmpfiles[1275]: Skipping /boot Sep 8 23:52:16.656077 zram_generator::config[1309]: No configuration found. Sep 8 23:52:16.782780 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:52:16.859258 systemd[1]: Reloading finished in 286 ms. Sep 8 23:52:16.874129 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 8 23:52:16.893830 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:52:16.905561 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 8 23:52:16.909411 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 8 23:52:16.913115 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 8 23:52:16.917999 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 8 23:52:16.931480 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:52:16.942138 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 8 23:52:16.948045 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 8 23:52:16.952281 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 8 23:52:16.953459 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:52:16.963480 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:52:16.966449 systemd-udevd[1347]: Using default interface naming scheme 'v255'. Sep 8 23:52:16.968251 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:52:16.972362 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:52:16.974309 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:52:16.974441 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:52:16.978513 augenrules[1372]: No rules Sep 8 23:52:16.983415 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 8 23:52:16.988098 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 8 23:52:16.989631 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:52:16.992515 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:52:16.993053 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 8 23:52:16.996245 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:52:16.996523 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:52:17.000141 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:52:17.000412 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:52:17.002374 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:52:17.002647 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:52:17.004458 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 8 23:52:17.009083 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:52:17.011635 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 8 23:52:17.041538 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 8 23:52:17.050853 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 8 23:52:17.059698 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:52:17.064722 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 8 23:52:17.066422 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:52:17.070242 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:52:17.077279 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 8 23:52:17.082891 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:52:17.087296 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:52:17.090442 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 8 23:52:17.090499 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:52:17.098166 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 8 23:52:17.102239 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 8 23:52:17.102288 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:52:17.105067 systemd[1]: Finished ensure-sysext.service. Sep 8 23:52:17.106486 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:52:17.106750 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:52:17.109270 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:52:17.109641 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:52:17.243704 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1384) Sep 8 23:52:17.256764 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 8 23:52:17.257303 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:52:17.257676 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:52:17.259662 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 8 23:52:17.259944 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 8 23:52:17.265588 augenrules[1408]: /sbin/augenrules: No change Sep 8 23:52:17.277074 augenrules[1443]: No rules Sep 8 23:52:17.279658 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:52:17.279978 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 8 23:52:17.291866 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 8 23:52:17.322052 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 8 23:52:17.322461 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 8 23:52:17.323837 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 8 23:52:17.323942 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 8 23:52:17.326680 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 8 23:52:17.327838 systemd-resolved[1346]: Positive Trust Anchors: Sep 8 23:52:17.327854 systemd-resolved[1346]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 8 23:52:17.327886 systemd-resolved[1346]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 8 23:52:17.333112 kernel: ACPI: button: Power Button [PWRF] Sep 8 23:52:17.332917 systemd-resolved[1346]: Defaulting to hostname 'linux'. Sep 8 23:52:17.338906 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 8 23:52:17.341081 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:52:17.353344 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Sep 8 23:52:17.369693 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 8 23:52:17.377708 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 8 23:52:17.378077 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 8 23:52:17.378256 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 8 23:52:17.378460 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 8 23:52:17.398978 systemd-networkd[1415]: lo: Link UP Sep 8 23:52:17.398989 systemd-networkd[1415]: lo: Gained carrier Sep 8 23:52:17.400932 systemd-networkd[1415]: Enumeration completed Sep 8 23:52:17.401064 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 8 23:52:17.403830 systemd[1]: Reached target network.target - Network. Sep 8 23:52:17.407419 systemd-networkd[1415]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:52:17.407506 systemd-networkd[1415]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 8 23:52:17.408973 systemd-networkd[1415]: eth0: Link UP Sep 8 23:52:17.410090 systemd-networkd[1415]: eth0: Gained carrier Sep 8 23:52:17.410132 systemd-networkd[1415]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:52:17.511309 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 8 23:52:17.515562 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 8 23:52:17.517383 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 8 23:52:17.518143 systemd-networkd[1415]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 8 23:52:17.522247 systemd-timesyncd[1452]: Network configuration changed, trying to establish connection. Sep 8 23:52:17.522333 systemd[1]: Reached target time-set.target - System Time Set. Sep 8 23:52:17.527598 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:52:18.294558 systemd-timesyncd[1452]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 8 23:52:18.294626 systemd-timesyncd[1452]: Initial clock synchronization to Mon 2025-09-08 23:52:18.294408 UTC. 
Sep 8 23:52:18.294680 systemd-resolved[1346]: Clock change detected. Flushing caches. Sep 8 23:52:18.339985 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 8 23:52:18.347180 kernel: mousedev: PS/2 mouse device common for all mice Sep 8 23:52:18.355836 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 8 23:52:18.356376 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:52:18.361306 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 8 23:52:18.381254 kernel: kvm_amd: TSC scaling supported Sep 8 23:52:18.381320 kernel: kvm_amd: Nested Virtualization enabled Sep 8 23:52:18.381335 kernel: kvm_amd: Nested Paging enabled Sep 8 23:52:18.381360 kernel: kvm_amd: LBR virtualization supported Sep 8 23:52:18.381391 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 8 23:52:18.381447 kernel: kvm_amd: Virtual GIF supported Sep 8 23:52:18.377289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:52:18.411146 kernel: EDAC MC: Ver: 3.0.0 Sep 8 23:52:18.455846 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 8 23:52:18.457640 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:52:18.471378 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 8 23:52:18.483901 lvm[1476]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 8 23:52:18.524298 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 8 23:52:18.526439 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:52:18.527939 systemd[1]: Reached target sysinit.target - System Initialization. Sep 8 23:52:18.529558 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 8 23:52:18.531194 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 8 23:52:18.533360 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 8 23:52:18.534911 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 8 23:52:18.536566 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 8 23:52:18.538121 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 8 23:52:18.538175 systemd[1]: Reached target paths.target - Path Units. Sep 8 23:52:18.539375 systemd[1]: Reached target timers.target - Timer Units. Sep 8 23:52:18.541798 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 8 23:52:18.544891 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 8 23:52:18.549240 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 8 23:52:18.550825 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 8 23:52:18.552077 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 8 23:52:18.558849 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 8 23:52:18.561002 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Sep 8 23:52:18.564762 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 8 23:52:18.567036 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 8 23:52:18.568621 systemd[1]: Reached target sockets.target - Socket Units. Sep 8 23:52:18.569877 systemd[1]: Reached target basic.target - Basic System. Sep 8 23:52:18.571169 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 8 23:52:18.571214 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 8 23:52:18.573074 systemd[1]: Starting containerd.service - containerd container runtime... Sep 8 23:52:18.575640 lvm[1480]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 8 23:52:18.576220 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 8 23:52:18.581311 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 8 23:52:18.584273 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 8 23:52:18.586079 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 8 23:52:18.588378 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 8 23:52:18.592969 jq[1483]: false Sep 8 23:52:18.599432 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 8 23:52:18.603303 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 8 23:52:18.607326 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 8 23:52:18.611369 dbus-daemon[1482]: [system] SELinux support is enabled Sep 8 23:52:18.615091 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 8 23:52:18.617257 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 8 23:52:18.617874 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 8 23:52:18.619117 systemd[1]: Starting update-engine.service - Update Engine... Sep 8 23:52:18.624233 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 8 23:52:18.626789 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 8 23:52:18.631636 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Sep 8 23:52:18.633396 jq[1497]: true Sep 8 23:52:18.636167 extend-filesystems[1484]: Found loop3 Sep 8 23:52:18.636167 extend-filesystems[1484]: Found loop4 Sep 8 23:52:18.636167 extend-filesystems[1484]: Found loop5 Sep 8 23:52:18.636167 extend-filesystems[1484]: Found sr0 Sep 8 23:52:18.636167 extend-filesystems[1484]: Found vda Sep 8 23:52:18.636167 extend-filesystems[1484]: Found vda1 Sep 8 23:52:18.636167 extend-filesystems[1484]: Found vda2 Sep 8 23:52:18.636167 extend-filesystems[1484]: Found vda3 Sep 8 23:52:18.636167 extend-filesystems[1484]: Found usr Sep 8 23:52:18.636167 extend-filesystems[1484]: Found vda4 Sep 8 23:52:18.636167 extend-filesystems[1484]: Found vda6 Sep 8 23:52:18.636167 extend-filesystems[1484]: Found vda7 Sep 8 23:52:18.636167 extend-filesystems[1484]: Found vda9 Sep 8 23:52:18.636167 extend-filesystems[1484]: Checking size of /dev/vda9 Sep 8 23:52:18.638928 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 8 23:52:18.657615 update_engine[1496]: I20250908 23:52:18.654446 1496 main.cc:92] Flatcar Update Engine starting Sep 8 23:52:18.639299 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 8 23:52:18.639783 systemd[1]: motdgen.service: Deactivated successfully. Sep 8 23:52:18.640244 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 8 23:52:18.644026 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 8 23:52:18.644296 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 8 23:52:18.663624 update_engine[1496]: I20250908 23:52:18.663532 1496 update_check_scheduler.cc:74] Next update check in 2m8s Sep 8 23:52:18.667178 extend-filesystems[1484]: Resized partition /dev/vda9 Sep 8 23:52:18.676938 extend-filesystems[1511]: resize2fs 1.47.1 (20-May-2024) Sep 8 23:52:18.678001 (ntainerd)[1510]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 8 23:52:18.686911 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 8 23:52:18.690638 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 8 23:52:18.690675 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 8 23:52:18.692079 tar[1503]: linux-amd64/helm Sep 8 23:52:18.692815 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 8 23:52:18.692844 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 8 23:52:18.697768 jq[1505]: true Sep 8 23:52:18.695368 systemd[1]: Started update-engine.service - Update Engine. Sep 8 23:52:18.710167 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1384) Sep 8 23:52:18.724337 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 8 23:52:18.729825 systemd-logind[1495]: Watching system buttons on /dev/input/event1 (Power Button) Sep 8 23:52:18.729847 systemd-logind[1495]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 8 23:52:18.735503 systemd-logind[1495]: New seat seat0. 
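For scale, the block counts in the EXT4 resize messages above translate to sizes as follows. This is an illustrative calculation only; the 4 KiB block size is the "(4k)" figure resize2fs reports a little further on in the log:

    # Block counts from the "EXT4-fs (vda9): resizing filesystem" message above.
    BLOCK_SIZE = 4096  # bytes; matches the "(4k) blocks" reported by resize2fs below
    old_blocks = 553_472
    new_blocks = 1_864_699

    to_gib = lambda blocks: blocks * BLOCK_SIZE / 2**30
    print(f"/dev/vda9 before resize: {to_gib(old_blocks):.2f} GiB")  # ~2.11 GiB
    print(f"/dev/vda9 after resize:  {to_gib(new_blocks):.2f} GiB")  # ~7.11 GiB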
Sep 8 23:52:18.742248 systemd[1]: Started systemd-logind.service - User Login Management. Sep 8 23:52:18.778535 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 8 23:52:18.823016 extend-filesystems[1511]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 8 23:52:18.823016 extend-filesystems[1511]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 8 23:52:18.823016 extend-filesystems[1511]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 8 23:52:19.002083 extend-filesystems[1484]: Resized filesystem in /dev/vda9 Sep 8 23:52:19.001626 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 8 23:52:19.004052 sshd_keygen[1502]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 8 23:52:19.004248 bash[1536]: Updated "/home/core/.ssh/authorized_keys" Sep 8 23:52:19.002026 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 8 23:52:19.005837 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 8 23:52:19.008009 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 8 23:52:19.032397 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 8 23:52:19.033841 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 8 23:52:19.043404 locksmithd[1522]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 8 23:52:19.045582 systemd[1]: issuegen.service: Deactivated successfully. Sep 8 23:52:19.045941 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 8 23:52:19.060780 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 8 23:52:19.082778 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 8 23:52:19.095903 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 8 23:52:19.099682 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 8 23:52:19.101550 systemd[1]: Reached target getty.target - Login Prompts. Sep 8 23:52:19.544651 systemd-networkd[1415]: eth0: Gained IPv6LL Sep 8 23:52:19.590892 containerd[1510]: time="2025-09-08T23:52:19.585731304Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 8 23:52:19.642783 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 8 23:52:19.644792 systemd[1]: Reached target network-online.target - Network is Online. Sep 8 23:52:19.659511 containerd[1510]: time="2025-09-08T23:52:19.659414553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:52:19.661704 containerd[1510]: time="2025-09-08T23:52:19.661649635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:52:19.661704 containerd[1510]: time="2025-09-08T23:52:19.661683008Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 8 23:52:19.661704 containerd[1510]: time="2025-09-08T23:52:19.661699839Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 8 23:52:19.661974 containerd[1510]: time="2025-09-08T23:52:19.661947153Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Sep 8 23:52:19.661974 containerd[1510]: time="2025-09-08T23:52:19.661972200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 8 23:52:19.662978 containerd[1510]: time="2025-09-08T23:52:19.662947540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:52:19.662978 containerd[1510]: time="2025-09-08T23:52:19.662968269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:52:19.663310 containerd[1510]: time="2025-09-08T23:52:19.663279693Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:52:19.663310 containerd[1510]: time="2025-09-08T23:52:19.663301614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 8 23:52:19.663360 containerd[1510]: time="2025-09-08T23:52:19.663314959Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:52:19.663360 containerd[1510]: time="2025-09-08T23:52:19.663324697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 8 23:52:19.663474 containerd[1510]: time="2025-09-08T23:52:19.663447578Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:52:19.663814 containerd[1510]: time="2025-09-08T23:52:19.663780242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:52:19.664024 containerd[1510]: time="2025-09-08T23:52:19.663997239Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:52:19.664024 containerd[1510]: time="2025-09-08T23:52:19.664014681Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 8 23:52:19.664179 containerd[1510]: time="2025-09-08T23:52:19.664156407Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 8 23:52:19.664878 containerd[1510]: time="2025-09-08T23:52:19.664286401Z" level=info msg="metadata content store policy set" policy=shared Sep 8 23:52:19.676675 containerd[1510]: time="2025-09-08T23:52:19.676592654Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 8 23:52:19.677008 containerd[1510]: time="2025-09-08T23:52:19.676961305Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 8 23:52:19.677093 containerd[1510]: time="2025-09-08T23:52:19.677077143Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 8 23:52:19.677189 containerd[1510]: time="2025-09-08T23:52:19.677172001Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Sep 8 23:52:19.677320 containerd[1510]: time="2025-09-08T23:52:19.677300882Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 8 23:52:19.677727 containerd[1510]: time="2025-09-08T23:52:19.677704720Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 8 23:52:19.678140 containerd[1510]: time="2025-09-08T23:52:19.678118526Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 8 23:52:19.678437 containerd[1510]: time="2025-09-08T23:52:19.678405194Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 8 23:52:19.678913 containerd[1510]: time="2025-09-08T23:52:19.678498970Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 8 23:52:19.678913 containerd[1510]: time="2025-09-08T23:52:19.678520851Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 8 23:52:19.678913 containerd[1510]: time="2025-09-08T23:52:19.678537542Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 8 23:52:19.678913 containerd[1510]: time="2025-09-08T23:52:19.678555145Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 8 23:52:19.678913 containerd[1510]: time="2025-09-08T23:52:19.678570263Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 8 23:52:19.678913 containerd[1510]: time="2025-09-08T23:52:19.678585803Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 8 23:52:19.678913 containerd[1510]: time="2025-09-08T23:52:19.678604257Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 8 23:52:19.678913 containerd[1510]: time="2025-09-08T23:52:19.678619987Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 8 23:52:19.678913 containerd[1510]: time="2025-09-08T23:52:19.678635195Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 8 23:52:19.678913 containerd[1510]: time="2025-09-08T23:52:19.678649813Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 8 23:52:19.678913 containerd[1510]: time="2025-09-08T23:52:19.678692453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 8 23:52:19.678913 containerd[1510]: time="2025-09-08T23:52:19.678718622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 8 23:52:19.678913 containerd[1510]: time="2025-09-08T23:52:19.678734572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 8 23:52:19.678913 containerd[1510]: time="2025-09-08T23:52:19.678750672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 8 23:52:19.679366 containerd[1510]: time="2025-09-08T23:52:19.678766351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Sep 8 23:52:19.679366 containerd[1510]: time="2025-09-08T23:52:19.678782942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 8 23:52:19.679366 containerd[1510]: time="2025-09-08T23:52:19.678799554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 8 23:52:19.679366 containerd[1510]: time="2025-09-08T23:52:19.678817758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 8 23:52:19.679366 containerd[1510]: time="2025-09-08T23:52:19.678839318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 8 23:52:19.679366 containerd[1510]: time="2025-09-08T23:52:19.678859386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 8 23:52:19.679366 containerd[1510]: time="2025-09-08T23:52:19.678876207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 8 23:52:19.679607 containerd[1510]: time="2025-09-08T23:52:19.679583454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 8 23:52:19.679870 containerd[1510]: time="2025-09-08T23:52:19.679847509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 8 23:52:19.679950 containerd[1510]: time="2025-09-08T23:52:19.679934683Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 8 23:52:19.680038 containerd[1510]: time="2025-09-08T23:52:19.680021976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 8 23:52:19.680128 containerd[1510]: time="2025-09-08T23:52:19.680111254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 8 23:52:19.680221 containerd[1510]: time="2025-09-08T23:52:19.680203607Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 8 23:52:19.680345 containerd[1510]: time="2025-09-08T23:52:19.680326868Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 8 23:52:19.680440 containerd[1510]: time="2025-09-08T23:52:19.680409143Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 8 23:52:19.680509 containerd[1510]: time="2025-09-08T23:52:19.680492349Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 8 23:52:19.680581 containerd[1510]: time="2025-09-08T23:52:19.680561288Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 8 23:52:19.680642 containerd[1510]: time="2025-09-08T23:52:19.680627532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 8 23:52:19.680732 containerd[1510]: time="2025-09-08T23:52:19.680715097Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 8 23:52:19.682979 containerd[1510]: time="2025-09-08T23:52:19.680793303Z" level=info msg="NRI interface is disabled by configuration." 
Sep 8 23:52:19.682979 containerd[1510]: time="2025-09-08T23:52:19.680816256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 8 23:52:19.683077 containerd[1510]: time="2025-09-08T23:52:19.681280567Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 8 23:52:19.683077 containerd[1510]: time="2025-09-08T23:52:19.681348986Z" level=info msg="Connect containerd service" Sep 8 23:52:19.683077 containerd[1510]: time="2025-09-08T23:52:19.681399300Z" level=info msg="using legacy CRI server" Sep 8 23:52:19.683077 containerd[1510]: time="2025-09-08T23:52:19.681413066Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 8 23:52:19.683077 containerd[1510]: time="2025-09-08T23:52:19.681592202Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 8 23:52:19.683077 containerd[1510]: time="2025-09-08T23:52:19.682739083Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni 
config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 8 23:52:19.683077 containerd[1510]: time="2025-09-08T23:52:19.682966049Z" level=info msg="Start subscribing containerd event" Sep 8 23:52:19.683077 containerd[1510]: time="2025-09-08T23:52:19.683080614Z" level=info msg="Start recovering state" Sep 8 23:52:19.683540 containerd[1510]: time="2025-09-08T23:52:19.683253007Z" level=info msg="Start event monitor" Sep 8 23:52:19.683540 containerd[1510]: time="2025-09-08T23:52:19.683273746Z" level=info msg="Start snapshots syncer" Sep 8 23:52:19.683540 containerd[1510]: time="2025-09-08T23:52:19.683286640Z" level=info msg="Start cni network conf syncer for default" Sep 8 23:52:19.683540 containerd[1510]: time="2025-09-08T23:52:19.683296999Z" level=info msg="Start streaming server" Sep 8 23:52:19.683968 containerd[1510]: time="2025-09-08T23:52:19.683942470Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 8 23:52:19.684241 containerd[1510]: time="2025-09-08T23:52:19.684218257Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 8 23:52:19.684402 containerd[1510]: time="2025-09-08T23:52:19.684381013Z" level=info msg="containerd successfully booted in 0.138927s" Sep 8 23:52:19.690636 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 8 23:52:19.694356 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:52:19.697115 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 8 23:52:19.698756 systemd[1]: Started containerd.service - containerd container runtime. Sep 8 23:52:19.733388 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 8 23:52:19.733826 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 8 23:52:19.735777 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 8 23:52:19.736978 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 8 23:52:19.941044 tar[1503]: linux-amd64/LICENSE Sep 8 23:52:19.941630 tar[1503]: linux-amd64/README.md Sep 8 23:52:19.958128 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 8 23:52:21.893804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:52:21.896393 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 8 23:52:21.903231 systemd[1]: Startup finished in 1.420s (kernel) + 9.023s (initrd) + 7.424s (userspace) = 17.869s. Sep 8 23:52:21.906735 (kubelet)[1596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:52:22.777984 kubelet[1596]: E0908 23:52:22.777865 1596 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:52:22.783850 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:52:22.784180 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:52:22.784754 systemd[1]: kubelet.service: Consumed 2.850s CPU time, 269M memory peak. Sep 8 23:52:28.517209 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
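The "Startup finished" line above splits the boot into kernel, initrd, and userspace phases. Summing the rounded per-phase figures (illustrative only; systemd derives the printed total from unrounded values, which accounts for the ~2 ms difference):

    # Phase durations as printed in the "Startup finished" journal entry above.
    phases = {"kernel": 1.420, "initrd": 9.023, "userspace": 7.424}  # seconds, already rounded
    total = sum(phases.values())
    print(f"sum of rounded phases: {total:.3f} s (journal reports 17.869 s)")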
Sep 8 23:52:28.545241 systemd[1]: Started sshd@0-10.0.0.53:22-10.0.0.1:32774.service - OpenSSH per-connection server daemon (10.0.0.1:32774). Sep 8 23:52:28.709756 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 32774 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:52:28.713838 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:52:28.737376 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 8 23:52:28.759792 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 8 23:52:28.774565 systemd-logind[1495]: New session 1 of user core. Sep 8 23:52:28.809027 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 8 23:52:28.833881 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 8 23:52:28.844426 (systemd)[1613]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 8 23:52:28.856548 systemd-logind[1495]: New session c1 of user core. Sep 8 23:52:29.214252 systemd[1613]: Queued start job for default target default.target. Sep 8 23:52:29.227446 systemd[1613]: Created slice app.slice - User Application Slice. Sep 8 23:52:29.227488 systemd[1613]: Reached target paths.target - Paths. Sep 8 23:52:29.227555 systemd[1613]: Reached target timers.target - Timers. Sep 8 23:52:29.240251 systemd[1613]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 8 23:52:29.396720 systemd[1613]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 8 23:52:29.396943 systemd[1613]: Reached target sockets.target - Sockets. Sep 8 23:52:29.397032 systemd[1613]: Reached target basic.target - Basic System. Sep 8 23:52:29.397090 systemd[1613]: Reached target default.target - Main User Target. Sep 8 23:52:29.397163 systemd[1613]: Startup finished in 516ms. Sep 8 23:52:29.398061 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 8 23:52:29.409830 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 8 23:52:29.524872 systemd[1]: Started sshd@1-10.0.0.53:22-10.0.0.1:32778.service - OpenSSH per-connection server daemon (10.0.0.1:32778). Sep 8 23:52:29.659261 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 32778 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:52:29.663283 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:52:29.684867 systemd-logind[1495]: New session 2 of user core. Sep 8 23:52:29.695511 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 8 23:52:29.784865 sshd[1626]: Connection closed by 10.0.0.1 port 32778 Sep 8 23:52:29.782142 sshd-session[1624]: pam_unix(sshd:session): session closed for user core Sep 8 23:52:29.805926 systemd[1]: sshd@1-10.0.0.53:22-10.0.0.1:32778.service: Deactivated successfully. Sep 8 23:52:29.814184 systemd[1]: session-2.scope: Deactivated successfully. Sep 8 23:52:29.816786 systemd-logind[1495]: Session 2 logged out. Waiting for processes to exit. Sep 8 23:52:29.831596 systemd-logind[1495]: Removed session 2. Sep 8 23:52:29.855997 systemd[1]: Started sshd@2-10.0.0.53:22-10.0.0.1:32784.service - OpenSSH per-connection server daemon (10.0.0.1:32784). 
Sep 8 23:52:29.955542 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 32784 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:52:29.959686 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:52:29.989464 systemd-logind[1495]: New session 3 of user core. Sep 8 23:52:30.003486 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 8 23:52:30.076442 sshd[1634]: Connection closed by 10.0.0.1 port 32784 Sep 8 23:52:30.079049 sshd-session[1632]: pam_unix(sshd:session): session closed for user core Sep 8 23:52:30.113996 systemd[1]: sshd@2-10.0.0.53:22-10.0.0.1:32784.service: Deactivated successfully. Sep 8 23:52:30.124344 systemd[1]: session-3.scope: Deactivated successfully. Sep 8 23:52:30.138399 systemd-logind[1495]: Session 3 logged out. Waiting for processes to exit. Sep 8 23:52:30.166694 systemd[1]: Started sshd@3-10.0.0.53:22-10.0.0.1:33092.service - OpenSSH per-connection server daemon (10.0.0.1:33092). Sep 8 23:52:30.173554 systemd-logind[1495]: Removed session 3. Sep 8 23:52:30.255959 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 33092 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:52:30.257206 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:52:30.274818 systemd-logind[1495]: New session 4 of user core. Sep 8 23:52:30.291521 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 8 23:52:30.366323 sshd[1642]: Connection closed by 10.0.0.1 port 33092 Sep 8 23:52:30.364425 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Sep 8 23:52:30.390343 systemd[1]: sshd@3-10.0.0.53:22-10.0.0.1:33092.service: Deactivated successfully. Sep 8 23:52:30.395916 systemd[1]: session-4.scope: Deactivated successfully. Sep 8 23:52:30.404692 systemd-logind[1495]: Session 4 logged out. Waiting for processes to exit. Sep 8 23:52:30.418562 systemd[1]: Started sshd@4-10.0.0.53:22-10.0.0.1:33094.service - OpenSSH per-connection server daemon (10.0.0.1:33094). Sep 8 23:52:30.419741 systemd-logind[1495]: Removed session 4. Sep 8 23:52:30.506740 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 33094 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:52:30.509834 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:52:30.534303 systemd-logind[1495]: New session 5 of user core. Sep 8 23:52:30.548539 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 8 23:52:30.634728 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 8 23:52:30.636879 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:52:32.005547 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 8 23:52:32.005822 (dockerd)[1671]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 8 23:52:32.826576 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 8 23:52:32.836743 dockerd[1671]: time="2025-09-08T23:52:32.836573697Z" level=info msg="Starting up" Sep 8 23:52:32.838050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:52:33.180562 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 8 23:52:33.191440 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:52:33.560323 kubelet[1702]: E0908 23:52:33.560123 1702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:52:33.568740 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:52:33.569017 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:52:33.569488 systemd[1]: kubelet.service: Consumed 466ms CPU time, 113M memory peak. Sep 8 23:52:33.866082 dockerd[1671]: time="2025-09-08T23:52:33.865993624Z" level=info msg="Loading containers: start." Sep 8 23:52:34.098129 kernel: Initializing XFRM netlink socket Sep 8 23:52:34.211944 systemd-networkd[1415]: docker0: Link UP Sep 8 23:52:34.260737 dockerd[1671]: time="2025-09-08T23:52:34.260662541Z" level=info msg="Loading containers: done." Sep 8 23:52:34.282706 dockerd[1671]: time="2025-09-08T23:52:34.282620668Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 8 23:52:34.282965 dockerd[1671]: time="2025-09-08T23:52:34.282774977Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 8 23:52:34.283017 dockerd[1671]: time="2025-09-08T23:52:34.282971005Z" level=info msg="Daemon has completed initialization" Sep 8 23:52:34.327659 dockerd[1671]: time="2025-09-08T23:52:34.327550002Z" level=info msg="API listen on /run/docker.sock" Sep 8 23:52:34.328115 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 8 23:52:35.639064 containerd[1510]: time="2025-09-08T23:52:35.638983817Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 8 23:52:36.363634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount763064974.mount: Deactivated successfully. 
Sep 8 23:52:39.287457 containerd[1510]: time="2025-09-08T23:52:39.287360679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:39.290242 containerd[1510]: time="2025-09-08T23:52:39.290130385Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631" Sep 8 23:52:39.291760 containerd[1510]: time="2025-09-08T23:52:39.291643713Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:39.295926 containerd[1510]: time="2025-09-08T23:52:39.295826539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:39.297666 containerd[1510]: time="2025-09-08T23:52:39.297606027Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 3.658529145s" Sep 8 23:52:39.297666 containerd[1510]: time="2025-09-08T23:52:39.297665719Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 8 23:52:39.299032 containerd[1510]: time="2025-09-08T23:52:39.298540199Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 8 23:52:43.573160 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 8 23:52:43.587398 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:52:43.774396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:52:43.779382 (kubelet)[1945]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:52:44.450055 kubelet[1945]: E0908 23:52:44.449978 1945 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:52:44.454932 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:52:44.455225 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:52:44.455734 systemd[1]: kubelet.service: Consumed 290ms CPU time, 111.1M memory peak. 
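As a rough throughput figure for the kube-apiserver pull logged above, treating containerd's reported image size of 28,076,431 bytes as the amount transferred (this ignores registry round-trips and decompression, so it is only an estimate):

    # Numbers from the 'Pulled image "registry.k8s.io/kube-apiserver:v1.31.12"' entry above.
    size_bytes = 28_076_431
    duration_s = 3.658529145
    print(f"effective pull rate: {size_bytes / duration_s / 2**20:.2f} MiB/s")  # ~7.3 MiB/s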
Sep 8 23:52:47.510088 containerd[1510]: time="2025-09-08T23:52:47.509986696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:47.522082 containerd[1510]: time="2025-09-08T23:52:47.521980332Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681" Sep 8 23:52:47.527152 containerd[1510]: time="2025-09-08T23:52:47.526997914Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:47.544994 containerd[1510]: time="2025-09-08T23:52:47.544360711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:47.546580 containerd[1510]: time="2025-09-08T23:52:47.546467824Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 8.24786661s" Sep 8 23:52:47.546580 containerd[1510]: time="2025-09-08T23:52:47.546561489Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 8 23:52:47.548552 containerd[1510]: time="2025-09-08T23:52:47.547902114Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 8 23:52:50.095759 containerd[1510]: time="2025-09-08T23:52:50.095679676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:50.145264 containerd[1510]: time="2025-09-08T23:52:50.145167001Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427" Sep 8 23:52:50.192928 containerd[1510]: time="2025-09-08T23:52:50.192832688Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:50.214513 containerd[1510]: time="2025-09-08T23:52:50.214454564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:50.215857 containerd[1510]: time="2025-09-08T23:52:50.215826768Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 2.667878858s" Sep 8 23:52:50.215934 containerd[1510]: time="2025-09-08T23:52:50.215866953Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 8 23:52:50.216509 containerd[1510]: 
time="2025-09-08T23:52:50.216484231Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 8 23:52:52.727997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount78978889.mount: Deactivated successfully. Sep 8 23:52:53.555836 containerd[1510]: time="2025-09-08T23:52:53.555768361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:53.557543 containerd[1510]: time="2025-09-08T23:52:53.557507379Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255" Sep 8 23:52:53.558786 containerd[1510]: time="2025-09-08T23:52:53.558749997Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:53.561396 containerd[1510]: time="2025-09-08T23:52:53.561346957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:53.561978 containerd[1510]: time="2025-09-08T23:52:53.561939702Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 3.345424843s" Sep 8 23:52:53.561978 containerd[1510]: time="2025-09-08T23:52:53.561969659Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 8 23:52:53.564366 containerd[1510]: time="2025-09-08T23:52:53.564337400Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 8 23:52:54.462565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1912678530.mount: Deactivated successfully. Sep 8 23:52:54.463931 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 8 23:52:54.473428 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:52:54.664185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:52:54.670436 (kubelet)[1982]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:52:54.997198 kubelet[1982]: E0908 23:52:54.997083 1982 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:52:55.001709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:52:55.002210 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:52:55.002665 systemd[1]: kubelet.service: Consumed 252ms CPU time, 112.9M memory peak. 
Sep 8 23:52:56.806031 containerd[1510]: time="2025-09-08T23:52:56.805950704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:56.808040 containerd[1510]: time="2025-09-08T23:52:56.807986727Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 8 23:52:56.809214 containerd[1510]: time="2025-09-08T23:52:56.809175945Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:56.812818 containerd[1510]: time="2025-09-08T23:52:56.812747225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:56.813830 containerd[1510]: time="2025-09-08T23:52:56.813802989Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.249431183s" Sep 8 23:52:56.813830 containerd[1510]: time="2025-09-08T23:52:56.813829790Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 8 23:52:56.814918 containerd[1510]: time="2025-09-08T23:52:56.814883830Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 8 23:52:57.466128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3482703285.mount: Deactivated successfully. 
Sep 8 23:52:57.479191 containerd[1510]: time="2025-09-08T23:52:57.479083355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:57.481839 containerd[1510]: time="2025-09-08T23:52:57.481716552Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 8 23:52:57.483543 containerd[1510]: time="2025-09-08T23:52:57.483475593Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:57.487812 containerd[1510]: time="2025-09-08T23:52:57.487595752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:57.489364 containerd[1510]: time="2025-09-08T23:52:57.489309037Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 674.388436ms" Sep 8 23:52:57.489364 containerd[1510]: time="2025-09-08T23:52:57.489362298Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 8 23:52:57.489998 containerd[1510]: time="2025-09-08T23:52:57.489963403Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 8 23:52:58.107166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1318408045.mount: Deactivated successfully. Sep 8 23:53:00.066320 containerd[1510]: time="2025-09-08T23:53:00.066050917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:53:00.066769 containerd[1510]: time="2025-09-08T23:53:00.066673801Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 8 23:53:00.068015 containerd[1510]: time="2025-09-08T23:53:00.067969853Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:53:00.071028 containerd[1510]: time="2025-09-08T23:53:00.070969142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:53:00.072214 containerd[1510]: time="2025-09-08T23:53:00.072174181Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.582177224s" Sep 8 23:53:00.072214 containerd[1510]: time="2025-09-08T23:53:00.072207234Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 8 23:53:02.319469 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 8 23:53:02.319725 systemd[1]: kubelet.service: Consumed 252ms CPU time, 112.9M memory peak. Sep 8 23:53:02.335411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:53:02.366962 systemd[1]: Reload requested from client PID 2125 ('systemctl') (unit session-5.scope)... Sep 8 23:53:02.366981 systemd[1]: Reloading... Sep 8 23:53:02.472132 zram_generator::config[2175]: No configuration found. Sep 8 23:53:02.861683 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:53:02.967209 systemd[1]: Reloading finished in 599 ms. Sep 8 23:53:03.017295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:53:03.020947 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:53:03.023212 systemd[1]: kubelet.service: Deactivated successfully. Sep 8 23:53:03.023498 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:53:03.023535 systemd[1]: kubelet.service: Consumed 166ms CPU time, 98.2M memory peak. Sep 8 23:53:03.025176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:53:03.201110 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:53:03.206172 (kubelet)[2219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 8 23:53:03.252465 kubelet[2219]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 8 23:53:03.252465 kubelet[2219]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 8 23:53:03.252465 kubelet[2219]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 8 23:53:03.252928 kubelet[2219]: I0908 23:53:03.252526 2219 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 8 23:53:03.473986 kubelet[2219]: I0908 23:53:03.473808 2219 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 8 23:53:03.473986 kubelet[2219]: I0908 23:53:03.473845 2219 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 8 23:53:03.474192 kubelet[2219]: I0908 23:53:03.474118 2219 server.go:934] "Client rotation is on, will bootstrap in background" Sep 8 23:53:03.499580 kubelet[2219]: E0908 23:53:03.499512 2219 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:53:03.500391 kubelet[2219]: I0908 23:53:03.500357 2219 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 8 23:53:03.508631 kubelet[2219]: E0908 23:53:03.508577 2219 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 8 23:53:03.508631 kubelet[2219]: I0908 23:53:03.508613 2219 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 8 23:53:03.515302 kubelet[2219]: I0908 23:53:03.515259 2219 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 8 23:53:03.516095 kubelet[2219]: I0908 23:53:03.516062 2219 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 8 23:53:03.516297 kubelet[2219]: I0908 23:53:03.516250 2219 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 8 23:53:03.516469 kubelet[2219]: I0908 23:53:03.516285 2219 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 8 23:53:03.516568 kubelet[2219]: I0908 23:53:03.516490 2219 topology_manager.go:138] "Creating topology manager with none policy" Sep 8 23:53:03.516568 kubelet[2219]: I0908 23:53:03.516500 2219 container_manager_linux.go:300] "Creating device plugin manager" Sep 8 23:53:03.516710 kubelet[2219]: I0908 23:53:03.516690 2219 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:53:03.519635 kubelet[2219]: I0908 23:53:03.519604 2219 kubelet.go:408] "Attempting to sync node with API server" Sep 8 23:53:03.519635 kubelet[2219]: I0908 23:53:03.519631 2219 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 8 23:53:03.519696 kubelet[2219]: I0908 23:53:03.519684 2219 kubelet.go:314] "Adding apiserver pod source" Sep 8 23:53:03.519734 kubelet[2219]: I0908 23:53:03.519718 2219 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 8 23:53:03.525132 kubelet[2219]: I0908 23:53:03.522290 2219 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 8 23:53:03.525132 kubelet[2219]: I0908 23:53:03.522711 2219 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 8 23:53:03.525132 kubelet[2219]: W0908 23:53:03.522774 2219 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 8 23:53:03.525132 kubelet[2219]: W0908 23:53:03.523250 2219 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Sep 8 23:53:03.525132 kubelet[2219]: E0908 23:53:03.523333 2219 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:53:03.525132 kubelet[2219]: W0908 23:53:03.523332 2219 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Sep 8 23:53:03.525132 kubelet[2219]: E0908 23:53:03.523401 2219 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:53:03.527123 kubelet[2219]: I0908 23:53:03.526935 2219 server.go:1274] "Started kubelet" Sep 8 23:53:03.527748 kubelet[2219]: I0908 23:53:03.527711 2219 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 8 23:53:03.528840 kubelet[2219]: I0908 23:53:03.528794 2219 server.go:449] "Adding debug handlers to kubelet server" Sep 8 23:53:03.534665 kubelet[2219]: I0908 23:53:03.533942 2219 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 8 23:53:03.534665 kubelet[2219]: I0908 23:53:03.534450 2219 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 8 23:53:03.535364 kubelet[2219]: I0908 23:53:03.535333 2219 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 8 23:53:03.535483 kubelet[2219]: I0908 23:53:03.535460 2219 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 8 23:53:03.538125 kubelet[2219]: E0908 23:53:03.533742 2219 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.53:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.53:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186373c841107884 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-08 23:53:03.526877316 +0000 UTC m=+0.316800239,LastTimestamp:2025-09-08 23:53:03.526877316 +0000 UTC m=+0.316800239,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 8 23:53:03.538125 kubelet[2219]: I0908 23:53:03.536236 2219 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 8 23:53:03.538125 kubelet[2219]: I0908 23:53:03.536328 2219 
desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 8 23:53:03.538125 kubelet[2219]: I0908 23:53:03.536399 2219 reconciler.go:26] "Reconciler: start to sync state" Sep 8 23:53:03.538125 kubelet[2219]: W0908 23:53:03.536690 2219 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Sep 8 23:53:03.538125 kubelet[2219]: E0908 23:53:03.536732 2219 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:53:03.538125 kubelet[2219]: I0908 23:53:03.537005 2219 factory.go:221] Registration of the systemd container factory successfully Sep 8 23:53:03.538416 kubelet[2219]: I0908 23:53:03.537082 2219 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 8 23:53:03.538416 kubelet[2219]: E0908 23:53:03.537616 2219 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 8 23:53:03.538416 kubelet[2219]: E0908 23:53:03.538016 2219 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:53:03.538416 kubelet[2219]: E0908 23:53:03.538031 2219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="200ms" Sep 8 23:53:03.538563 kubelet[2219]: I0908 23:53:03.538531 2219 factory.go:221] Registration of the containerd container factory successfully Sep 8 23:53:03.588147 kubelet[2219]: I0908 23:53:03.588068 2219 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 8 23:53:03.588551 kubelet[2219]: I0908 23:53:03.588522 2219 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 8 23:53:03.588551 kubelet[2219]: I0908 23:53:03.588545 2219 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 8 23:53:03.588619 kubelet[2219]: I0908 23:53:03.588568 2219 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:53:03.590068 kubelet[2219]: I0908 23:53:03.590038 2219 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 8 23:53:03.590124 kubelet[2219]: I0908 23:53:03.590074 2219 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 8 23:53:03.590124 kubelet[2219]: I0908 23:53:03.590108 2219 kubelet.go:2321] "Starting kubelet main sync loop" Sep 8 23:53:03.590195 kubelet[2219]: E0908 23:53:03.590150 2219 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 8 23:53:03.638839 kubelet[2219]: E0908 23:53:03.638780 2219 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:53:03.691030 kubelet[2219]: E0908 23:53:03.690979 2219 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 8 23:53:03.738765 kubelet[2219]: E0908 23:53:03.738651 2219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="400ms" Sep 8 23:53:03.739713 kubelet[2219]: E0908 23:53:03.739667 2219 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:53:03.840349 kubelet[2219]: E0908 23:53:03.840281 2219 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:53:03.891551 kubelet[2219]: E0908 23:53:03.891490 2219 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 8 23:53:03.941080 kubelet[2219]: E0908 23:53:03.941037 2219 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:53:04.042291 kubelet[2219]: E0908 23:53:04.042129 2219 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:53:04.056302 update_engine[1496]: I20250908 23:53:04.056200 1496 update_attempter.cc:509] Updating boot flags... 
Sep 8 23:53:04.140130 kubelet[2219]: E0908 23:53:04.140040 2219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="800ms" Sep 8 23:53:04.143163 kubelet[2219]: E0908 23:53:04.143125 2219 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:53:04.243733 kubelet[2219]: E0908 23:53:04.243683 2219 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:53:04.291956 kubelet[2219]: E0908 23:53:04.291897 2219 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 8 23:53:04.342853 kubelet[2219]: W0908 23:53:04.342796 2219 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Sep 8 23:53:04.342905 kubelet[2219]: E0908 23:53:04.342860 2219 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:53:04.343849 kubelet[2219]: E0908 23:53:04.343813 2219 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:53:04.444546 kubelet[2219]: E0908 23:53:04.444484 2219 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:53:04.512086 kubelet[2219]: W0908 23:53:04.512004 2219 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Sep 8 23:53:04.512283 kubelet[2219]: E0908 23:53:04.512096 2219 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:53:04.530470 kubelet[2219]: I0908 23:53:04.530403 2219 policy_none.go:49] "None policy: Start" Sep 8 23:53:04.531652 kubelet[2219]: I0908 23:53:04.531481 2219 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 8 23:53:04.531652 kubelet[2219]: I0908 23:53:04.531537 2219 state_mem.go:35] "Initializing new in-memory state store" Sep 8 23:53:04.545263 kubelet[2219]: E0908 23:53:04.544990 2219 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:53:04.545285 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Sep 8 23:53:04.547125 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2261) Sep 8 23:53:04.563828 kubelet[2219]: W0908 23:53:04.563751 2219 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Sep 8 23:53:04.563926 kubelet[2219]: E0908 23:53:04.563832 2219 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:53:04.589254 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 8 23:53:04.590124 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2245) Sep 8 23:53:04.635531 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 8 23:53:04.645755 kubelet[2219]: E0908 23:53:04.645711 2219 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:53:04.665367 kubelet[2219]: I0908 23:53:04.665307 2219 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 8 23:53:04.665625 kubelet[2219]: I0908 23:53:04.665591 2219 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 8 23:53:04.665666 kubelet[2219]: I0908 23:53:04.665610 2219 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 8 23:53:04.666265 kubelet[2219]: I0908 23:53:04.665944 2219 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 8 23:53:04.667412 kubelet[2219]: E0908 23:53:04.667382 2219 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 8 23:53:04.762443 kubelet[2219]: W0908 23:53:04.762341 2219 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Sep 8 23:53:04.762599 kubelet[2219]: E0908 23:53:04.762448 2219 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:53:04.767992 kubelet[2219]: I0908 23:53:04.767935 2219 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 8 23:53:04.768258 kubelet[2219]: E0908 23:53:04.768213 2219 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Sep 8 23:53:04.940834 kubelet[2219]: E0908 23:53:04.940660 2219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.53:6443: connect: connection refused" interval="1.6s" Sep 8 23:53:04.970047 kubelet[2219]: I0908 23:53:04.969992 2219 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 8 23:53:04.970347 kubelet[2219]: E0908 23:53:04.970307 2219 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Sep 8 23:53:05.106821 systemd[1]: Created slice kubepods-burstable-pod0bcaf251b9183ea2028f00c9e28ecbd8.slice - libcontainer container kubepods-burstable-pod0bcaf251b9183ea2028f00c9e28ecbd8.slice. Sep 8 23:53:05.122280 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 8 23:53:05.136304 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. Sep 8 23:53:05.145307 kubelet[2219]: I0908 23:53:05.145274 2219 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0bcaf251b9183ea2028f00c9e28ecbd8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0bcaf251b9183ea2028f00c9e28ecbd8\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:53:05.145399 kubelet[2219]: I0908 23:53:05.145308 2219 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0bcaf251b9183ea2028f00c9e28ecbd8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0bcaf251b9183ea2028f00c9e28ecbd8\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:53:05.145399 kubelet[2219]: I0908 23:53:05.145326 2219 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:05.145399 kubelet[2219]: I0908 23:53:05.145340 2219 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:05.145399 kubelet[2219]: I0908 23:53:05.145356 2219 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:05.145399 kubelet[2219]: I0908 23:53:05.145369 2219 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0bcaf251b9183ea2028f00c9e28ecbd8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0bcaf251b9183ea2028f00c9e28ecbd8\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:53:05.145569 kubelet[2219]: I0908 23:53:05.145383 2219 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:05.145569 kubelet[2219]: I0908 23:53:05.145420 2219 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:05.145569 kubelet[2219]: I0908 23:53:05.145466 2219 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 8 23:53:05.372356 kubelet[2219]: I0908 23:53:05.372322 2219 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 8 23:53:05.372710 kubelet[2219]: E0908 23:53:05.372674 2219 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Sep 8 23:53:05.373994 kubelet[2219]: W0908 23:53:05.373937 2219 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Sep 8 23:53:05.374046 kubelet[2219]: E0908 23:53:05.373998 2219 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:53:05.420554 kubelet[2219]: E0908 23:53:05.420500 2219 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:05.421174 containerd[1510]: time="2025-09-08T23:53:05.421130285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0bcaf251b9183ea2028f00c9e28ecbd8,Namespace:kube-system,Attempt:0,}" Sep 8 23:53:05.432502 kubelet[2219]: E0908 23:53:05.432469 2219 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:05.432819 containerd[1510]: time="2025-09-08T23:53:05.432785691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 8 23:53:05.439350 kubelet[2219]: E0908 23:53:05.439322 2219 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:05.439782 containerd[1510]: time="2025-09-08T23:53:05.439750216Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 8 23:53:05.553545 kubelet[2219]: E0908 23:53:05.553500 2219 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:53:06.174882 kubelet[2219]: I0908 23:53:06.174831 2219 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 8 23:53:06.175201 kubelet[2219]: E0908 23:53:06.175173 2219 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Sep 8 23:53:06.384530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount419393814.mount: Deactivated successfully. Sep 8 23:53:06.391213 containerd[1510]: time="2025-09-08T23:53:06.391159871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:53:06.393991 containerd[1510]: time="2025-09-08T23:53:06.393918952Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 8 23:53:06.394970 containerd[1510]: time="2025-09-08T23:53:06.394933431Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:53:06.396967 containerd[1510]: time="2025-09-08T23:53:06.396916153Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:53:06.397679 containerd[1510]: time="2025-09-08T23:53:06.397579508Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 8 23:53:06.398590 containerd[1510]: time="2025-09-08T23:53:06.398561476Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:53:06.399466 containerd[1510]: time="2025-09-08T23:53:06.399437184Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 8 23:53:06.400332 containerd[1510]: time="2025-09-08T23:53:06.400293544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:53:06.401279 containerd[1510]: time="2025-09-08T23:53:06.401240936Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 961.425436ms" Sep 8 23:53:06.403786 containerd[1510]: time="2025-09-08T23:53:06.403756817Z" level=info 
msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 970.905132ms" Sep 8 23:53:06.407123 containerd[1510]: time="2025-09-08T23:53:06.407082390Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 985.838128ms" Sep 8 23:53:06.551303 kubelet[2219]: E0908 23:53:06.551135 2219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="3.2s" Sep 8 23:53:06.726030 kubelet[2219]: W0908 23:53:06.725956 2219 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Sep 8 23:53:06.726030 kubelet[2219]: E0908 23:53:06.726034 2219 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:53:06.757548 kubelet[2219]: W0908 23:53:06.757467 2219 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Sep 8 23:53:06.757548 kubelet[2219]: E0908 23:53:06.757545 2219 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:53:06.876004 containerd[1510]: time="2025-09-08T23:53:06.875628891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:53:06.876004 containerd[1510]: time="2025-09-08T23:53:06.875565250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:53:06.876004 containerd[1510]: time="2025-09-08T23:53:06.875626787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:53:06.876004 containerd[1510]: time="2025-09-08T23:53:06.875641715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:06.876004 containerd[1510]: time="2025-09-08T23:53:06.875760810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:06.876632 containerd[1510]: time="2025-09-08T23:53:06.876417072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:53:06.876632 containerd[1510]: time="2025-09-08T23:53:06.876474060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:06.876692 containerd[1510]: time="2025-09-08T23:53:06.876646336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:06.887937 containerd[1510]: time="2025-09-08T23:53:06.886776634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:53:06.891227 containerd[1510]: time="2025-09-08T23:53:06.891161411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:53:06.891227 containerd[1510]: time="2025-09-08T23:53:06.891186227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:06.891354 containerd[1510]: time="2025-09-08T23:53:06.891270708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:06.907331 systemd[1]: Started cri-containerd-1971751df437101d89908434befd269b469986ac9d0bccdd508bc858b4145700.scope - libcontainer container 1971751df437101d89908434befd269b469986ac9d0bccdd508bc858b4145700. Sep 8 23:53:06.911199 systemd[1]: Started cri-containerd-380cc6cb0b471121d53d24f58178cb4f38b3221ddc27e9fe80f39a8d5f881946.scope - libcontainer container 380cc6cb0b471121d53d24f58178cb4f38b3221ddc27e9fe80f39a8d5f881946. Sep 8 23:53:06.936245 systemd[1]: Started cri-containerd-edc19a77b9d4807700b561c5c4c0ac415214f59b9d276e971100e71ef55ddb42.scope - libcontainer container edc19a77b9d4807700b561c5c4c0ac415214f59b9d276e971100e71ef55ddb42. 
Sep 8 23:53:06.981429 containerd[1510]: time="2025-09-08T23:53:06.981390698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"1971751df437101d89908434befd269b469986ac9d0bccdd508bc858b4145700\"" Sep 8 23:53:06.982832 kubelet[2219]: E0908 23:53:06.982628 2219 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:06.985148 containerd[1510]: time="2025-09-08T23:53:06.985092793Z" level=info msg="CreateContainer within sandbox \"1971751df437101d89908434befd269b469986ac9d0bccdd508bc858b4145700\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 8 23:53:06.990571 containerd[1510]: time="2025-09-08T23:53:06.990539199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0bcaf251b9183ea2028f00c9e28ecbd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"380cc6cb0b471121d53d24f58178cb4f38b3221ddc27e9fe80f39a8d5f881946\"" Sep 8 23:53:06.991255 kubelet[2219]: E0908 23:53:06.991173 2219 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:06.994080 containerd[1510]: time="2025-09-08T23:53:06.994001270Z" level=info msg="CreateContainer within sandbox \"380cc6cb0b471121d53d24f58178cb4f38b3221ddc27e9fe80f39a8d5f881946\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 8 23:53:07.092934 containerd[1510]: time="2025-09-08T23:53:07.092890619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"edc19a77b9d4807700b561c5c4c0ac415214f59b9d276e971100e71ef55ddb42\"" Sep 8 23:53:07.093609 kubelet[2219]: E0908 23:53:07.093584 2219 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:07.093857 containerd[1510]: time="2025-09-08T23:53:07.093825507Z" level=info msg="CreateContainer within sandbox \"1971751df437101d89908434befd269b469986ac9d0bccdd508bc858b4145700\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"244fdb22a208e9c9f38a0923e0cd960df760a3019670cc4164e480ae4a89f0ac\"" Sep 8 23:53:07.094275 containerd[1510]: time="2025-09-08T23:53:07.094252545Z" level=info msg="StartContainer for \"244fdb22a208e9c9f38a0923e0cd960df760a3019670cc4164e480ae4a89f0ac\"" Sep 8 23:53:07.094940 containerd[1510]: time="2025-09-08T23:53:07.094906371Z" level=info msg="CreateContainer within sandbox \"edc19a77b9d4807700b561c5c4c0ac415214f59b9d276e971100e71ef55ddb42\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 8 23:53:07.096022 containerd[1510]: time="2025-09-08T23:53:07.095986654Z" level=info msg="CreateContainer within sandbox \"380cc6cb0b471121d53d24f58178cb4f38b3221ddc27e9fe80f39a8d5f881946\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0f080df3f79e274d8c2cc878f50deb4d7202308c55db36f1986965febd796970\"" Sep 8 23:53:07.103980 containerd[1510]: time="2025-09-08T23:53:07.103932640Z" level=info msg="StartContainer for \"0f080df3f79e274d8c2cc878f50deb4d7202308c55db36f1986965febd796970\"" Sep 8 23:53:07.110485 
containerd[1510]: time="2025-09-08T23:53:07.110432553Z" level=info msg="CreateContainer within sandbox \"edc19a77b9d4807700b561c5c4c0ac415214f59b9d276e971100e71ef55ddb42\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d34f47fc9c38c36eefc898b83fbf6e83e6c48a37eed2729ebd9ca46cdc8960d0\"" Sep 8 23:53:07.111026 containerd[1510]: time="2025-09-08T23:53:07.111002281Z" level=info msg="StartContainer for \"d34f47fc9c38c36eefc898b83fbf6e83e6c48a37eed2729ebd9ca46cdc8960d0\"" Sep 8 23:53:07.125465 systemd[1]: Started cri-containerd-244fdb22a208e9c9f38a0923e0cd960df760a3019670cc4164e480ae4a89f0ac.scope - libcontainer container 244fdb22a208e9c9f38a0923e0cd960df760a3019670cc4164e480ae4a89f0ac. Sep 8 23:53:07.141234 systemd[1]: Started cri-containerd-0f080df3f79e274d8c2cc878f50deb4d7202308c55db36f1986965febd796970.scope - libcontainer container 0f080df3f79e274d8c2cc878f50deb4d7202308c55db36f1986965febd796970. Sep 8 23:53:07.147239 systemd[1]: Started cri-containerd-d34f47fc9c38c36eefc898b83fbf6e83e6c48a37eed2729ebd9ca46cdc8960d0.scope - libcontainer container d34f47fc9c38c36eefc898b83fbf6e83e6c48a37eed2729ebd9ca46cdc8960d0. Sep 8 23:53:07.198418 containerd[1510]: time="2025-09-08T23:53:07.198234147Z" level=info msg="StartContainer for \"244fdb22a208e9c9f38a0923e0cd960df760a3019670cc4164e480ae4a89f0ac\" returns successfully" Sep 8 23:53:07.205066 containerd[1510]: time="2025-09-08T23:53:07.204520365Z" level=info msg="StartContainer for \"d34f47fc9c38c36eefc898b83fbf6e83e6c48a37eed2729ebd9ca46cdc8960d0\" returns successfully" Sep 8 23:53:07.213169 containerd[1510]: time="2025-09-08T23:53:07.212914729Z" level=info msg="StartContainer for \"0f080df3f79e274d8c2cc878f50deb4d7202308c55db36f1986965febd796970\" returns successfully" Sep 8 23:53:07.604553 kubelet[2219]: E0908 23:53:07.604510 2219 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:07.607419 kubelet[2219]: E0908 23:53:07.607388 2219 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:07.608142 kubelet[2219]: E0908 23:53:07.608019 2219 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:07.777137 kubelet[2219]: I0908 23:53:07.777037 2219 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 8 23:53:08.555227 kubelet[2219]: I0908 23:53:08.555161 2219 apiserver.go:52] "Watching apiserver" Sep 8 23:53:08.582584 kubelet[2219]: I0908 23:53:08.582520 2219 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 8 23:53:08.582584 kubelet[2219]: E0908 23:53:08.582562 2219 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 8 23:53:08.583822 kubelet[2219]: E0908 23:53:08.583698 2219 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.186373c841107884 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-08 23:53:03.526877316 +0000 UTC m=+0.316800239,LastTimestamp:2025-09-08 23:53:03.526877316 +0000 UTC m=+0.316800239,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 8 23:53:08.636639 kubelet[2219]: I0908 23:53:08.636577 2219 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 8 23:53:08.655132 kubelet[2219]: E0908 23:53:08.654329 2219 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.186373c841b434d9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-08 23:53:03.537607897 +0000 UTC m=+0.327530810,LastTimestamp:2025-09-08 23:53:03.537607897 +0000 UTC m=+0.327530810,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 8 23:53:08.655132 kubelet[2219]: E0908 23:53:08.654544 2219 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 8 23:53:08.655132 kubelet[2219]: E0908 23:53:08.654727 2219 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:10.680258 systemd[1]: Reload requested from client PID 2521 ('systemctl') (unit session-5.scope)... Sep 8 23:53:10.680275 systemd[1]: Reloading... Sep 8 23:53:10.767135 zram_generator::config[2565]: No configuration found. Sep 8 23:53:10.885743 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:53:11.005693 systemd[1]: Reloading finished in 325 ms. Sep 8 23:53:11.034709 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:53:11.050763 systemd[1]: kubelet.service: Deactivated successfully. Sep 8 23:53:11.051146 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:53:11.051203 systemd[1]: kubelet.service: Consumed 962ms CPU time, 133.3M memory peak. Sep 8 23:53:11.067614 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:53:11.298061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:53:11.311964 (kubelet)[2610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 8 23:53:11.372332 kubelet[2610]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 8 23:53:11.372332 kubelet[2610]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 8 23:53:11.372332 kubelet[2610]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 8 23:53:11.372840 kubelet[2610]: I0908 23:53:11.372376 2610 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 8 23:53:11.380267 kubelet[2610]: I0908 23:53:11.380208 2610 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 8 23:53:11.380267 kubelet[2610]: I0908 23:53:11.380270 2610 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 8 23:53:11.380717 kubelet[2610]: I0908 23:53:11.380683 2610 server.go:934] "Client rotation is on, will bootstrap in background" Sep 8 23:53:11.382604 kubelet[2610]: I0908 23:53:11.382547 2610 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 8 23:53:11.385982 kubelet[2610]: I0908 23:53:11.385918 2610 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 8 23:53:11.390632 kubelet[2610]: E0908 23:53:11.390583 2610 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 8 23:53:11.390632 kubelet[2610]: I0908 23:53:11.390617 2610 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 8 23:53:11.400776 kubelet[2610]: I0908 23:53:11.400712 2610 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 8 23:53:11.401122 kubelet[2610]: I0908 23:53:11.400963 2610 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 8 23:53:11.401294 kubelet[2610]: I0908 23:53:11.401236 2610 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 8 23:53:11.401496 kubelet[2610]: I0908 23:53:11.401277 2610 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 8 23:53:11.401603 kubelet[2610]: I0908 23:53:11.401501 2610 topology_manager.go:138] "Creating topology manager with none policy" Sep 8 23:53:11.401603 kubelet[2610]: I0908 23:53:11.401511 2610 container_manager_linux.go:300] "Creating device plugin manager" Sep 8 23:53:11.401603 kubelet[2610]: I0908 23:53:11.401543 2610 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:53:11.401701 kubelet[2610]: I0908 23:53:11.401684 2610 kubelet.go:408] "Attempting to sync node with API server" Sep 8 23:53:11.401701 kubelet[2610]: I0908 23:53:11.401698 2610 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 8 23:53:11.401763 kubelet[2610]: I0908 23:53:11.401743 2610 kubelet.go:314] "Adding apiserver pod source" Sep 8 23:53:11.401763 kubelet[2610]: I0908 23:53:11.401759 2610 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 8 23:53:11.402903 kubelet[2610]: I0908 23:53:11.402609 2610 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 8 23:53:11.403474 kubelet[2610]: I0908 23:53:11.403436 2610 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 8 23:53:11.404034 kubelet[2610]: I0908 23:53:11.404009 2610 server.go:1274] "Started kubelet" Sep 8 23:53:11.408865 kubelet[2610]: I0908 23:53:11.406954 2610 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 8 23:53:11.408865 kubelet[2610]: I0908 23:53:11.406967 2610 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 8 23:53:11.408865 kubelet[2610]: I0908 23:53:11.406988 2610 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 8 23:53:11.408865 kubelet[2610]: I0908 23:53:11.407564 2610 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 8 23:53:11.409833 kubelet[2610]: I0908 23:53:11.409812 2610 server.go:449] "Adding debug handlers to kubelet server" Sep 8 23:53:11.412573 kubelet[2610]: I0908 23:53:11.410210 2610 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 8 23:53:11.415042 kubelet[2610]: I0908 23:53:11.414998 2610 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 8 23:53:11.415350 kubelet[2610]: I0908 23:53:11.415155 2610 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 8 23:53:11.415350 kubelet[2610]: I0908 23:53:11.415284 2610 reconciler.go:26] "Reconciler: start to sync state" Sep 8 23:53:11.415591 kubelet[2610]: E0908 23:53:11.415533 2610 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:53:11.415774 kubelet[2610]: E0908 23:53:11.415743 2610 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 8 23:53:11.417543 kubelet[2610]: I0908 23:53:11.417512 2610 factory.go:221] Registration of the systemd container factory successfully Sep 8 23:53:11.418500 kubelet[2610]: I0908 23:53:11.418457 2610 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 8 23:53:11.422494 kubelet[2610]: I0908 23:53:11.422453 2610 factory.go:221] Registration of the containerd container factory successfully Sep 8 23:53:11.435086 kubelet[2610]: I0908 23:53:11.435029 2610 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 8 23:53:11.437283 kubelet[2610]: I0908 23:53:11.437257 2610 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 8 23:53:11.437915 kubelet[2610]: I0908 23:53:11.437435 2610 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 8 23:53:11.437915 kubelet[2610]: I0908 23:53:11.437468 2610 kubelet.go:2321] "Starting kubelet main sync loop" Sep 8 23:53:11.437915 kubelet[2610]: E0908 23:53:11.437550 2610 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 8 23:53:11.474725 kubelet[2610]: I0908 23:53:11.474657 2610 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 8 23:53:11.474725 kubelet[2610]: I0908 23:53:11.474710 2610 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 8 23:53:11.474967 kubelet[2610]: I0908 23:53:11.474779 2610 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:53:11.475089 kubelet[2610]: I0908 23:53:11.475053 2610 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 8 23:53:11.475177 kubelet[2610]: I0908 23:53:11.475084 2610 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 8 23:53:11.475177 kubelet[2610]: I0908 23:53:11.475133 2610 policy_none.go:49] "None policy: Start" Sep 8 23:53:11.475915 kubelet[2610]: I0908 23:53:11.475880 2610 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 8 23:53:11.475915 kubelet[2610]: I0908 23:53:11.475910 2610 state_mem.go:35] "Initializing new in-memory state store" Sep 8 23:53:11.476091 kubelet[2610]: I0908 23:53:11.476077 2610 state_mem.go:75] "Updated machine memory state" Sep 8 23:53:11.481586 kubelet[2610]: I0908 23:53:11.481511 2610 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 8 23:53:11.481866 kubelet[2610]: I0908 23:53:11.481836 2610 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 8 23:53:11.481941 kubelet[2610]: I0908 23:53:11.481871 2610 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 8 23:53:11.482226 kubelet[2610]: I0908 23:53:11.482208 2610 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 8 23:53:11.587976 kubelet[2610]: I0908 23:53:11.587897 2610 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 8 23:53:11.596820 kubelet[2610]: I0908 23:53:11.596764 2610 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 8 23:53:11.597002 kubelet[2610]: I0908 23:53:11.596884 2610 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 8 23:53:11.716133 kubelet[2610]: I0908 23:53:11.716054 2610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0bcaf251b9183ea2028f00c9e28ecbd8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0bcaf251b9183ea2028f00c9e28ecbd8\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:53:11.716133 kubelet[2610]: I0908 23:53:11.716125 2610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:11.716133 kubelet[2610]: I0908 23:53:11.716149 2610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:11.716406 kubelet[2610]: I0908 23:53:11.716165 2610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0bcaf251b9183ea2028f00c9e28ecbd8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0bcaf251b9183ea2028f00c9e28ecbd8\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:53:11.716406 kubelet[2610]: I0908 23:53:11.716180 2610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0bcaf251b9183ea2028f00c9e28ecbd8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0bcaf251b9183ea2028f00c9e28ecbd8\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:53:11.716406 kubelet[2610]: I0908 23:53:11.716195 2610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:11.716406 kubelet[2610]: I0908 23:53:11.716222 2610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:11.716406 kubelet[2610]: I0908 23:53:11.716245 2610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:11.716549 kubelet[2610]: I0908 23:53:11.716264 2610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 8 23:53:11.847295 kubelet[2610]: E0908 23:53:11.846562 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:11.847295 kubelet[2610]: E0908 23:53:11.846630 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:11.847295 kubelet[2610]: E0908 23:53:11.846986 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:12.403031 kubelet[2610]: I0908 23:53:12.402980 2610 apiserver.go:52] "Watching apiserver" Sep 8 23:53:12.416214 kubelet[2610]: I0908 23:53:12.416183 2610 desired_state_of_world_populator.go:155] "Finished 
populating initial desired state of world" Sep 8 23:53:12.453321 kubelet[2610]: E0908 23:53:12.453231 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:12.589940 kubelet[2610]: E0908 23:53:12.589899 2610 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 8 23:53:12.590128 kubelet[2610]: E0908 23:53:12.590089 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:12.590747 kubelet[2610]: E0908 23:53:12.590209 2610 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 8 23:53:12.590747 kubelet[2610]: E0908 23:53:12.590305 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:12.590747 kubelet[2610]: I0908 23:53:12.589907 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.58987843 podStartE2EDuration="1.58987843s" podCreationTimestamp="2025-09-08 23:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:53:12.589685756 +0000 UTC m=+1.267896357" watchObservedRunningTime="2025-09-08 23:53:12.58987843 +0000 UTC m=+1.268089031" Sep 8 23:53:12.602275 kubelet[2610]: I0908 23:53:12.602135 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.6020928269999999 podStartE2EDuration="1.602092827s" podCreationTimestamp="2025-09-08 23:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:53:12.600408178 +0000 UTC m=+1.278618789" watchObservedRunningTime="2025-09-08 23:53:12.602092827 +0000 UTC m=+1.280303428" Sep 8 23:53:12.607676 kubelet[2610]: I0908 23:53:12.607586 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.6075161 podStartE2EDuration="1.6075161s" podCreationTimestamp="2025-09-08 23:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:53:12.607421963 +0000 UTC m=+1.285632564" watchObservedRunningTime="2025-09-08 23:53:12.6075161 +0000 UTC m=+1.285726701" Sep 8 23:53:13.005920 sudo[1651]: pam_unix(sudo:session): session closed for user root Sep 8 23:53:13.007410 sshd[1650]: Connection closed by 10.0.0.1 port 33094 Sep 8 23:53:13.008187 sshd-session[1647]: pam_unix(sshd:session): session closed for user core Sep 8 23:53:13.012247 systemd[1]: sshd@4-10.0.0.53:22-10.0.0.1:33094.service: Deactivated successfully. Sep 8 23:53:13.014921 systemd[1]: session-5.scope: Deactivated successfully. Sep 8 23:53:13.015191 systemd[1]: session-5.scope: Consumed 4.904s CPU time, 214.8M memory peak. Sep 8 23:53:13.016610 systemd-logind[1495]: Session 5 logged out. Waiting for processes to exit. Sep 8 23:53:13.017650 systemd-logind[1495]: Removed session 5. 
Sep 8 23:53:13.454945 kubelet[2610]: E0908 23:53:13.454893 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:13.455463 kubelet[2610]: E0908 23:53:13.454958 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:15.802494 kubelet[2610]: I0908 23:53:15.802441 2610 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 8 23:53:15.802914 containerd[1510]: time="2025-09-08T23:53:15.802782204Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 8 23:53:15.803207 kubelet[2610]: I0908 23:53:15.802990 2610 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 8 23:53:15.813694 kubelet[2610]: E0908 23:53:15.813665 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:16.222290 systemd[1]: Created slice kubepods-besteffort-pod2be36191_0227_44bd_9784_608a54bfe8d6.slice - libcontainer container kubepods-besteffort-pod2be36191_0227_44bd_9784_608a54bfe8d6.slice. Sep 8 23:53:16.235415 systemd[1]: Created slice kubepods-burstable-podbdc1ae7b_8133_4c6d_bb27_2a4928eb2402.slice - libcontainer container kubepods-burstable-podbdc1ae7b_8133_4c6d_bb27_2a4928eb2402.slice. Sep 8 23:53:16.243883 kubelet[2610]: I0908 23:53:16.243839 2610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bdc1ae7b-8133-4c6d-bb27-2a4928eb2402-run\") pod \"kube-flannel-ds-dxftk\" (UID: \"bdc1ae7b-8133-4c6d-bb27-2a4928eb2402\") " pod="kube-flannel/kube-flannel-ds-dxftk" Sep 8 23:53:16.243883 kubelet[2610]: I0908 23:53:16.243884 2610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bdc1ae7b-8133-4c6d-bb27-2a4928eb2402-xtables-lock\") pod \"kube-flannel-ds-dxftk\" (UID: \"bdc1ae7b-8133-4c6d-bb27-2a4928eb2402\") " pod="kube-flannel/kube-flannel-ds-dxftk" Sep 8 23:53:16.243883 kubelet[2610]: I0908 23:53:16.243902 2610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2be36191-0227-44bd-9784-608a54bfe8d6-lib-modules\") pod \"kube-proxy-4fzxv\" (UID: \"2be36191-0227-44bd-9784-608a54bfe8d6\") " pod="kube-system/kube-proxy-4fzxv" Sep 8 23:53:16.244153 kubelet[2610]: I0908 23:53:16.243918 2610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/bdc1ae7b-8133-4c6d-bb27-2a4928eb2402-flannel-cfg\") pod \"kube-flannel-ds-dxftk\" (UID: \"bdc1ae7b-8133-4c6d-bb27-2a4928eb2402\") " pod="kube-flannel/kube-flannel-ds-dxftk" Sep 8 23:53:16.244153 kubelet[2610]: I0908 23:53:16.243934 2610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgt9d\" (UniqueName: \"kubernetes.io/projected/bdc1ae7b-8133-4c6d-bb27-2a4928eb2402-kube-api-access-sgt9d\") pod \"kube-flannel-ds-dxftk\" (UID: \"bdc1ae7b-8133-4c6d-bb27-2a4928eb2402\") " pod="kube-flannel/kube-flannel-ds-dxftk" 
Sep 8 23:53:16.244153 kubelet[2610]: I0908 23:53:16.243950 2610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2be36191-0227-44bd-9784-608a54bfe8d6-kube-proxy\") pod \"kube-proxy-4fzxv\" (UID: \"2be36191-0227-44bd-9784-608a54bfe8d6\") " pod="kube-system/kube-proxy-4fzxv" Sep 8 23:53:16.244153 kubelet[2610]: I0908 23:53:16.243965 2610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2be36191-0227-44bd-9784-608a54bfe8d6-xtables-lock\") pod \"kube-proxy-4fzxv\" (UID: \"2be36191-0227-44bd-9784-608a54bfe8d6\") " pod="kube-system/kube-proxy-4fzxv" Sep 8 23:53:16.244153 kubelet[2610]: I0908 23:53:16.243983 2610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgbwk\" (UniqueName: \"kubernetes.io/projected/2be36191-0227-44bd-9784-608a54bfe8d6-kube-api-access-wgbwk\") pod \"kube-proxy-4fzxv\" (UID: \"2be36191-0227-44bd-9784-608a54bfe8d6\") " pod="kube-system/kube-proxy-4fzxv" Sep 8 23:53:16.244329 kubelet[2610]: I0908 23:53:16.243999 2610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/bdc1ae7b-8133-4c6d-bb27-2a4928eb2402-cni-plugin\") pod \"kube-flannel-ds-dxftk\" (UID: \"bdc1ae7b-8133-4c6d-bb27-2a4928eb2402\") " pod="kube-flannel/kube-flannel-ds-dxftk" Sep 8 23:53:16.244329 kubelet[2610]: I0908 23:53:16.244016 2610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/bdc1ae7b-8133-4c6d-bb27-2a4928eb2402-cni\") pod \"kube-flannel-ds-dxftk\" (UID: \"bdc1ae7b-8133-4c6d-bb27-2a4928eb2402\") " pod="kube-flannel/kube-flannel-ds-dxftk" Sep 8 23:53:16.352114 kubelet[2610]: E0908 23:53:16.352034 2610 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 8 23:53:16.352114 kubelet[2610]: E0908 23:53:16.352080 2610 projected.go:194] Error preparing data for projected volume kube-api-access-wgbwk for pod kube-system/kube-proxy-4fzxv: configmap "kube-root-ca.crt" not found Sep 8 23:53:16.352289 kubelet[2610]: E0908 23:53:16.352226 2610 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2be36191-0227-44bd-9784-608a54bfe8d6-kube-api-access-wgbwk podName:2be36191-0227-44bd-9784-608a54bfe8d6 nodeName:}" failed. No retries permitted until 2025-09-08 23:53:16.852165681 +0000 UTC m=+5.530376282 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wgbwk" (UniqueName: "kubernetes.io/projected/2be36191-0227-44bd-9784-608a54bfe8d6-kube-api-access-wgbwk") pod "kube-proxy-4fzxv" (UID: "2be36191-0227-44bd-9784-608a54bfe8d6") : configmap "kube-root-ca.crt" not found Sep 8 23:53:16.352289 kubelet[2610]: E0908 23:53:16.352272 2610 projected.go:288] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 8 23:53:16.352391 kubelet[2610]: E0908 23:53:16.352302 2610 projected.go:194] Error preparing data for projected volume kube-api-access-sgt9d for pod kube-flannel/kube-flannel-ds-dxftk: configmap "kube-root-ca.crt" not found Sep 8 23:53:16.352391 kubelet[2610]: E0908 23:53:16.352348 2610 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdc1ae7b-8133-4c6d-bb27-2a4928eb2402-kube-api-access-sgt9d podName:bdc1ae7b-8133-4c6d-bb27-2a4928eb2402 nodeName:}" failed. No retries permitted until 2025-09-08 23:53:16.852330983 +0000 UTC m=+5.530541574 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sgt9d" (UniqueName: "kubernetes.io/projected/bdc1ae7b-8133-4c6d-bb27-2a4928eb2402-kube-api-access-sgt9d") pod "kube-flannel-ds-dxftk" (UID: "bdc1ae7b-8133-4c6d-bb27-2a4928eb2402") : configmap "kube-root-ca.crt" not found Sep 8 23:53:17.134477 containerd[1510]: time="2025-09-08T23:53:17.134403098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4fzxv,Uid:2be36191-0227-44bd-9784-608a54bfe8d6,Namespace:kube-system,Attempt:0,}" Sep 8 23:53:17.139481 containerd[1510]: time="2025-09-08T23:53:17.139150722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-dxftk,Uid:bdc1ae7b-8133-4c6d-bb27-2a4928eb2402,Namespace:kube-flannel,Attempt:0,}" Sep 8 23:53:17.181325 containerd[1510]: time="2025-09-08T23:53:17.181016729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:53:17.181325 containerd[1510]: time="2025-09-08T23:53:17.181225792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:53:17.181325 containerd[1510]: time="2025-09-08T23:53:17.181249157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:17.182574 containerd[1510]: time="2025-09-08T23:53:17.182447573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:17.188094 containerd[1510]: time="2025-09-08T23:53:17.184871911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:53:17.188094 containerd[1510]: time="2025-09-08T23:53:17.184942865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:53:17.188094 containerd[1510]: time="2025-09-08T23:53:17.184961009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:17.188094 containerd[1510]: time="2025-09-08T23:53:17.185062990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:17.210537 systemd[1]: Started cri-containerd-e4b83c01addb524ba6e93f358cbf586eb5d478171f6da5c7de835662b3bbd676.scope - libcontainer container e4b83c01addb524ba6e93f358cbf586eb5d478171f6da5c7de835662b3bbd676. Sep 8 23:53:17.215298 systemd[1]: Started cri-containerd-2c1686a4fbe0699b378856823f2152399925db05753786cdb19976a4343898a0.scope - libcontainer container 2c1686a4fbe0699b378856823f2152399925db05753786cdb19976a4343898a0. Sep 8 23:53:17.241087 containerd[1510]: time="2025-09-08T23:53:17.241027958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4fzxv,Uid:2be36191-0227-44bd-9784-608a54bfe8d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4b83c01addb524ba6e93f358cbf586eb5d478171f6da5c7de835662b3bbd676\"" Sep 8 23:53:17.244791 containerd[1510]: time="2025-09-08T23:53:17.244730012Z" level=info msg="CreateContainer within sandbox \"e4b83c01addb524ba6e93f358cbf586eb5d478171f6da5c7de835662b3bbd676\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 8 23:53:17.261624 containerd[1510]: time="2025-09-08T23:53:17.261558503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-dxftk,Uid:bdc1ae7b-8133-4c6d-bb27-2a4928eb2402,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"2c1686a4fbe0699b378856823f2152399925db05753786cdb19976a4343898a0\"" Sep 8 23:53:17.263347 containerd[1510]: time="2025-09-08T23:53:17.263325511Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Sep 8 23:53:17.274741 containerd[1510]: time="2025-09-08T23:53:17.274681554Z" level=info msg="CreateContainer within sandbox \"e4b83c01addb524ba6e93f358cbf586eb5d478171f6da5c7de835662b3bbd676\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c9347d94d43bbaaeb7910d0fc7d15579566d034a3d36dc76ea6a0a0e9ba8e488\"" Sep 8 23:53:17.276681 containerd[1510]: time="2025-09-08T23:53:17.275357207Z" level=info msg="StartContainer for \"c9347d94d43bbaaeb7910d0fc7d15579566d034a3d36dc76ea6a0a0e9ba8e488\"" Sep 8 23:53:17.303291 systemd[1]: Started cri-containerd-c9347d94d43bbaaeb7910d0fc7d15579566d034a3d36dc76ea6a0a0e9ba8e488.scope - libcontainer container c9347d94d43bbaaeb7910d0fc7d15579566d034a3d36dc76ea6a0a0e9ba8e488. Sep 8 23:53:17.347205 containerd[1510]: time="2025-09-08T23:53:17.347037468Z" level=info msg="StartContainer for \"c9347d94d43bbaaeb7910d0fc7d15579566d034a3d36dc76ea6a0a0e9ba8e488\" returns successfully" Sep 8 23:53:18.914887 kubelet[2610]: I0908 23:53:18.914815 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4fzxv" podStartSLOduration=2.914794022 podStartE2EDuration="2.914794022s" podCreationTimestamp="2025-09-08 23:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:53:17.473779283 +0000 UTC m=+6.151989884" watchObservedRunningTime="2025-09-08 23:53:18.914794022 +0000 UTC m=+7.593004623" Sep 8 23:53:19.015824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3637615666.mount: Deactivated successfully. 
Sep 8 23:53:19.055412 containerd[1510]: time="2025-09-08T23:53:19.055334285Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:53:19.056050 containerd[1510]: time="2025-09-08T23:53:19.056001070Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Sep 8 23:53:19.057169 containerd[1510]: time="2025-09-08T23:53:19.057135957Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:53:19.059469 containerd[1510]: time="2025-09-08T23:53:19.059435266Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:53:19.060419 containerd[1510]: time="2025-09-08T23:53:19.060384033Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.79693598s" Sep 8 23:53:19.060473 containerd[1510]: time="2025-09-08T23:53:19.060419910Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Sep 8 23:53:19.062251 containerd[1510]: time="2025-09-08T23:53:19.062222935Z" level=info msg="CreateContainer within sandbox \"2c1686a4fbe0699b378856823f2152399925db05753786cdb19976a4343898a0\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Sep 8 23:53:19.074704 containerd[1510]: time="2025-09-08T23:53:19.074655643Z" level=info msg="CreateContainer within sandbox \"2c1686a4fbe0699b378856823f2152399925db05753786cdb19976a4343898a0\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"c5b068acb1fa60eaf3012a15af82a256e64caebd6597cdace17c978c63f03c1c\"" Sep 8 23:53:19.075198 containerd[1510]: time="2025-09-08T23:53:19.075173358Z" level=info msg="StartContainer for \"c5b068acb1fa60eaf3012a15af82a256e64caebd6597cdace17c978c63f03c1c\"" Sep 8 23:53:19.114277 systemd[1]: Started cri-containerd-c5b068acb1fa60eaf3012a15af82a256e64caebd6597cdace17c978c63f03c1c.scope - libcontainer container c5b068acb1fa60eaf3012a15af82a256e64caebd6597cdace17c978c63f03c1c. Sep 8 23:53:19.148713 systemd[1]: cri-containerd-c5b068acb1fa60eaf3012a15af82a256e64caebd6597cdace17c978c63f03c1c.scope: Deactivated successfully. 
Sep 8 23:53:19.159918 containerd[1510]: time="2025-09-08T23:53:19.159873814Z" level=info msg="StartContainer for \"c5b068acb1fa60eaf3012a15af82a256e64caebd6597cdace17c978c63f03c1c\" returns successfully" Sep 8 23:53:19.207815 containerd[1510]: time="2025-09-08T23:53:19.207640220Z" level=info msg="shim disconnected" id=c5b068acb1fa60eaf3012a15af82a256e64caebd6597cdace17c978c63f03c1c namespace=k8s.io Sep 8 23:53:19.207815 containerd[1510]: time="2025-09-08T23:53:19.207716903Z" level=warning msg="cleaning up after shim disconnected" id=c5b068acb1fa60eaf3012a15af82a256e64caebd6597cdace17c978c63f03c1c namespace=k8s.io Sep 8 23:53:19.207815 containerd[1510]: time="2025-09-08T23:53:19.207735258Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:53:19.468845 containerd[1510]: time="2025-09-08T23:53:19.468684968Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Sep 8 23:53:20.015368 systemd[1]: run-containerd-runc-k8s.io-c5b068acb1fa60eaf3012a15af82a256e64caebd6597cdace17c978c63f03c1c-runc.DxP489.mount: Deactivated successfully. Sep 8 23:53:20.015490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5b068acb1fa60eaf3012a15af82a256e64caebd6597cdace17c978c63f03c1c-rootfs.mount: Deactivated successfully. Sep 8 23:53:21.424736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3685963969.mount: Deactivated successfully. Sep 8 23:53:23.121373 containerd[1510]: time="2025-09-08T23:53:23.121298584Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:53:23.122167 containerd[1510]: time="2025-09-08T23:53:23.122071739Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Sep 8 23:53:23.123504 containerd[1510]: time="2025-09-08T23:53:23.123467294Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:53:23.127078 containerd[1510]: time="2025-09-08T23:53:23.127003166Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:53:23.128210 containerd[1510]: time="2025-09-08T23:53:23.128174469Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 3.6594418s" Sep 8 23:53:23.128275 containerd[1510]: time="2025-09-08T23:53:23.128216478Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Sep 8 23:53:23.130970 containerd[1510]: time="2025-09-08T23:53:23.130905447Z" level=info msg="CreateContainer within sandbox \"2c1686a4fbe0699b378856823f2152399925db05753786cdb19976a4343898a0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 8 23:53:23.146254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1296053706.mount: Deactivated successfully. 
Sep 8 23:53:23.149451 containerd[1510]: time="2025-09-08T23:53:23.149404938Z" level=info msg="CreateContainer within sandbox \"2c1686a4fbe0699b378856823f2152399925db05753786cdb19976a4343898a0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2172392a31830930d0bb03d0639787ba3ddcaf61ad00e41597c6b3fcec23379a\"" Sep 8 23:53:23.150060 containerd[1510]: time="2025-09-08T23:53:23.150013302Z" level=info msg="StartContainer for \"2172392a31830930d0bb03d0639787ba3ddcaf61ad00e41597c6b3fcec23379a\"" Sep 8 23:53:23.188264 systemd[1]: Started cri-containerd-2172392a31830930d0bb03d0639787ba3ddcaf61ad00e41597c6b3fcec23379a.scope - libcontainer container 2172392a31830930d0bb03d0639787ba3ddcaf61ad00e41597c6b3fcec23379a. Sep 8 23:53:23.216727 systemd[1]: cri-containerd-2172392a31830930d0bb03d0639787ba3ddcaf61ad00e41597c6b3fcec23379a.scope: Deactivated successfully. Sep 8 23:53:23.283971 containerd[1510]: time="2025-09-08T23:53:23.283908416Z" level=info msg="StartContainer for \"2172392a31830930d0bb03d0639787ba3ddcaf61ad00e41597c6b3fcec23379a\" returns successfully" Sep 8 23:53:23.300894 kubelet[2610]: I0908 23:53:23.300348 2610 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 8 23:53:23.317624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2172392a31830930d0bb03d0639787ba3ddcaf61ad00e41597c6b3fcec23379a-rootfs.mount: Deactivated successfully. Sep 8 23:53:23.336365 systemd[1]: Created slice kubepods-burstable-pode1024bf1_b6d3_4572_9f9a_a52c59f9ff72.slice - libcontainer container kubepods-burstable-pode1024bf1_b6d3_4572_9f9a_a52c59f9ff72.slice. Sep 8 23:53:23.341349 systemd[1]: Created slice kubepods-burstable-pod012195e3_9cf7_4757_90c7_7bfdc3bffe85.slice - libcontainer container kubepods-burstable-pod012195e3_9cf7_4757_90c7_7bfdc3bffe85.slice. 
Sep 8 23:53:23.488177 kubelet[2610]: I0908 23:53:23.488015 2610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz6rc\" (UniqueName: \"kubernetes.io/projected/012195e3-9cf7-4757-90c7-7bfdc3bffe85-kube-api-access-xz6rc\") pod \"coredns-7c65d6cfc9-685g8\" (UID: \"012195e3-9cf7-4757-90c7-7bfdc3bffe85\") " pod="kube-system/coredns-7c65d6cfc9-685g8" Sep 8 23:53:23.488177 kubelet[2610]: I0908 23:53:23.488053 2610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz79j\" (UniqueName: \"kubernetes.io/projected/e1024bf1-b6d3-4572-9f9a-a52c59f9ff72-kube-api-access-zz79j\") pod \"coredns-7c65d6cfc9-z6znf\" (UID: \"e1024bf1-b6d3-4572-9f9a-a52c59f9ff72\") " pod="kube-system/coredns-7c65d6cfc9-z6znf" Sep 8 23:53:23.488177 kubelet[2610]: I0908 23:53:23.488070 2610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/012195e3-9cf7-4757-90c7-7bfdc3bffe85-config-volume\") pod \"coredns-7c65d6cfc9-685g8\" (UID: \"012195e3-9cf7-4757-90c7-7bfdc3bffe85\") " pod="kube-system/coredns-7c65d6cfc9-685g8" Sep 8 23:53:23.488177 kubelet[2610]: I0908 23:53:23.488087 2610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1024bf1-b6d3-4572-9f9a-a52c59f9ff72-config-volume\") pod \"coredns-7c65d6cfc9-z6znf\" (UID: \"e1024bf1-b6d3-4572-9f9a-a52c59f9ff72\") " pod="kube-system/coredns-7c65d6cfc9-z6znf" Sep 8 23:53:23.718939 containerd[1510]: time="2025-09-08T23:53:23.718655382Z" level=info msg="shim disconnected" id=2172392a31830930d0bb03d0639787ba3ddcaf61ad00e41597c6b3fcec23379a namespace=k8s.io Sep 8 23:53:23.718939 containerd[1510]: time="2025-09-08T23:53:23.718722940Z" level=warning msg="cleaning up after shim disconnected" id=2172392a31830930d0bb03d0639787ba3ddcaf61ad00e41597c6b3fcec23379a namespace=k8s.io Sep 8 23:53:23.718939 containerd[1510]: time="2025-09-08T23:53:23.718732628Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:53:23.941019 containerd[1510]: time="2025-09-08T23:53:23.940955984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-z6znf,Uid:e1024bf1-b6d3-4572-9f9a-a52c59f9ff72,Namespace:kube-system,Attempt:0,}" Sep 8 23:53:23.946593 containerd[1510]: time="2025-09-08T23:53:23.946560558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-685g8,Uid:012195e3-9cf7-4757-90c7-7bfdc3bffe85,Namespace:kube-system,Attempt:0,}" Sep 8 23:53:24.064374 containerd[1510]: time="2025-09-08T23:53:24.064306280Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-685g8,Uid:012195e3-9cf7-4757-90c7-7bfdc3bffe85,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eafb07872ce2c2b5a4a04433c6dca0a4277005846f855ae0da7d0b4b3bba4dae\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Sep 8 23:53:24.064662 kubelet[2610]: E0908 23:53:24.064622 2610 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eafb07872ce2c2b5a4a04433c6dca0a4277005846f855ae0da7d0b4b3bba4dae\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Sep 8 23:53:24.064737 
kubelet[2610]: E0908 23:53:24.064713 2610 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eafb07872ce2c2b5a4a04433c6dca0a4277005846f855ae0da7d0b4b3bba4dae\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7c65d6cfc9-685g8" Sep 8 23:53:24.064767 kubelet[2610]: E0908 23:53:24.064745 2610 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eafb07872ce2c2b5a4a04433c6dca0a4277005846f855ae0da7d0b4b3bba4dae\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7c65d6cfc9-685g8" Sep 8 23:53:24.064840 kubelet[2610]: E0908 23:53:24.064807 2610 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-685g8_kube-system(012195e3-9cf7-4757-90c7-7bfdc3bffe85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-685g8_kube-system(012195e3-9cf7-4757-90c7-7bfdc3bffe85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eafb07872ce2c2b5a4a04433c6dca0a4277005846f855ae0da7d0b4b3bba4dae\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7c65d6cfc9-685g8" podUID="012195e3-9cf7-4757-90c7-7bfdc3bffe85" Sep 8 23:53:24.065395 containerd[1510]: time="2025-09-08T23:53:24.065358128Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-z6znf,Uid:e1024bf1-b6d3-4572-9f9a-a52c59f9ff72,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"facd3a57bb61567a23a79244477a57380c0dbb0b3f09b1ecc61a74cc3de51b47\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Sep 8 23:53:24.065529 kubelet[2610]: E0908 23:53:24.065506 2610 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"facd3a57bb61567a23a79244477a57380c0dbb0b3f09b1ecc61a74cc3de51b47\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Sep 8 23:53:24.065575 kubelet[2610]: E0908 23:53:24.065540 2610 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"facd3a57bb61567a23a79244477a57380c0dbb0b3f09b1ecc61a74cc3de51b47\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7c65d6cfc9-z6znf" Sep 8 23:53:24.065575 kubelet[2610]: E0908 23:53:24.065561 2610 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"facd3a57bb61567a23a79244477a57380c0dbb0b3f09b1ecc61a74cc3de51b47\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7c65d6cfc9-z6znf" Sep 8 23:53:24.065626 kubelet[2610]: E0908 23:53:24.065601 2610 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7c65d6cfc9-z6znf_kube-system(e1024bf1-b6d3-4572-9f9a-a52c59f9ff72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-z6znf_kube-system(e1024bf1-b6d3-4572-9f9a-a52c59f9ff72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"facd3a57bb61567a23a79244477a57380c0dbb0b3f09b1ecc61a74cc3de51b47\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7c65d6cfc9-z6znf" podUID="e1024bf1-b6d3-4572-9f9a-a52c59f9ff72" Sep 8 23:53:24.482113 containerd[1510]: time="2025-09-08T23:53:24.482016754Z" level=info msg="CreateContainer within sandbox \"2c1686a4fbe0699b378856823f2152399925db05753786cdb19976a4343898a0\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Sep 8 23:53:24.501719 containerd[1510]: time="2025-09-08T23:53:24.501659769Z" level=info msg="CreateContainer within sandbox \"2c1686a4fbe0699b378856823f2152399925db05753786cdb19976a4343898a0\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"092c906492d639b7922eb3d7d61f9721a41a5fecd73cc5273c998ea16a12336d\"" Sep 8 23:53:24.502282 containerd[1510]: time="2025-09-08T23:53:24.502243267Z" level=info msg="StartContainer for \"092c906492d639b7922eb3d7d61f9721a41a5fecd73cc5273c998ea16a12336d\"" Sep 8 23:53:24.541276 systemd[1]: Started cri-containerd-092c906492d639b7922eb3d7d61f9721a41a5fecd73cc5273c998ea16a12336d.scope - libcontainer container 092c906492d639b7922eb3d7d61f9721a41a5fecd73cc5273c998ea16a12336d. Sep 8 23:53:24.571687 containerd[1510]: time="2025-09-08T23:53:24.571603323Z" level=info msg="StartContainer for \"092c906492d639b7922eb3d7d61f9721a41a5fecd73cc5273c998ea16a12336d\" returns successfully" Sep 8 23:53:25.608472 kubelet[2610]: I0908 23:53:25.608278 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-dxftk" podStartSLOduration=3.74183091 podStartE2EDuration="9.608258379s" podCreationTimestamp="2025-09-08 23:53:16 +0000 UTC" firstStartedPulling="2025-09-08 23:53:17.262855085 +0000 UTC m=+5.941065686" lastFinishedPulling="2025-09-08 23:53:23.129282554 +0000 UTC m=+11.807493155" observedRunningTime="2025-09-08 23:53:25.608166427 +0000 UTC m=+14.286377038" watchObservedRunningTime="2025-09-08 23:53:25.608258379 +0000 UTC m=+14.286468980" Sep 8 23:53:25.623978 systemd-networkd[1415]: flannel.1: Link UP Sep 8 23:53:25.623995 systemd-networkd[1415]: flannel.1: Gained carrier Sep 8 23:53:27.064348 systemd-networkd[1415]: flannel.1: Gained IPv6LL Sep 8 23:53:35.962973 systemd[1]: Started sshd@5-10.0.0.53:22-10.0.0.1:53172.service - OpenSSH per-connection server daemon (10.0.0.1:53172). Sep 8 23:53:36.042603 sshd[3299]: Accepted publickey for core from 10.0.0.1 port 53172 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:53:36.044723 sshd-session[3299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:53:36.050848 systemd-logind[1495]: New session 6 of user core. Sep 8 23:53:36.058303 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 8 23:53:36.210566 sshd[3301]: Connection closed by 10.0.0.1 port 53172 Sep 8 23:53:36.211879 sshd-session[3299]: pam_unix(sshd:session): session closed for user core Sep 8 23:53:36.218219 systemd[1]: sshd@5-10.0.0.53:22-10.0.0.1:53172.service: Deactivated successfully. Sep 8 23:53:36.221094 systemd[1]: session-6.scope: Deactivated successfully. 
Sep 8 23:53:36.222483 systemd-logind[1495]: Session 6 logged out. Waiting for processes to exit. Sep 8 23:53:36.225506 systemd-logind[1495]: Removed session 6. Sep 8 23:53:36.439484 containerd[1510]: time="2025-09-08T23:53:36.439421931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-z6znf,Uid:e1024bf1-b6d3-4572-9f9a-a52c59f9ff72,Namespace:kube-system,Attempt:0,}" Sep 8 23:53:36.660316 systemd-networkd[1415]: cni0: Link UP Sep 8 23:53:36.660331 systemd-networkd[1415]: cni0: Gained carrier Sep 8 23:53:36.665499 systemd-networkd[1415]: cni0: Lost carrier Sep 8 23:53:36.677646 systemd-networkd[1415]: veth1be9dae1: Link UP Sep 8 23:53:36.680855 kernel: cni0: port 1(veth1be9dae1) entered blocking state Sep 8 23:53:36.680963 kernel: cni0: port 1(veth1be9dae1) entered disabled state Sep 8 23:53:36.680991 kernel: veth1be9dae1: entered allmulticast mode Sep 8 23:53:36.685708 kernel: veth1be9dae1: entered promiscuous mode Sep 8 23:53:36.685761 kernel: cni0: port 1(veth1be9dae1) entered blocking state Sep 8 23:53:36.685796 kernel: cni0: port 1(veth1be9dae1) entered forwarding state Sep 8 23:53:36.685818 kernel: cni0: port 1(veth1be9dae1) entered disabled state Sep 8 23:53:36.693152 kernel: cni0: port 1(veth1be9dae1) entered blocking state Sep 8 23:53:36.693225 kernel: cni0: port 1(veth1be9dae1) entered forwarding state Sep 8 23:53:36.693158 systemd-networkd[1415]: veth1be9dae1: Gained carrier Sep 8 23:53:36.693575 systemd-networkd[1415]: cni0: Gained carrier Sep 8 23:53:36.696408 containerd[1510]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000020938), "name":"cbr0", "type":"bridge"} Sep 8 23:53:36.696408 containerd[1510]: delegateAdd: netconf sent to delegate plugin: Sep 8 23:53:36.730708 containerd[1510]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-09-08T23:53:36.730555574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:53:36.730708 containerd[1510]: time="2025-09-08T23:53:36.730648780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:53:36.730977 containerd[1510]: time="2025-09-08T23:53:36.730679458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:36.730977 containerd[1510]: time="2025-09-08T23:53:36.730786348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:36.766471 systemd[1]: Started cri-containerd-aad2d96531cf6697e9a690ba9b8005ac109e60f6cdfb5df869c533d4bc09ee4c.scope - libcontainer container aad2d96531cf6697e9a690ba9b8005ac109e60f6cdfb5df869c533d4bc09ee4c. 
Sep 8 23:53:36.781885 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 8 23:53:36.811904 containerd[1510]: time="2025-09-08T23:53:36.811853620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-z6znf,Uid:e1024bf1-b6d3-4572-9f9a-a52c59f9ff72,Namespace:kube-system,Attempt:0,} returns sandbox id \"aad2d96531cf6697e9a690ba9b8005ac109e60f6cdfb5df869c533d4bc09ee4c\"" Sep 8 23:53:36.815776 containerd[1510]: time="2025-09-08T23:53:36.815722826Z" level=info msg="CreateContainer within sandbox \"aad2d96531cf6697e9a690ba9b8005ac109e60f6cdfb5df869c533d4bc09ee4c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 8 23:53:36.838045 containerd[1510]: time="2025-09-08T23:53:36.837961663Z" level=info msg="CreateContainer within sandbox \"aad2d96531cf6697e9a690ba9b8005ac109e60f6cdfb5df869c533d4bc09ee4c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e28e48afde824e244e0420f76ae6b6ffdded1629d6766fd0615020fb17861291\"" Sep 8 23:53:36.838684 containerd[1510]: time="2025-09-08T23:53:36.838637742Z" level=info msg="StartContainer for \"e28e48afde824e244e0420f76ae6b6ffdded1629d6766fd0615020fb17861291\"" Sep 8 23:53:36.876511 systemd[1]: Started cri-containerd-e28e48afde824e244e0420f76ae6b6ffdded1629d6766fd0615020fb17861291.scope - libcontainer container e28e48afde824e244e0420f76ae6b6ffdded1629d6766fd0615020fb17861291. Sep 8 23:53:36.913408 containerd[1510]: time="2025-09-08T23:53:36.913267142Z" level=info msg="StartContainer for \"e28e48afde824e244e0420f76ae6b6ffdded1629d6766fd0615020fb17861291\" returns successfully" Sep 8 23:53:37.524550 kubelet[2610]: I0908 23:53:37.524474 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-z6znf" podStartSLOduration=21.524454638 podStartE2EDuration="21.524454638s" podCreationTimestamp="2025-09-08 23:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:53:37.524305869 +0000 UTC m=+26.202516470" watchObservedRunningTime="2025-09-08 23:53:37.524454638 +0000 UTC m=+26.202665239" Sep 8 23:53:38.648351 systemd-networkd[1415]: cni0: Gained IPv6LL Sep 8 23:53:38.648816 systemd-networkd[1415]: veth1be9dae1: Gained IPv6LL Sep 8 23:53:39.439890 containerd[1510]: time="2025-09-08T23:53:39.439604894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-685g8,Uid:012195e3-9cf7-4757-90c7-7bfdc3bffe85,Namespace:kube-system,Attempt:0,}" Sep 8 23:53:39.465913 systemd-networkd[1415]: veth6ba2fbe2: Link UP Sep 8 23:53:39.467358 kernel: cni0: port 2(veth6ba2fbe2) entered blocking state Sep 8 23:53:39.467411 kernel: cni0: port 2(veth6ba2fbe2) entered disabled state Sep 8 23:53:39.468642 kernel: veth6ba2fbe2: entered allmulticast mode Sep 8 23:53:39.468700 kernel: veth6ba2fbe2: entered promiscuous mode Sep 8 23:53:39.475991 kernel: cni0: port 2(veth6ba2fbe2) entered blocking state Sep 8 23:53:39.476053 kernel: cni0: port 2(veth6ba2fbe2) entered forwarding state Sep 8 23:53:39.476604 systemd-networkd[1415]: veth6ba2fbe2: Gained carrier Sep 8 23:53:39.479907 containerd[1510]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 
0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001a938), "name":"cbr0", "type":"bridge"} Sep 8 23:53:39.479907 containerd[1510]: delegateAdd: netconf sent to delegate plugin: Sep 8 23:53:39.506356 containerd[1510]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-09-08T23:53:39.506260411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:53:39.506356 containerd[1510]: time="2025-09-08T23:53:39.506309653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:53:39.506356 containerd[1510]: time="2025-09-08T23:53:39.506320073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:39.506623 containerd[1510]: time="2025-09-08T23:53:39.506396146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:39.538319 systemd[1]: Started cri-containerd-99eb276b31d31d8fb09f42f205213f0d68049cf11ddced0ce195e3b20b5c9fa2.scope - libcontainer container 99eb276b31d31d8fb09f42f205213f0d68049cf11ddced0ce195e3b20b5c9fa2. Sep 8 23:53:39.553565 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 8 23:53:39.589364 containerd[1510]: time="2025-09-08T23:53:39.589310631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-685g8,Uid:012195e3-9cf7-4757-90c7-7bfdc3bffe85,Namespace:kube-system,Attempt:0,} returns sandbox id \"99eb276b31d31d8fb09f42f205213f0d68049cf11ddced0ce195e3b20b5c9fa2\"" Sep 8 23:53:39.592307 containerd[1510]: time="2025-09-08T23:53:39.592265850Z" level=info msg="CreateContainer within sandbox \"99eb276b31d31d8fb09f42f205213f0d68049cf11ddced0ce195e3b20b5c9fa2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 8 23:53:39.619599 containerd[1510]: time="2025-09-08T23:53:39.619530245Z" level=info msg="CreateContainer within sandbox \"99eb276b31d31d8fb09f42f205213f0d68049cf11ddced0ce195e3b20b5c9fa2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fee614b8c8eca33b4a0bf64dfdc77d6ad93ce6ddf75b30f3041caa54eb5cabaf\"" Sep 8 23:53:39.620239 containerd[1510]: time="2025-09-08T23:53:39.620204901Z" level=info msg="StartContainer for \"fee614b8c8eca33b4a0bf64dfdc77d6ad93ce6ddf75b30f3041caa54eb5cabaf\"" Sep 8 23:53:39.655254 systemd[1]: Started cri-containerd-fee614b8c8eca33b4a0bf64dfdc77d6ad93ce6ddf75b30f3041caa54eb5cabaf.scope - libcontainer container fee614b8c8eca33b4a0bf64dfdc77d6ad93ce6ddf75b30f3041caa54eb5cabaf. 
Sep 8 23:53:39.689038 containerd[1510]: time="2025-09-08T23:53:39.688949981Z" level=info msg="StartContainer for \"fee614b8c8eca33b4a0bf64dfdc77d6ad93ce6ddf75b30f3041caa54eb5cabaf\" returns successfully" Sep 8 23:53:40.563381 kubelet[2610]: I0908 23:53:40.562592 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-685g8" podStartSLOduration=24.562568889 podStartE2EDuration="24.562568889s" podCreationTimestamp="2025-09-08 23:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:53:40.537825131 +0000 UTC m=+29.216035752" watchObservedRunningTime="2025-09-08 23:53:40.562568889 +0000 UTC m=+29.240779490" Sep 8 23:53:41.016551 systemd-networkd[1415]: veth6ba2fbe2: Gained IPv6LL Sep 8 23:53:41.237810 systemd[1]: Started sshd@6-10.0.0.53:22-10.0.0.1:58680.service - OpenSSH per-connection server daemon (10.0.0.1:58680). Sep 8 23:53:41.294967 sshd[3590]: Accepted publickey for core from 10.0.0.1 port 58680 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:53:41.298377 sshd-session[3590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:53:41.308957 systemd-logind[1495]: New session 7 of user core. Sep 8 23:53:41.317310 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 8 23:53:41.480145 sshd[3592]: Connection closed by 10.0.0.1 port 58680 Sep 8 23:53:41.480574 sshd-session[3590]: pam_unix(sshd:session): session closed for user core Sep 8 23:53:41.487633 systemd[1]: sshd@6-10.0.0.53:22-10.0.0.1:58680.service: Deactivated successfully. Sep 8 23:53:41.491788 systemd[1]: session-7.scope: Deactivated successfully. Sep 8 23:53:41.494064 systemd-logind[1495]: Session 7 logged out. Waiting for processes to exit. Sep 8 23:53:41.495845 systemd-logind[1495]: Removed session 7. Sep 8 23:53:46.504640 systemd[1]: Started sshd@7-10.0.0.53:22-10.0.0.1:58686.service - OpenSSH per-connection server daemon (10.0.0.1:58686). Sep 8 23:53:46.547770 sshd[3630]: Accepted publickey for core from 10.0.0.1 port 58686 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:53:46.549607 sshd-session[3630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:53:46.553990 systemd-logind[1495]: New session 8 of user core. Sep 8 23:53:46.562249 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 8 23:53:46.674938 sshd[3632]: Connection closed by 10.0.0.1 port 58686 Sep 8 23:53:46.675359 sshd-session[3630]: pam_unix(sshd:session): session closed for user core Sep 8 23:53:46.679333 systemd[1]: sshd@7-10.0.0.53:22-10.0.0.1:58686.service: Deactivated successfully. Sep 8 23:53:46.681660 systemd[1]: session-8.scope: Deactivated successfully. Sep 8 23:53:46.682494 systemd-logind[1495]: Session 8 logged out. Waiting for processes to exit. Sep 8 23:53:46.683496 systemd-logind[1495]: Removed session 8. Sep 8 23:53:51.692206 systemd[1]: Started sshd@8-10.0.0.53:22-10.0.0.1:58870.service - OpenSSH per-connection server daemon (10.0.0.1:58870). Sep 8 23:53:51.736457 sshd[3669]: Accepted publickey for core from 10.0.0.1 port 58870 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:53:51.738396 sshd-session[3669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:53:51.743319 systemd-logind[1495]: New session 9 of user core. 
Sep 8 23:53:51.755295 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 8 23:53:51.870162 sshd[3671]: Connection closed by 10.0.0.1 port 58870 Sep 8 23:53:51.870552 sshd-session[3669]: pam_unix(sshd:session): session closed for user core Sep 8 23:53:51.887674 systemd[1]: sshd@8-10.0.0.53:22-10.0.0.1:58870.service: Deactivated successfully. Sep 8 23:53:51.890026 systemd[1]: session-9.scope: Deactivated successfully. Sep 8 23:53:51.891851 systemd-logind[1495]: Session 9 logged out. Waiting for processes to exit. Sep 8 23:53:51.898408 systemd[1]: Started sshd@9-10.0.0.53:22-10.0.0.1:58874.service - OpenSSH per-connection server daemon (10.0.0.1:58874). Sep 8 23:53:51.899545 systemd-logind[1495]: Removed session 9. Sep 8 23:53:51.939520 sshd[3684]: Accepted publickey for core from 10.0.0.1 port 58874 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:53:51.941577 sshd-session[3684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:53:51.947689 systemd-logind[1495]: New session 10 of user core. Sep 8 23:53:51.959294 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 8 23:53:52.255971 sshd[3687]: Connection closed by 10.0.0.1 port 58874 Sep 8 23:53:52.257064 sshd-session[3684]: pam_unix(sshd:session): session closed for user core Sep 8 23:53:52.271398 systemd[1]: sshd@9-10.0.0.53:22-10.0.0.1:58874.service: Deactivated successfully. Sep 8 23:53:52.273517 systemd[1]: session-10.scope: Deactivated successfully. Sep 8 23:53:52.274260 systemd-logind[1495]: Session 10 logged out. Waiting for processes to exit. Sep 8 23:53:52.283685 systemd[1]: Started sshd@10-10.0.0.53:22-10.0.0.1:58880.service - OpenSSH per-connection server daemon (10.0.0.1:58880). Sep 8 23:53:52.285188 systemd-logind[1495]: Removed session 10. Sep 8 23:53:52.329533 sshd[3706]: Accepted publickey for core from 10.0.0.1 port 58880 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:53:52.331573 sshd-session[3706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:53:52.340008 systemd-logind[1495]: New session 11 of user core. Sep 8 23:53:52.347423 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 8 23:53:52.475153 sshd[3709]: Connection closed by 10.0.0.1 port 58880 Sep 8 23:53:52.475588 sshd-session[3706]: pam_unix(sshd:session): session closed for user core Sep 8 23:53:52.480284 systemd[1]: sshd@10-10.0.0.53:22-10.0.0.1:58880.service: Deactivated successfully. Sep 8 23:53:52.482751 systemd[1]: session-11.scope: Deactivated successfully. Sep 8 23:53:52.483536 systemd-logind[1495]: Session 11 logged out. Waiting for processes to exit. Sep 8 23:53:52.484413 systemd-logind[1495]: Removed session 11. Sep 8 23:53:57.490152 systemd[1]: Started sshd@11-10.0.0.53:22-10.0.0.1:58892.service - OpenSSH per-connection server daemon (10.0.0.1:58892). Sep 8 23:53:57.553685 sshd[3743]: Accepted publickey for core from 10.0.0.1 port 58892 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:53:57.555880 sshd-session[3743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:53:57.560609 systemd-logind[1495]: New session 12 of user core. Sep 8 23:53:57.570259 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 8 23:53:57.689592 sshd[3745]: Connection closed by 10.0.0.1 port 58892 Sep 8 23:53:57.689943 sshd-session[3743]: pam_unix(sshd:session): session closed for user core Sep 8 23:53:57.694275 systemd[1]: sshd@11-10.0.0.53:22-10.0.0.1:58892.service: Deactivated successfully. Sep 8 23:53:57.696760 systemd[1]: session-12.scope: Deactivated successfully. Sep 8 23:53:57.697604 systemd-logind[1495]: Session 12 logged out. Waiting for processes to exit. Sep 8 23:53:57.699059 systemd-logind[1495]: Removed session 12. Sep 8 23:54:02.703654 systemd[1]: Started sshd@12-10.0.0.53:22-10.0.0.1:58524.service - OpenSSH per-connection server daemon (10.0.0.1:58524). Sep 8 23:54:02.748333 sshd[3780]: Accepted publickey for core from 10.0.0.1 port 58524 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:54:02.750402 sshd-session[3780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:02.755425 systemd-logind[1495]: New session 13 of user core. Sep 8 23:54:02.767325 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 8 23:54:02.880601 sshd[3782]: Connection closed by 10.0.0.1 port 58524 Sep 8 23:54:02.881045 sshd-session[3780]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:02.895415 systemd[1]: sshd@12-10.0.0.53:22-10.0.0.1:58524.service: Deactivated successfully. Sep 8 23:54:02.897701 systemd[1]: session-13.scope: Deactivated successfully. Sep 8 23:54:02.899469 systemd-logind[1495]: Session 13 logged out. Waiting for processes to exit. Sep 8 23:54:02.912962 systemd[1]: Started sshd@13-10.0.0.53:22-10.0.0.1:58536.service - OpenSSH per-connection server daemon (10.0.0.1:58536). Sep 8 23:54:02.914263 systemd-logind[1495]: Removed session 13. Sep 8 23:54:02.953232 sshd[3794]: Accepted publickey for core from 10.0.0.1 port 58536 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:54:02.954919 sshd-session[3794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:02.959860 systemd-logind[1495]: New session 14 of user core. Sep 8 23:54:02.969276 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 8 23:54:03.176882 sshd[3797]: Connection closed by 10.0.0.1 port 58536 Sep 8 23:54:03.177286 sshd-session[3794]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:03.189345 systemd[1]: sshd@13-10.0.0.53:22-10.0.0.1:58536.service: Deactivated successfully. Sep 8 23:54:03.191512 systemd[1]: session-14.scope: Deactivated successfully. Sep 8 23:54:03.193017 systemd-logind[1495]: Session 14 logged out. Waiting for processes to exit. Sep 8 23:54:03.194442 systemd[1]: Started sshd@14-10.0.0.53:22-10.0.0.1:58540.service - OpenSSH per-connection server daemon (10.0.0.1:58540). Sep 8 23:54:03.195342 systemd-logind[1495]: Removed session 14. Sep 8 23:54:03.239998 sshd[3808]: Accepted publickey for core from 10.0.0.1 port 58540 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:54:03.241661 sshd-session[3808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:03.246418 systemd-logind[1495]: New session 15 of user core. Sep 8 23:54:03.257282 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 8 23:54:04.731906 sshd[3811]: Connection closed by 10.0.0.1 port 58540 Sep 8 23:54:04.733272 sshd-session[3808]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:04.747385 systemd[1]: sshd@14-10.0.0.53:22-10.0.0.1:58540.service: Deactivated successfully. 
Sep 8 23:54:04.749964 systemd[1]: session-15.scope: Deactivated successfully. Sep 8 23:54:04.750852 systemd-logind[1495]: Session 15 logged out. Waiting for processes to exit. Sep 8 23:54:04.762471 systemd[1]: Started sshd@15-10.0.0.53:22-10.0.0.1:58556.service - OpenSSH per-connection server daemon (10.0.0.1:58556). Sep 8 23:54:04.763328 systemd-logind[1495]: Removed session 15. Sep 8 23:54:04.808467 sshd[3846]: Accepted publickey for core from 10.0.0.1 port 58556 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:54:04.810256 sshd-session[3846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:04.815147 systemd-logind[1495]: New session 16 of user core. Sep 8 23:54:04.827299 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 8 23:54:05.073797 sshd[3849]: Connection closed by 10.0.0.1 port 58556 Sep 8 23:54:05.074297 sshd-session[3846]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:05.088432 systemd[1]: sshd@15-10.0.0.53:22-10.0.0.1:58556.service: Deactivated successfully. Sep 8 23:54:05.091028 systemd[1]: session-16.scope: Deactivated successfully. Sep 8 23:54:05.093046 systemd-logind[1495]: Session 16 logged out. Waiting for processes to exit. Sep 8 23:54:05.104389 systemd[1]: Started sshd@16-10.0.0.53:22-10.0.0.1:58564.service - OpenSSH per-connection server daemon (10.0.0.1:58564). Sep 8 23:54:05.105768 systemd-logind[1495]: Removed session 16. Sep 8 23:54:05.146694 sshd[3860]: Accepted publickey for core from 10.0.0.1 port 58564 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:54:05.148538 sshd-session[3860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:05.153367 systemd-logind[1495]: New session 17 of user core. Sep 8 23:54:05.163369 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 8 23:54:05.277641 sshd[3863]: Connection closed by 10.0.0.1 port 58564 Sep 8 23:54:05.278213 sshd-session[3860]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:05.283430 systemd[1]: sshd@16-10.0.0.53:22-10.0.0.1:58564.service: Deactivated successfully. Sep 8 23:54:05.286075 systemd[1]: session-17.scope: Deactivated successfully. Sep 8 23:54:05.286988 systemd-logind[1495]: Session 17 logged out. Waiting for processes to exit. Sep 8 23:54:05.288457 systemd-logind[1495]: Removed session 17. Sep 8 23:54:10.292444 systemd[1]: Started sshd@17-10.0.0.53:22-10.0.0.1:42716.service - OpenSSH per-connection server daemon (10.0.0.1:42716). Sep 8 23:54:10.339714 sshd[3898]: Accepted publickey for core from 10.0.0.1 port 42716 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:54:10.341749 sshd-session[3898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:10.347142 systemd-logind[1495]: New session 18 of user core. Sep 8 23:54:10.360390 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 8 23:54:10.483042 sshd[3900]: Connection closed by 10.0.0.1 port 42716 Sep 8 23:54:10.483494 sshd-session[3898]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:10.488463 systemd[1]: sshd@17-10.0.0.53:22-10.0.0.1:42716.service: Deactivated successfully. Sep 8 23:54:10.491263 systemd[1]: session-18.scope: Deactivated successfully. Sep 8 23:54:10.492163 systemd-logind[1495]: Session 18 logged out. Waiting for processes to exit. Sep 8 23:54:10.493494 systemd-logind[1495]: Removed session 18. 
Sep 8 23:54:15.529005 systemd[1]: Started sshd@18-10.0.0.53:22-10.0.0.1:42728.service - OpenSSH per-connection server daemon (10.0.0.1:42728). Sep 8 23:54:15.602328 sshd[3939]: Accepted publickey for core from 10.0.0.1 port 42728 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:54:15.600950 sshd-session[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:15.633935 systemd-logind[1495]: New session 19 of user core. Sep 8 23:54:15.640871 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 8 23:54:15.902701 sshd[3941]: Connection closed by 10.0.0.1 port 42728 Sep 8 23:54:15.904213 sshd-session[3939]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:15.910840 systemd[1]: sshd@18-10.0.0.53:22-10.0.0.1:42728.service: Deactivated successfully. Sep 8 23:54:15.916245 systemd[1]: session-19.scope: Deactivated successfully. Sep 8 23:54:15.918388 systemd-logind[1495]: Session 19 logged out. Waiting for processes to exit. Sep 8 23:54:15.920152 systemd-logind[1495]: Removed session 19. Sep 8 23:54:20.917367 systemd[1]: Started sshd@19-10.0.0.53:22-10.0.0.1:60984.service - OpenSSH per-connection server daemon (10.0.0.1:60984). Sep 8 23:54:20.970366 sshd[3983]: Accepted publickey for core from 10.0.0.1 port 60984 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:54:20.972599 sshd-session[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:20.977475 systemd-logind[1495]: New session 20 of user core. Sep 8 23:54:20.987280 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 8 23:54:21.112005 sshd[3985]: Connection closed by 10.0.0.1 port 60984 Sep 8 23:54:21.112511 sshd-session[3983]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:21.116753 systemd[1]: sshd@19-10.0.0.53:22-10.0.0.1:60984.service: Deactivated successfully. Sep 8 23:54:21.119143 systemd[1]: session-20.scope: Deactivated successfully. Sep 8 23:54:21.120155 systemd-logind[1495]: Session 20 logged out. Waiting for processes to exit. Sep 8 23:54:21.121249 systemd-logind[1495]: Removed session 20. Sep 8 23:54:26.130142 systemd[1]: Started sshd@20-10.0.0.53:22-10.0.0.1:60994.service - OpenSSH per-connection server daemon (10.0.0.1:60994). Sep 8 23:54:26.174885 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 60994 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:54:26.176754 sshd-session[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:26.181649 systemd-logind[1495]: New session 21 of user core. Sep 8 23:54:26.191236 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 8 23:54:26.297268 sshd[4036]: Connection closed by 10.0.0.1 port 60994 Sep 8 23:54:26.297633 sshd-session[4034]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:26.302300 systemd[1]: sshd@20-10.0.0.53:22-10.0.0.1:60994.service: Deactivated successfully. Sep 8 23:54:26.304730 systemd[1]: session-21.scope: Deactivated successfully. Sep 8 23:54:26.305508 systemd-logind[1495]: Session 21 logged out. Waiting for processes to exit. Sep 8 23:54:26.306459 systemd-logind[1495]: Removed session 21. 
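The entries above repeat one lifecycle: sshd accepts a publickey login for user core, systemd starts a per-connection sshd@...service and a session-N.scope, and when the client disconnects systemd-logind logs the session out and removes it. As a rough illustration (not part of Flatcar or systemd), the following Python sketch pairs the "New session N of user ..." and "Removed session N." messages from a text capture like this one to report how long each session lasted. It assumes only the exact systemd-logind wording and the "Sep 8 23:53:51.755295"-style timestamps visible in this log; the input file name is a placeholder.

#!/usr/bin/env python3
"""Sketch: summarize SSH session lifetimes from a journal text capture.

Illustrative only. The patterns assume the systemd-logind wording seen in
the log above; "console.log" is a hypothetical file name for the capture.
"""
import re
import sys
from datetime import datetime
from pathlib import Path

STAMP = r"(\w{3}\s+\d+\s+\d{2}:\d{2}:\d{2}\.\d+)"
NEW = re.compile(STAMP + r"\s+systemd-logind\[\d+\]: New session (\d+) of user (\S+)\.")
GONE = re.compile(STAMP + r"\s+systemd-logind\[\d+\]: Removed session (\d+)\.")

def ts(raw: str) -> datetime:
    # journald's short timestamp omits the year; pin one so subtraction works.
    return datetime.strptime("2025 " + raw, "%Y %b %d %H:%M:%S.%f")

def main(path: str) -> None:
    text = Path(path).read_text(encoding="utf-8", errors="replace")
    # Map session id -> (open time, user) from the "New session" entries.
    opened = {sid: (ts(t), user) for t, sid, user in NEW.findall(text)}
    for t, sid in GONE.findall(text):
        if sid in opened:
            start, user = opened.pop(sid)
            print(f"session {sid} ({user}): {(ts(t) - start).total_seconds():.1f}s")
    for sid, (start, user) in opened.items():
        print(f"session {sid} ({user}): still open at end of capture")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "console.log")

Run against this capture it would print one line per session (9 through 21), most lasting well under a second, which matches the quick connect/disconnect pattern in the log; regex-based matching is used deliberately so the sketch works on a flat text dump rather than requiring journalctl JSON output.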
Sep 8 23:54:27.035735 update_engine[1496]: I20250908 23:54:27.035638 1496 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 8 23:54:27.035735 update_engine[1496]: I20250908 23:54:27.035707 1496 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 8 23:54:27.036370 update_engine[1496]: I20250908 23:54:27.036036 1496 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 8 23:54:27.036729 update_engine[1496]: I20250908 23:54:27.036688 1496 omaha_request_params.cc:62] Current group set to stable Sep 8 23:54:27.037300 update_engine[1496]: I20250908 23:54:27.037257 1496 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 8 23:54:27.037300 update_engine[1496]: I20250908 23:54:27.037281 1496 update_attempter.cc:643] Scheduling an action processor start. Sep 8 23:54:27.037389 update_engine[1496]: I20250908 23:54:27.037300 1496 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 8 23:54:27.037389 update_engine[1496]: I20250908 23:54:27.037361 1496 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 8 23:54:27.037464 update_engine[1496]: I20250908 23:54:27.037451 1496 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 8 23:54:27.037499 update_engine[1496]: I20250908 23:54:27.037463 1496 omaha_request_action.cc:272] Request: Sep 8 23:54:27.037499 update_engine[1496]: Sep 8 23:54:27.037499 update_engine[1496]: Sep 8 23:54:27.037499 update_engine[1496]: Sep 8 23:54:27.037499 update_engine[1496]: Sep 8 23:54:27.037499 update_engine[1496]: Sep 8 23:54:27.037499 update_engine[1496]: Sep 8 23:54:27.037499 update_engine[1496]: Sep 8 23:54:27.037499 update_engine[1496]: Sep 8 23:54:27.037499 update_engine[1496]: I20250908 23:54:27.037474 1496 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 8 23:54:27.037839 locksmithd[1522]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 8 23:54:27.039823 update_engine[1496]: I20250908 23:54:27.039765 1496 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 8 23:54:27.040272 update_engine[1496]: I20250908 23:54:27.040212 1496 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 8 23:54:27.047825 update_engine[1496]: E20250908 23:54:27.047775 1496 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 8 23:54:27.047903 update_engine[1496]: I20250908 23:54:27.047853 1496 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
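These update_engine entries show an Omaha update check that cannot complete: the request is posted to the literal host name "disabled", so libcurl fails with "Could not resolve host: disabled" and the fetcher schedules a retry. On Flatcar this is commonly the result of switching updates off via SERVER=disabled in /etc/flatcar/update.conf, though that configuration is not itself visible in this capture, so treat it as an assumption. The Python sketch below (hypothetical, matching the style of the previous one) scans a capture for the two messages shown here and flags the disabled server; it assumes only the literal "Posting an Omaha request to ..." and "Unable to get http response code: ..." wording from this log.

#!/usr/bin/env python3
"""Sketch: flag failed update_engine checks in a journal text capture.

Illustrative only; keyed to the exact messages visible in the log above.
"console.log" is a hypothetical file name for the capture.
"""
import re
import sys
from pathlib import Path

# Server the Omaha request was posted to (e.g. "disabled" in this log).
POST = re.compile(r"omaha_request_action\.cc:\d+\] Posting an Omaha request to (\S+)")
# Reason libcurl could not fetch a response, up to the next timestamp or line end.
FAIL = re.compile(
    r"libcurl_http_fetcher\.cc:\d+\] Unable to get http response code: "
    r"(.+?)(?=\s+Sep\s+\d|\s*$)",
    re.M,
)

def main(path: str) -> None:
    text = Path(path).read_text(encoding="utf-8", errors="replace")
    for server in POST.findall(text):
        note = " (updates appear to be switched off)" if server == "disabled" else ""
        print(f"update check posted to: {server}{note}")
    for reason in FAIL.findall(text):
        print(f"fetch failed: {reason.strip()}")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "console.log")

Against this capture it would report the check posted to "disabled" and the single "Could not resolve host: disabled" failure before retry 1; the second regex stops at the next "Sep ..." timestamp so it also works on captures where several journal entries are run together on one line, as they are here.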