Jan 23 19:25:24.573929 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 16:02:29 -00 2026
Jan 23 19:25:24.573963 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 19:25:24.573975 kernel: BIOS-provided physical RAM map:
Jan 23 19:25:24.573989 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 19:25:24.573998 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 23 19:25:24.574008 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 23 19:25:24.574019 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 23 19:25:24.574027 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 23 19:25:24.574144 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 23 19:25:24.574155 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 23 19:25:24.574164 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 23 19:25:24.574172 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 23 19:25:24.574185 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 23 19:25:24.574195 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 23 19:25:24.574206 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 23 19:25:24.574214 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 23 19:25:24.574327 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 23 19:25:24.574345 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 23 19:25:24.574356 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 23 19:25:24.574584 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 23 19:25:24.574598 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 23 19:25:24.574609 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 23 19:25:24.574621 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 23 19:25:24.574630 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 19:25:24.574639 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 23 19:25:24.574647 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 19:25:24.574656 kernel: NX (Execute Disable) protection: active
Jan 23 19:25:24.574667 kernel: APIC: Static calls initialized
Jan 23 19:25:24.574685 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jan 23 19:25:24.574694 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jan 23 19:25:24.574704 kernel: extended physical RAM map:
Jan 23 19:25:24.574715 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 19:25:24.574725 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 23 19:25:24.574734 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 23 19:25:24.574742 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 23 19:25:24.574751 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 23 19:25:24.574763 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 23 19:25:24.574771 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 23 19:25:24.574780 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jan 23 19:25:24.574793 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jan 23 19:25:24.574811 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jan 23 19:25:24.574821 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jan 23 19:25:24.574830 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jan 23 19:25:24.574839 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 23 19:25:24.574856 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 23 19:25:24.574866 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 23 19:25:24.574875 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 23 19:25:24.574884 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 23 19:25:24.574895 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 23 19:25:24.574905 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 23 19:25:24.574915 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 23 19:25:24.574924 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 23 19:25:24.574935 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 23 19:25:24.574944 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 23 19:25:24.574955 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 23 19:25:24.574970 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 19:25:24.574981 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 23 19:25:24.574990 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 19:25:24.575100 kernel: efi: EFI v2.7 by EDK II
Jan 23 19:25:24.575113 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jan 23 19:25:24.575216 kernel: random: crng init done
Jan 23 19:25:24.575229 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 23 19:25:24.575328 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 23 19:25:24.575340 kernel: secureboot: Secure boot disabled
Jan 23 19:25:24.575351 kernel: SMBIOS 2.8 present.
Jan 23 19:25:24.575361 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 23 19:25:24.575610 kernel: DMI: Memory slots populated: 1/1
Jan 23 19:25:24.575619 kernel: Hypervisor detected: KVM
Jan 23 19:25:24.575628 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 23 19:25:24.575637 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 19:25:24.575647 kernel: kvm-clock: using sched offset of 31896123429 cycles
Jan 23 19:25:24.575660 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 19:25:24.575670 kernel: tsc: Detected 2445.426 MHz processor
Jan 23 19:25:24.575680 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 19:25:24.575689 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 19:25:24.575701 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 23 19:25:24.575712 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 23 19:25:24.575726 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 19:25:24.575735 kernel: Using GB pages for direct mapping
Jan 23 19:25:24.575747 kernel: ACPI: Early table checksum verification disabled
Jan 23 19:25:24.575758 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 23 19:25:24.575767 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 23 19:25:24.575777 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:25:24.575788 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:25:24.575798 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 23 19:25:24.575812 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:25:24.575823 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:25:24.575833 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:25:24.575843 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:25:24.575854 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 23 19:25:24.575866 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 23 19:25:24.575878 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 23 19:25:24.575889 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 23 19:25:24.575901 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 23 19:25:24.575918 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 23 19:25:24.575930 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 23 19:25:24.575942 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 23 19:25:24.575954 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 23 19:25:24.575964 kernel: No NUMA configuration found
Jan 23 19:25:24.575973 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 23 19:25:24.575984 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jan 23 19:25:24.575997 kernel: Zone ranges:
Jan 23 19:25:24.576008 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 19:25:24.576022 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 23 19:25:24.576034 kernel: Normal empty
Jan 23 19:25:24.576044 kernel: Device empty
Jan 23 19:25:24.576054 kernel: Movable zone start for each node
Jan 23 19:25:24.576063 kernel: Early memory node ranges
Jan 23 19:25:24.576072 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 23 19:25:24.576186 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 23 19:25:24.576198 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 23 19:25:24.576208 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 23 19:25:24.576221 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 23 19:25:24.576233 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 23 19:25:24.576243 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jan 23 19:25:24.576252 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jan 23 19:25:24.576262 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 23 19:25:24.576586 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 19:25:24.576612 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 23 19:25:24.576629 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 23 19:25:24.576639 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 19:25:24.576649 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 23 19:25:24.576659 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 23 19:25:24.576670 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 23 19:25:24.576686 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 23 19:25:24.576696 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 23 19:25:24.576706 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 19:25:24.576717 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 19:25:24.576730 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 19:25:24.576743 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 19:25:24.576753 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 19:25:24.576766 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 19:25:24.576776 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 19:25:24.576786 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 19:25:24.576797 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 19:25:24.576807 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 23 19:25:24.576817 kernel: TSC deadline timer available
Jan 23 19:25:24.576829 kernel: CPU topo: Max. logical packages: 1
Jan 23 19:25:24.576845 kernel: CPU topo: Max. logical dies: 1
Jan 23 19:25:24.576857 kernel: CPU topo: Max. dies per package: 1
Jan 23 19:25:24.576869 kernel: CPU topo: Max. threads per core: 1
Jan 23 19:25:24.576881 kernel: CPU topo: Num. cores per package: 4
Jan 23 19:25:24.576894 kernel: CPU topo: Num. threads per package: 4
Jan 23 19:25:24.576906 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 23 19:25:24.576919 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 19:25:24.576930 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 23 19:25:24.576940 kernel: kvm-guest: setup PV sched yield
Jan 23 19:25:24.576957 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 23 19:25:24.576970 kernel: Booting paravirtualized kernel on KVM
Jan 23 19:25:24.576981 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 19:25:24.576991 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 23 19:25:24.577004 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 23 19:25:24.577014 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 23 19:25:24.577025 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 23 19:25:24.577034 kernel: kvm-guest: PV spinlocks enabled
Jan 23 19:25:24.577045 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 19:25:24.577171 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 19:25:24.577183 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 19:25:24.577194 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 19:25:24.577206 kernel: Fallback order for Node 0: 0
Jan 23 19:25:24.577216 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jan 23 19:25:24.577226 kernel: Policy zone: DMA32
Jan 23 19:25:24.577237 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 19:25:24.577247 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 23 19:25:24.577263 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 19:25:24.577275 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 19:25:24.577287 kernel: Dynamic Preempt: voluntary
Jan 23 19:25:24.577297 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 19:25:24.577316 kernel: rcu: RCU event tracing is enabled.
Jan 23 19:25:24.577328 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 23 19:25:24.577340 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 19:25:24.577352 kernel: Rude variant of Tasks RCU enabled.
Jan 23 19:25:24.577584 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 19:25:24.577598 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 19:25:24.577615 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 23 19:25:24.577730 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 19:25:24.577743 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 19:25:24.577753 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 19:25:24.577762 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 23 19:25:24.577773 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 19:25:24.577784 kernel: Console: colour dummy device 80x25
Jan 23 19:25:24.577794 kernel: printk: legacy console [ttyS0] enabled
Jan 23 19:25:24.577804 kernel: ACPI: Core revision 20240827
Jan 23 19:25:24.577821 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 23 19:25:24.577832 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 19:25:24.577842 kernel: x2apic enabled
Jan 23 19:25:24.577852 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 19:25:24.577864 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 23 19:25:24.577876 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 23 19:25:24.577886 kernel: kvm-guest: setup PV IPIs
Jan 23 19:25:24.577896 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 23 19:25:24.577908 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 23 19:25:24.577924 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 23 19:25:24.577934 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 19:25:24.577944 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 23 19:25:24.577956 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 23 19:25:24.577967 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 19:25:24.577977 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 19:25:24.577989 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 19:25:24.577998 kernel: Speculative Store Bypass: Vulnerable
Jan 23 19:25:24.578014 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 23 19:25:24.578027 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 23 19:25:24.578150 kernel: active return thunk: srso_alias_return_thunk
Jan 23 19:25:24.578167 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 23 19:25:24.578177 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 23 19:25:24.578187 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 19:25:24.578200 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 19:25:24.578211 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 19:25:24.578221 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 19:25:24.578236 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 19:25:24.578249 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 23 19:25:24.578259 kernel: Freeing SMP alternatives memory: 32K
Jan 23 19:25:24.578269 kernel: pid_max: default: 32768 minimum: 301
Jan 23 19:25:24.578278 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 19:25:24.578289 kernel: landlock: Up and running.
Jan 23 19:25:24.578301 kernel: SELinux: Initializing.
Jan 23 19:25:24.578311 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 19:25:24.578321 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 19:25:24.578339 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 23 19:25:24.578350 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 23 19:25:24.578360 kernel: signal: max sigframe size: 1776
Jan 23 19:25:24.578595 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 19:25:24.578606 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 19:25:24.578616 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 19:25:24.578628 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 19:25:24.578641 kernel: smp: Bringing up secondary CPUs ...
Jan 23 19:25:24.578651 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 19:25:24.578668 kernel: .... node #0, CPUs: #1 #2 #3
Jan 23 19:25:24.578680 kernel: smp: Brought up 1 node, 4 CPUs
Jan 23 19:25:24.578689 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 23 19:25:24.578700 kernel: Memory: 2414472K/2565800K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 145388K reserved, 0K cma-reserved)
Jan 23 19:25:24.578710 kernel: devtmpfs: initialized
Jan 23 19:25:24.578722 kernel: x86/mm: Memory block size: 128MB
Jan 23 19:25:24.578732 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 23 19:25:24.578742 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 23 19:25:24.578759 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 23 19:25:24.578770 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 23 19:25:24.578780 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jan 23 19:25:24.578790 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 23 19:25:24.578802 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 19:25:24.578815 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 23 19:25:24.578824 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 19:25:24.578834 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 19:25:24.578846 kernel: audit: initializing netlink subsys (disabled)
Jan 23 19:25:24.578861 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 19:25:24.578871 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 19:25:24.578881 kernel: audit: type=2000 audit(1769196304.396:1): state=initialized audit_enabled=0 res=1
Jan 23 19:25:24.578893 kernel: cpuidle: using governor menu
Jan 23 19:25:24.578903 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 19:25:24.578913 kernel: dca service started, version 1.12.1
Jan 23 19:25:24.578924 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 23 19:25:24.578934 kernel: PCI: Using configuration type 1 for base access
Jan 23 19:25:24.578946 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 19:25:24.578962 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 19:25:24.578974 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 19:25:24.578987 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 19:25:24.578999 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 19:25:24.579011 kernel: ACPI: Added _OSI(Module Device)
Jan 23 19:25:24.579023 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 19:25:24.579035 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 19:25:24.579047 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 19:25:24.579057 kernel: ACPI: Interpreter enabled
Jan 23 19:25:24.579074 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 23 19:25:24.579087 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 19:25:24.579097 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 19:25:24.579108 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 19:25:24.579120 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 19:25:24.579131 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 19:25:24.579853 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 19:25:24.580068 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 23 19:25:24.580295 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 23 19:25:24.580313 kernel: PCI host bridge to bus 0000:00
Jan 23 19:25:24.580747 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 19:25:24.580937 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 19:25:24.581133 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 19:25:24.581322 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 23 19:25:24.581744 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 23 19:25:24.581940 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 23 19:25:24.582242 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 19:25:24.582692 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 19:25:24.582904 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 23 19:25:24.583107 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jan 23 19:25:24.583659 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jan 23 19:25:24.585358 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 23 19:25:24.586327 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 19:25:24.587139 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 15625 usecs
Jan 23 19:25:24.587357 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 23 19:25:24.587900 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jan 23 19:25:24.589894 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jan 23 19:25:24.590217 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 23 19:25:24.590850 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 23 19:25:24.591053 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jan 23 19:25:24.591253 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jan 23 19:25:24.591689 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 23 19:25:24.591909 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 19:25:24.592113 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jan 23 19:25:24.592334 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jan 23 19:25:24.592777 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 23 19:25:24.593062 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jan 23 19:25:24.593274 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 23 19:25:24.595147 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 23 19:25:24.595349 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 16601 usecs
Jan 23 19:25:24.595882 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 23 19:25:24.596108 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jan 23 19:25:24.596307 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jan 23 19:25:24.596744 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 23 19:25:24.596957 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jan 23 19:25:24.596976 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 19:25:24.596990 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 19:25:24.597003 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 19:25:24.597014 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 19:25:24.597034 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 23 19:25:24.597046 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 23 19:25:24.597056 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 23 19:25:24.597065 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 23 19:25:24.597076 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 23 19:25:24.597088 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 23 19:25:24.597097 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 23 19:25:24.597107 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 23 19:25:24.597118 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 23 19:25:24.597136 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 23 19:25:24.597146 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 23 19:25:24.597156 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 23 19:25:24.597166 kernel: iommu: Default domain type: Translated
Jan 23 19:25:24.597179 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 19:25:24.597190 kernel: efivars: Registered efivars operations
Jan 23 19:25:24.597200 kernel: PCI: Using ACPI for IRQ routing
Jan 23 19:25:24.597209 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 19:25:24.597222 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 23 19:25:24.597237 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 23 19:25:24.597247 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jan 23 19:25:24.597259 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jan 23 19:25:24.597269 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 23 19:25:24.597281 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 23 19:25:24.597293 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jan 23 19:25:24.597305 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 23 19:25:24.597730 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 23 19:25:24.597937 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 23 19:25:24.598148 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 19:25:24.598168 kernel: vgaarb: loaded
Jan 23 19:25:24.598179 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 23 19:25:24.598189 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 23 19:25:24.598199 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 19:25:24.598210 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 19:25:24.598223 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 19:25:24.598233 kernel: pnp: PnP ACPI init
Jan 23 19:25:24.598768 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 23 19:25:24.598789 kernel: pnp: PnP ACPI: found 6 devices
Jan 23 19:25:24.598803 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 19:25:24.598813 kernel: NET: Registered PF_INET protocol family
Jan 23 19:25:24.598824 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 19:25:24.598834 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 19:25:24.598872 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 19:25:24.598888 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 19:25:24.598903 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 19:25:24.598915 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 19:25:24.598928 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 19:25:24.598941 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 19:25:24.598954 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 19:25:24.598966 kernel: NET: Registered PF_XDP protocol family
Jan 23 19:25:24.599175 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 23 19:25:24.600132 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jan 23 19:25:24.600342 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 19:25:24.600854 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 19:25:24.601044 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 19:25:24.602763 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 23 19:25:24.602952 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 23 19:25:24.603144 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 23 19:25:24.603164 kernel: PCI: CLS 0 bytes, default 64
Jan 23 19:25:24.603176 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 23 19:25:24.603189 kernel: Initialise system trusted keyrings
Jan 23 19:25:24.603209 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 19:25:24.603220 kernel: Key type asymmetric registered
Jan 23 19:25:24.603233 kernel: Asymmetric key parser 'x509' registered
Jan 23 19:25:24.603245 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 19:25:24.603255 kernel: io scheduler mq-deadline registered
Jan 23 19:25:24.603266 kernel: io scheduler kyber registered
Jan 23 19:25:24.603277 kernel: io scheduler bfq registered
Jan 23 19:25:24.603289 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 19:25:24.603301 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 23 19:25:24.603316 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 23 19:25:24.603330 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 23 19:25:24.603341 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 19:25:24.603351 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 19:25:24.603670 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 19:25:24.603697 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 19:25:24.603714 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 19:25:24.603923 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 23 19:25:24.603945 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 23 19:25:24.604141 kernel: rtc_cmos 00:04: registered as rtc0
Jan 23 19:25:24.604672 kernel: rtc_cmos 00:04: setting system clock to 2026-01-23T19:25:20 UTC (1769196320)
Jan 23 19:25:24.604874 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 23 19:25:24.604891 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 23 19:25:24.604909 kernel: efifb: probing for efifb
Jan 23 19:25:24.604923 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 23 19:25:24.604934 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 23 19:25:24.604945 kernel: efifb: scrolling: redraw
Jan 23 19:25:24.604955 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 23 19:25:24.604968 kernel: Console: switching to colour frame buffer device 160x50
Jan 23 19:25:24.604980 kernel: fb0: EFI VGA frame buffer device
Jan 23 19:25:24.604990 kernel: pstore: Using crash dump compression: deflate
Jan 23 19:25:24.605000 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 23 19:25:24.605018 kernel: NET: Registered PF_INET6 protocol family
Jan 23 19:25:24.605029 kernel: Segment Routing with IPv6
Jan 23 19:25:24.605039 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 19:25:24.605051 kernel: NET: Registered PF_PACKET protocol family
Jan 23 19:25:24.605062 kernel: Key type dns_resolver registered
Jan 23 19:25:24.605074 kernel: IPI shorthand broadcast: enabled
Jan 23 19:25:24.605087 kernel: sched_clock: Marking stable (16431079321, 2649092405)->(20686555387, -1606383661)
Jan 23 19:25:24.605100 kernel: registered taskstats version 1
Jan 23 19:25:24.605114 kernel: Loading compiled-in X.509 certificates
Jan 23 19:25:24.605127 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2aec04a968f0111235eb989789145bc2b989f0c6'
Jan 23 19:25:24.605145 kernel: Demotion targets for Node 0: null
Jan 23 19:25:24.605158 kernel: Key type .fscrypt registered
Jan 23 19:25:24.605171 kernel: Key type fscrypt-provisioning registered
Jan 23 19:25:24.605183 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 19:25:24.605194 kernel: ima: Allocated hash algorithm: sha1
Jan 23 19:25:24.605206 kernel: ima: No architecture policies found
Jan 23 19:25:24.605219 kernel: clk: Disabling unused clocks
Jan 23 19:25:24.605230 kernel: Warning: unable to open an initial console.
Jan 23 19:25:24.605248 kernel: Freeing unused kernel image (initmem) memory: 46200K
Jan 23 19:25:24.605261 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 19:25:24.605271 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 19:25:24.605282 kernel: Run /init as init process
Jan 23 19:25:24.605292 kernel: with arguments:
Jan 23 19:25:24.605305 kernel: /init
Jan 23 19:25:24.605315 kernel: with environment:
Jan 23 19:25:24.605324 kernel: HOME=/
Jan 23 19:25:24.605336 kernel: TERM=linux
Jan 23 19:25:24.605354 systemd[1]: Successfully made /usr/ read-only.
Jan 23 19:25:24.605594 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 19:25:24.605610 systemd[1]: Detected virtualization kvm.
Jan 23 19:25:24.605623 systemd[1]: Detected architecture x86-64.
Jan 23 19:25:24.605635 systemd[1]: Running in initrd.
Jan 23 19:25:24.605646 systemd[1]: No hostname configured, using default hostname.
Jan 23 19:25:24.605657 systemd[1]: Hostname set to <linux>.
Jan 23 19:25:24.605674 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 19:25:24.605685 systemd[1]: Queued start job for default target initrd.target.
Jan 23 19:25:24.605698 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 19:25:24.605711 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 19:25:24.605725 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 19:25:24.605737 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 19:25:24.606094 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 19:25:24.606117 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 19:25:24.606135 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 19:25:24.606148 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 19:25:24.606159 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 19:25:24.606169 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 19:25:24.606183 systemd[1]: Reached target paths.target - Path Units.
Jan 23 19:25:24.606195 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 19:25:24.606205 systemd[1]: Reached target swap.target - Swaps.
Jan 23 19:25:24.606222 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 19:25:24.606235 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 19:25:24.606248 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 19:25:24.606261 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 19:25:24.606275 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 19:25:24.606288 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 19:25:24.606302 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 19:25:24.606316 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 19:25:24.606327 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 19:25:24.606346 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 19:25:24.606360 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 19:25:24.606602 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 19:25:24.606615 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 19:25:24.606627 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 19:25:24.606639 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 19:25:24.606652 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 19:25:24.606666 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:25:24.606685 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 19:25:24.606697 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 19:25:24.606711 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 19:25:24.606723 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 19:25:24.606881 systemd-journald[202]: Collecting audit messages is disabled.
Jan 23 19:25:24.606919 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:25:24.606932 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 19:25:24.606944 systemd-journald[202]: Journal started
Jan 23 19:25:24.606973 systemd-journald[202]: Runtime Journal (/run/log/journal/7b2158579a214969886820ec1a275555) is 6M, max 48.1M, 42.1M free.
Jan 23 19:25:24.539589 systemd-modules-load[203]: Inserted module 'overlay'
Jan 23 19:25:24.663110 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 19:25:24.686008 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 19:25:24.693719 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 19:25:24.718328 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 19:25:24.773032 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 19:25:24.785572 systemd-tmpfiles[222]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 19:25:24.861073 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 19:25:24.908125 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 19:25:24.956316 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 19:25:25.066115 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 19:25:25.100557 kernel: Bridge firewalling registered
Jan 23 19:25:25.104082 systemd-modules-load[203]: Inserted module 'br_netfilter'
Jan 23 19:25:25.107264 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 19:25:25.122270 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 19:25:25.201006 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 19:25:25.295853 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 19:25:25.318202 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 19:25:25.455893 systemd-resolved[272]: Positive Trust Anchors:
Jan 23 19:25:25.456008 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 19:25:25.456034 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 19:25:25.462936 systemd-resolved[272]: Defaulting to hostname 'linux'.
Jan 23 19:25:25.473235 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 19:25:25.598014 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 19:25:26.135172 kernel: SCSI subsystem initialized
Jan 23 19:25:26.186151 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 19:25:26.288753 kernel: iscsi: registered transport (tcp)
Jan 23 19:25:26.462624 kernel: iscsi: registered transport (qla4xxx)
Jan 23 19:25:26.463166 kernel: QLogic iSCSI HBA Driver
Jan 23 19:25:26.668163 kernel: hrtimer: interrupt took 5786942 ns
Jan 23 19:25:26.781944 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 19:25:26.949870 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 19:25:27.005169 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 19:25:27.470930 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 19:25:27.509994 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 19:25:27.877658 kernel: raid6: avx2x4 gen() 12020 MB/s
Jan 23 19:25:27.903032 kernel: raid6: avx2x2 gen() 11510 MB/s
Jan 23 19:25:27.936289 kernel: raid6: avx2x1 gen() 6191 MB/s
Jan 23 19:25:27.936792 kernel: raid6: using algorithm avx2x4 gen() 12020 MB/s
Jan 23 19:25:27.971257 kernel: raid6: .... xor() 2752 MB/s, rmw enabled
Jan 23 19:25:27.971342 kernel: raid6: using avx2x2 recovery algorithm
Jan 23 19:25:28.062022 kernel: xor: automatically using best checksumming function avx
Jan 23 19:25:29.737857 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 19:25:29.840919 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 19:25:29.880850 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 19:25:30.028134 systemd-udevd[453]: Using default interface naming scheme 'v255'.
Jan 23 19:25:30.045216 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 19:25:30.054993 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 19:25:30.316986 dracut-pre-trigger[455]: rd.md=0: removing MD RAID activation
Jan 23 19:25:30.575160 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 19:25:30.631900 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 19:25:30.944127 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 19:25:30.974357 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 19:25:31.207048 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 23 19:25:31.272804 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 19:25:31.318146 kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 19:25:31.318314 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 23 19:25:31.273317 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:25:31.449109 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 19:25:31.449143 kernel: GPT:9289727 != 19775487
Jan 23 19:25:31.449160 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 19:25:31.449175 kernel: GPT:9289727 != 19775487
Jan 23 19:25:31.449188 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 19:25:31.449203 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 19:25:31.449109 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:25:31.508096 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:25:31.519232 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 19:25:31.617272 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 19:25:31.618275 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:25:31.681266 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:25:31.894077 kernel: libata version 3.00 loaded.
Jan 23 19:25:31.968339 kernel: AES CTR mode by8 optimization enabled
Jan 23 19:25:32.052330 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:25:32.150242 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 23 19:25:32.163784 kernel: ahci 0000:00:1f.2: version 3.0
Jan 23 19:25:32.178661 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 23 19:25:32.288783 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 23 19:25:32.289354 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 23 19:25:32.291334 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 23 19:25:32.307344 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 23 19:25:32.367821 kernel: scsi host0: ahci
Jan 23 19:25:32.382950 kernel: scsi host1: ahci
Jan 23 19:25:32.384293 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 23 19:25:32.421745 kernel: scsi host2: ahci
Jan 23 19:25:32.422108 kernel: scsi host3: ahci
Jan 23 19:25:32.462804 kernel: scsi host4: ahci
Jan 23 19:25:32.463125 kernel: scsi host5: ahci
Jan 23 19:25:32.471022 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 23 19:25:32.552306 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1
Jan 23 19:25:32.552355 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1
Jan 23 19:25:32.552905 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1
Jan 23 19:25:32.552928 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1
Jan 23 19:25:32.552943 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1
Jan 23 19:25:32.552957 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1
Jan 23 19:25:32.615806 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 23 19:25:32.697327 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 23 19:25:32.702267 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 19:25:32.820006 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 23 19:25:32.836898 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 23 19:25:32.848014 disk-uuid[617]: Primary Header is updated.
Jan 23 19:25:32.848014 disk-uuid[617]: Secondary Entries is updated.
Jan 23 19:25:32.848014 disk-uuid[617]: Secondary Header is updated.
Jan 23 19:25:32.995165 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 23 19:25:32.995206 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 23 19:25:32.995222 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 19:25:32.995236 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 23 19:25:32.995263 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 23 19:25:32.995275 kernel: ata3.00: LPM support broken, forcing max_power
Jan 23 19:25:32.995285 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 23 19:25:32.995295 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 19:25:32.995305 kernel: ata3.00: applying bridge limits
Jan 23 19:25:33.022888 kernel: ata3.00: LPM support broken, forcing max_power
Jan 23 19:25:33.022957 kernel: ata3.00: configured for UDMA/100
Jan 23 19:25:33.059088 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 23 19:25:33.303294 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 23 19:25:33.305360 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 19:25:33.344359 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 23 19:25:33.942061 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 19:25:33.954064 disk-uuid[618]: The operation has completed successfully.
Jan 23 19:25:34.105048 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 19:25:34.105866 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 19:25:34.171322 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 19:25:34.332852 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 19:25:34.333772 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 19:25:34.413343 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 19:25:34.456838 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 19:25:34.515949 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 19:25:34.573325 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 19:25:34.612900 sh[646]: Success
Jan 23 19:25:34.807981 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 19:25:34.808215 kernel: device-mapper: uevent: version 1.0.3
Jan 23 19:25:34.865836 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 19:25:35.082810 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 23 19:25:35.369125 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 19:25:35.388935 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 19:25:35.463157 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 19:25:35.575066 kernel: BTRFS: device fsid 4711e7dc-9586-49d4-8dcc-466f082e7841 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (658)
Jan 23 19:25:35.575108 kernel: BTRFS info (device dm-0): first mount of filesystem 4711e7dc-9586-49d4-8dcc-466f082e7841
Jan 23 19:25:35.575124 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 19:25:35.723106 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 19:25:35.723218 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 19:25:35.753280 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 19:25:35.787020 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 19:25:35.823086 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 19:25:35.858013 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 19:25:35.894245 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 19:25:36.102855 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (689)
Jan 23 19:25:36.149062 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 19:25:36.149155 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 19:25:36.205994 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 19:25:36.206081 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 19:25:36.269963 kernel: BTRFS info (device vda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 19:25:36.297187 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 19:25:36.326073 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 19:25:36.814736 ignition[749]: Ignition 2.22.0
Jan 23 19:25:36.814761 ignition[749]: Stage: fetch-offline
Jan 23 19:25:36.818618 ignition[749]: no configs at "/usr/lib/ignition/base.d"
Jan 23 19:25:36.834746 ignition[749]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 19:25:36.884266 ignition[749]: parsed url from cmdline: ""
Jan 23 19:25:36.884275 ignition[749]: no config URL provided
Jan 23 19:25:36.884287 ignition[749]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 19:25:36.884309 ignition[749]: no config at "/usr/lib/ignition/user.ign"
Jan 23 19:25:36.884349 ignition[749]: op(1): [started] loading QEMU firmware config module
Jan 23 19:25:36.884994 ignition[749]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 23 19:25:36.940665 ignition[749]: op(1): [finished] loading QEMU firmware config module
Jan 23 19:25:36.940911 ignition[749]: QEMU firmware config was not found. Ignoring...
Jan 23 19:25:37.102302 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 19:25:37.148101 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 19:25:37.343347 systemd-networkd[835]: lo: Link UP
Jan 23 19:25:37.343814 systemd-networkd[835]: lo: Gained carrier
Jan 23 19:25:37.380871 systemd-networkd[835]: Enumeration completed
Jan 23 19:25:37.383987 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 19:25:37.461260 systemd-networkd[835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 19:25:37.461339 systemd[1]: Reached target network.target - Network.
Jan 23 19:25:37.558270 systemd-networkd[835]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 19:25:37.609114 systemd-networkd[835]: eth0: Link UP
Jan 23 19:25:37.570102 ignition[749]: parsing config with SHA512: e1d94d17c5c2e48a29a36125cf5ef12bbdec1d856cb0d93211001105a09f6bea68fe3e1d779008693efbeb482a6ec7d16f684c6eca4ef746dcf05434279f6cbb
Jan 23 19:25:37.611271 systemd-networkd[835]: eth0: Gained carrier
Jan 23 19:25:37.611296 systemd-networkd[835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 19:25:37.746960 unknown[749]: fetched base config from "system"
Jan 23 19:25:37.747037 unknown[749]: fetched user config from "qemu"
Jan 23 19:25:37.748075 ignition[749]: fetch-offline: fetch-offline passed
Jan 23 19:25:37.759846 systemd-networkd[835]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 23 19:25:37.748154 ignition[749]: Ignition finished successfully
Jan 23 19:25:37.759878 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 19:25:37.785184 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 23 19:25:37.788738 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 19:25:37.824092 systemd-resolved[272]: Detected conflict on linux IN A 10.0.0.124
Jan 23 19:25:37.824103 systemd-resolved[272]: Hostname conflict, changing published hostname from 'linux' to 'linux4'.
Jan 23 19:25:38.086185 ignition[840]: Ignition 2.22.0
Jan 23 19:25:38.088663 ignition[840]: Stage: kargs
Jan 23 19:25:38.088856 ignition[840]: no configs at "/usr/lib/ignition/base.d"
Jan 23 19:25:38.088872 ignition[840]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 19:25:38.100663 ignition[840]: kargs: kargs passed
Jan 23 19:25:38.179223 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 19:25:38.100756 ignition[840]: Ignition finished successfully
Jan 23 19:25:38.248233 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 19:25:38.505803 ignition[848]: Ignition 2.22.0
Jan 23 19:25:38.505938 ignition[848]: Stage: disks
Jan 23 19:25:38.506128 ignition[848]: no configs at "/usr/lib/ignition/base.d"
Jan 23 19:25:38.506143 ignition[848]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 19:25:38.523015 ignition[848]: disks: disks passed
Jan 23 19:25:38.523097 ignition[848]: Ignition finished successfully
Jan 23 19:25:38.618185 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 19:25:38.683942 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 19:25:38.733310 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 19:25:38.758791 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 19:25:38.813984 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 19:25:38.850226 systemd[1]: Reached target basic.target - Basic System.
Jan 23 19:25:38.883775 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 19:25:39.071855 systemd-fsck[857]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jan 23 19:25:39.104835 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 19:25:39.139006 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 19:25:39.468923 systemd-networkd[835]: eth0: Gained IPv6LL
Jan 23 19:25:40.609922 kernel: EXT4-fs (vda9): mounted filesystem dcb97a38-a4f5-43e7-bcb0-85a5c1e2a446 r/w with ordered data mode. Quota mode: none.
Jan 23 19:25:40.614023 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 19:25:40.657075 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 19:25:40.680075 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 19:25:40.737268 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 19:25:40.835313 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (865)
Jan 23 19:25:40.781774 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 19:25:40.921762 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 19:25:40.921804 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 19:25:40.781850 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 19:25:40.781889 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 19:25:40.844249 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 19:25:40.939234 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 19:25:41.097239 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 19:25:41.097278 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 19:25:41.108779 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 19:25:41.357261 initrd-setup-root[889]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 19:25:41.418302 initrd-setup-root[896]: cut: /sysroot/etc/group: No such file or directory
Jan 23 19:25:41.473239 initrd-setup-root[903]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 19:25:41.522310 initrd-setup-root[910]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 19:25:42.521358 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 19:25:42.561185 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 19:25:42.565212 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 19:25:42.733255 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 19:25:42.762111 kernel: BTRFS info (device vda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 19:25:42.881289 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 19:25:43.981054 ignition[979]: INFO : Ignition 2.22.0
Jan 23 19:25:43.981054 ignition[979]: INFO : Stage: mount
Jan 23 19:25:43.981054 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 19:25:43.981054 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 19:25:44.064187 ignition[979]: INFO : mount: mount passed
Jan 23 19:25:44.083914 ignition[979]: INFO : Ignition finished successfully
Jan 23 19:25:44.153896 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 19:25:44.211157 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 19:25:44.358722 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 19:25:44.623308 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (991)
Jan 23 19:25:44.654725 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 19:25:44.679202 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 19:25:44.878868 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 19:25:44.879272 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 19:25:44.906288 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 19:25:45.249241 ignition[1008]: INFO : Ignition 2.22.0
Jan 23 19:25:45.249241 ignition[1008]: INFO : Stage: files
Jan 23 19:25:45.278196 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 19:25:45.278196 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 19:25:45.278196 ignition[1008]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 19:25:45.278196 ignition[1008]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 19:25:45.278196 ignition[1008]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 19:25:45.415904 ignition[1008]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 19:25:45.415904 ignition[1008]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 19:25:45.415904 ignition[1008]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 19:25:45.415904 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 23 19:25:45.415904 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 23 19:25:45.327782 unknown[1008]: wrote ssh authorized keys file for user: core
Jan 23 19:25:45.670072 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 19:25:48.062288 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 23 19:25:48.062288 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 19:25:48.169124 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 19:25:48.169124 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 19:25:48.169124 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 19:25:48.169124 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 19:25:48.169124 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 19:25:48.169124 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 19:25:48.352997 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 19:25:48.352997 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 19:25:48.352997 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 19:25:48.352997 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 19:25:48.352997 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 19:25:48.352997 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 19:25:48.352997 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 23 19:25:48.721047 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 23 19:25:59.529928 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1060123722 wd_nsec: 1060123219
Jan 23 19:26:00.104151 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 19:26:00.153312 ignition[1008]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 23 19:26:00.153312 ignition[1008]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 19:26:00.210685 ignition[1008]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 19:26:00.210685 ignition[1008]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 23 19:26:00.210685 ignition[1008]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 23 19:26:00.210685 ignition[1008]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 23 19:26:00.210685 ignition[1008]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 23 19:26:00.210685 ignition[1008]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 23 19:26:00.210685 ignition[1008]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 23 19:26:01.050878 ignition[1008]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 23 19:26:01.086095 ignition[1008]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 23 19:26:01.086095 ignition[1008]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 23 19:26:01.086095 ignition[1008]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 19:26:01.086095 ignition[1008]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 19:26:01.232121 ignition[1008]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 19:26:01.232121 ignition[1008]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 19:26:01.232121 ignition[1008]: INFO : files: files passed
Jan 23 19:26:01.232121 ignition[1008]: INFO : Ignition finished successfully
Jan 23 19:26:01.207228 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 19:26:01.371869 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 19:26:01.400945 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 19:26:01.550922 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 19:26:01.551221 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 19:26:01.612657 initrd-setup-root-after-ignition[1036]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 23 19:26:01.632051 initrd-setup-root-after-ignition[1038]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 19:26:01.632051 initrd-setup-root-after-ignition[1038]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 19:26:01.695718 initrd-setup-root-after-ignition[1043]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 19:26:01.701287 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 19:26:01.763851 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 19:26:01.866350 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 19:26:02.119672 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 19:26:02.120245 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 19:26:02.173268 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 19:26:02.220020 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 19:26:02.240148 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 19:26:02.250641 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 19:26:02.465968 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 19:26:02.484921 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 19:26:02.623321 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 19:26:02.645684 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 19:26:02.690135 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 19:26:02.706093 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 19:26:02.706316 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 19:26:02.877605 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 19:26:02.910357 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 19:26:02.939987 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 19:26:03.016115 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 19:26:03.041217 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 19:26:03.177218 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 19:26:03.210706 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 19:26:03.268057 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 19:26:03.296219 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 19:26:03.364000 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 19:26:03.396077 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 19:26:03.408304 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 19:26:03.409116 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 19:26:03.519037 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 19:26:03.543226 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 19:26:03.595290 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 19:26:03.599082 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 19:26:03.667633 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 19:26:03.669344 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 19:26:03.747195 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 19:26:03.748000 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 19:26:03.838149 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 19:26:03.863957 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 19:26:03.900925 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 19:26:03.977018 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 19:26:04.015275 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 19:26:04.058969 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 19:26:04.059356 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 19:26:04.106159 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 19:26:04.106312 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 19:26:04.131095 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 19:26:04.131292 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 19:26:04.157235 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 19:26:04.157656 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 19:26:04.295301 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 19:26:04.323140 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 19:26:04.323668 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 19:26:04.409141 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 19:26:04.429742 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 19:26:04.430290 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 19:26:04.431246 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 19:26:04.431670 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 19:26:04.560975 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 19:26:04.561278 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 19:26:04.663071 ignition[1063]: INFO : Ignition 2.22.0
Jan 23 19:26:04.663071 ignition[1063]: INFO : Stage: umount
Jan 23 19:26:04.663071 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 19:26:04.663071 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 19:26:04.663071 ignition[1063]: INFO : umount: umount passed
Jan 23 19:26:04.663071 ignition[1063]: INFO : Ignition finished successfully
Jan 23 19:26:04.606677 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 19:26:04.637204 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 19:26:04.639334 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 19:26:04.662165 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 19:26:04.662735 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 19:26:04.698323 systemd[1]: Stopped target network.target - Network.
Jan 23 19:26:04.716324 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 19:26:04.716672 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 19:26:04.757747 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 19:26:04.757998 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 19:26:04.758117 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 19:26:04.758328 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 19:26:04.871900 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 19:26:04.872334 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 19:26:04.945172 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 19:26:04.945299 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 19:26:05.003350 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 19:26:05.033265 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 19:26:05.325280 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 19:26:05.325707 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 19:26:05.471707 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 23 19:26:05.472357 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 19:26:05.473049 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 19:26:05.574238 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 23 19:26:05.614199 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 23 19:26:05.640239 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 19:26:05.640350 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 19:26:05.653619 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 19:26:05.713905 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 19:26:05.714020 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 19:26:05.714157 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 19:26:05.714221 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 19:26:05.798125 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 19:26:05.798225 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 19:26:05.811683 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 19:26:05.811915 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 19:26:05.925278 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 19:26:05.959337 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 19:26:05.960027 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 19:26:06.039641 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 19:26:06.045188 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 19:26:06.178978 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 19:26:06.179132 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 19:26:06.240744 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 19:26:06.241209 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 19:26:06.319920 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 19:26:06.320044 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 19:26:06.355721 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 19:26:06.356039 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 19:26:06.406945 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 19:26:06.407064 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 19:26:06.435910 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 19:26:06.524205 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 23 19:26:06.524355 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 19:26:06.617188 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 19:26:06.617290 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 19:26:06.842213 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 19:26:06.843224 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:26:06.899076 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jan 23 19:26:06.899172 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 23 19:26:06.899245 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 19:26:06.904180 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 19:26:06.904763 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 19:26:06.914347 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 19:26:06.915144 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 19:26:06.957234 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 19:26:07.003675 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 19:26:07.176023 systemd[1]: Switching root.
Jan 23 19:26:07.368941 systemd-journald[202]: Journal stopped
Jan 23 19:26:14.246011 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Jan 23 19:26:14.246116 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 19:26:14.246145 kernel: SELinux: policy capability open_perms=1
Jan 23 19:26:14.246163 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 19:26:14.246186 kernel: SELinux: policy capability always_check_network=0
Jan 23 19:26:14.246203 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 19:26:14.246219 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 19:26:14.246235 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 19:26:14.246252 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 19:26:14.246360 kernel: SELinux: policy capability userspace_initial_context=0
Jan 23 19:26:14.246533 kernel: audit: type=1403 audit(1769196368.379:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 19:26:14.246553 systemd[1]: Successfully loaded SELinux policy in 393.853ms.
Jan 23 19:26:14.246581 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 66.043ms.
Jan 23 19:26:14.246603 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 19:26:14.246624 systemd[1]: Detected virtualization kvm.
Jan 23 19:26:14.246646 systemd[1]: Detected architecture x86-64.
Jan 23 19:26:14.246665 systemd[1]: Detected first boot.
Jan 23 19:26:14.246695 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 19:26:14.246716 zram_generator::config[1109]: No configuration found.
Jan 23 19:26:14.246735 kernel: Guest personality initialized and is inactive
Jan 23 19:26:14.246753 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 23 19:26:14.246772 kernel: Initialized host personality
Jan 23 19:26:14.246789 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 19:26:14.246805 systemd[1]: Populated /etc with preset unit settings.
Jan 23 19:26:14.246824 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 19:26:14.246842 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 19:26:14.246959 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 19:26:14.246981 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 19:26:14.246999 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 19:26:14.247018 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 19:26:14.247034 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 19:26:14.247052 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 19:26:14.247070 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 19:26:14.247088 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 19:26:14.247113 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 19:26:14.247135 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 19:26:14.247152 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 19:26:14.247170 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 19:26:14.247188 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 19:26:14.247205 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 19:26:14.247223 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 19:26:14.247240 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 19:26:14.247264 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 19:26:14.247281 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 19:26:14.247301 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 19:26:14.247321 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 19:26:14.247340 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 19:26:14.247360 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 19:26:14.247532 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 19:26:14.247554 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 19:26:14.247571 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 19:26:14.247587 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 19:26:14.247610 systemd[1]: Reached target swap.target - Swaps.
Jan 23 19:26:14.247631 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 19:26:14.247650 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 19:26:14.247669 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 19:26:14.247687 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 19:26:14.247704 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 19:26:14.247721 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 19:26:14.247738 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 19:26:14.247754 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 19:26:14.247781 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 19:26:14.247800 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 19:26:14.247816 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 19:26:14.247833 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 19:26:14.247941 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 19:26:14.247963 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 19:26:14.247980 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 19:26:14.247999 systemd[1]: Reached target machines.target - Containers.
Jan 23 19:26:14.248021 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 19:26:14.248037 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 19:26:14.248055 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 19:26:14.248073 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 19:26:14.248090 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 19:26:14.248107 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 19:26:14.248126 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 19:26:14.248143 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 19:26:14.248162 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 19:26:14.248184 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 19:26:14.248202 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 19:26:14.248220 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 19:26:14.248236 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 19:26:14.248252 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 19:26:14.248276 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 19:26:14.248296 kernel: ACPI: bus type drm_connector registered
Jan 23 19:26:14.248312 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 19:26:14.248334 kernel: fuse: init (API version 7.41)
Jan 23 19:26:14.248352 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 19:26:14.248528 kernel: loop: module loaded
Jan 23 19:26:14.248553 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 19:26:14.248620 systemd-journald[1194]: Collecting audit messages is disabled.
Jan 23 19:26:14.248666 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 19:26:14.248689 systemd-journald[1194]: Journal started
Jan 23 19:26:14.248722 systemd-journald[1194]: Runtime Journal (/run/log/journal/7b2158579a214969886820ec1a275555) is 6M, max 48.1M, 42.1M free.
Jan 23 19:26:12.550694 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 19:26:12.580325 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 23 19:26:12.582237 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 19:26:12.583141 systemd[1]: systemd-journald.service: Consumed 3.741s CPU time.
Jan 23 19:26:14.312131 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 19:26:14.367611 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 19:26:14.393530 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 19:26:14.393633 systemd[1]: Stopped verity-setup.service.
Jan 23 19:26:14.414356 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 19:26:14.451645 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 19:26:14.478206 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 19:26:14.488206 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 19:26:14.498664 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 19:26:14.513642 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 19:26:14.528027 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 19:26:14.538042 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 19:26:14.550516 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 19:26:14.567301 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 19:26:14.581238 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 19:26:14.581816 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 19:26:14.599659 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 19:26:14.600260 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 19:26:14.616791 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 19:26:14.617754 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 19:26:14.638023 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 19:26:14.638837 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 19:26:14.650299 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 19:26:14.651003 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 19:26:14.661730 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 19:26:14.662198 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 19:26:14.679285 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 19:26:14.692662 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 19:26:14.710979 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 19:26:14.727151 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 19:26:14.744091 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 19:26:14.806293 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 19:26:14.825600 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 19:26:14.864154 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 19:26:14.876163 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 19:26:14.876295 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 19:26:14.901305 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 19:26:14.921120 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 19:26:14.951552 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 19:26:14.977533 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 19:26:15.011519 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 19:26:15.023826 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 19:26:15.033074 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 19:26:15.056233 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 19:26:15.062332 systemd-journald[1194]: Time spent on flushing to /var/log/journal/7b2158579a214969886820ec1a275555 is 40.722ms for 1068 entries.
Jan 23 19:26:15.062332 systemd-journald[1194]: System Journal (/var/log/journal/7b2158579a214969886820ec1a275555) is 8M, max 195.6M, 187.6M free.
Jan 23 19:26:15.195794 systemd-journald[1194]: Received client request to flush runtime journal.
Jan 23 19:26:15.088338 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 19:26:15.124736 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 19:26:15.156352 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 19:26:15.185680 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 19:26:15.208091 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 19:26:15.228337 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 19:26:15.250995 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 19:26:15.265516 kernel: loop0: detected capacity change from 0 to 110984
Jan 23 19:26:15.272130 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 19:26:15.289190 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 19:26:15.315701 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 19:26:15.376595 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 19:26:15.384836 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 19:26:15.418775 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 19:26:15.433295 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 19:26:15.435230 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 19:26:15.499586 kernel: loop1: detected capacity change from 0 to 128560
Jan 23 19:26:15.517152 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Jan 23 19:26:15.517178 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Jan 23 19:26:15.538198 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 19:26:15.621249 kernel: loop2: detected capacity change from 0 to 229808
Jan 23 19:26:15.772670 kernel: loop3: detected capacity change from 0 to 110984
Jan 23 19:26:15.833681 kernel: loop4: detected capacity change from 0 to 128560
Jan 23 19:26:15.917038 kernel: loop5: detected capacity change from 0 to 229808
Jan 23 19:26:15.989111 (sd-merge)[1252]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 23 19:26:15.991996 (sd-merge)[1252]: Merged extensions into '/usr'.
Jan 23 19:26:16.023846 systemd[1]: Reload requested from client PID 1229 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 19:26:16.023864 systemd[1]: Reloading...
Jan 23 19:26:16.165645 zram_generator::config[1274]: No configuration found.
Jan 23 19:26:16.380987 ldconfig[1224]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 19:26:16.692134 systemd[1]: Reloading finished in 665 ms.
Jan 23 19:26:16.746667 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 19:26:16.767179 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 19:26:16.783965 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 19:26:16.848252 systemd[1]: Starting ensure-sysext.service...
Jan 23 19:26:16.864331 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 19:26:16.890752 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 19:26:16.930784 systemd[1]: Reload requested from client PID 1316 ('systemctl') (unit ensure-sysext.service)...
Jan 23 19:26:16.931036 systemd[1]: Reloading...
Jan 23 19:26:16.945663 systemd-tmpfiles[1317]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 19:26:16.946103 systemd-tmpfiles[1317]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 19:26:16.946676 systemd-tmpfiles[1317]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 19:26:16.947328 systemd-tmpfiles[1317]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 19:26:16.949625 systemd-tmpfiles[1317]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 19:26:16.950190 systemd-tmpfiles[1317]: ACLs are not supported, ignoring.
Jan 23 19:26:16.950315 systemd-tmpfiles[1317]: ACLs are not supported, ignoring.
Jan 23 19:26:16.962607 systemd-tmpfiles[1317]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 19:26:16.962623 systemd-tmpfiles[1317]: Skipping /boot
Jan 23 19:26:16.982105 systemd-udevd[1318]: Using default interface naming scheme 'v255'.
Jan 23 19:26:16.991705 systemd-tmpfiles[1317]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 19:26:16.991790 systemd-tmpfiles[1317]: Skipping /boot
Jan 23 19:26:17.087603 zram_generator::config[1345]: No configuration found.
Jan 23 19:26:17.417541 kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 19:26:17.476541 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 23 19:26:17.502527 kernel: ACPI: button: Power Button [PWRF]
Jan 23 19:26:17.597232 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 23 19:26:17.597666 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 23 19:26:17.623982 systemd[1]: Reloading finished in 692 ms.
Jan 23 19:26:17.675051 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 19:26:17.734519 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 23 19:26:17.735036 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 23 19:26:17.739530 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 19:26:17.740710 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 23 19:26:17.992707 systemd[1]: Finished ensure-sysext.service.
Jan 23 19:26:18.022639 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 19:26:18.029191 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 19:26:18.148028 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 19:26:18.174134 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 19:26:18.181577 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 19:26:18.206109 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 19:26:18.246645 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 19:26:18.282863 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 19:26:18.306211 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 19:26:18.311822 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 19:26:18.326484 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 19:26:18.358805 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 19:26:18.392115 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 19:26:18.413071 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 19:26:18.449873 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 23 19:26:18.476991 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 19:26:18.519547 augenrules[1469]: No rules
Jan 23 19:26:18.529623 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:26:18.551749 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 19:26:18.570173 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 19:26:18.570760 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 19:26:18.589701 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 19:26:18.596251 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 19:26:18.602053 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 19:26:18.658218 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 19:26:18.661812 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 19:26:18.676354 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 19:26:18.677106 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 19:26:18.716225 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 19:26:18.716864 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 19:26:18.718062 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 19:26:18.737202 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 19:26:18.776081 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 19:26:18.779857 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 19:26:18.780110 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 19:26:18.788347 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 19:26:18.948658 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 19:26:18.948892 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 19:26:18.952660 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 19:26:19.258081 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:26:19.382582 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 19:26:20.014101 systemd-networkd[1458]: lo: Link UP
Jan 23 19:26:20.014193 systemd-networkd[1458]: lo: Gained carrier
Jan 23 19:26:20.023492 systemd-networkd[1458]: Enumeration completed
Jan 23 19:26:20.023791 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 19:26:20.026649 systemd-networkd[1458]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 19:26:20.026734 systemd-networkd[1458]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 19:26:20.035718 systemd-networkd[1458]: eth0: Link UP
Jan 23 19:26:20.036076 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 23 19:26:20.036810 systemd-networkd[1458]: eth0: Gained carrier
Jan 23 19:26:20.036854 systemd-networkd[1458]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 19:26:20.047312 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 19:26:20.060256 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 23 19:26:20.080870 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 19:26:20.096872 systemd-resolved[1463]: Positive Trust Anchors:
Jan 23 19:26:20.097242 systemd-resolved[1463]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 19:26:20.097287 systemd-resolved[1463]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 19:26:20.125697 systemd-resolved[1463]: Defaulting to hostname 'linux'.
Jan 23 19:26:20.132865 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 19:26:20.146850 systemd[1]: Reached target network.target - Network.
Jan 23 19:26:20.160153 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 19:26:20.172270 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 19:26:20.184257 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 19:26:20.195220 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 19:26:20.206003 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 23 19:26:20.217304 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 19:26:20.234706 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 19:26:20.256857 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 19:26:20.277255 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 19:26:20.277727 systemd[1]: Reached target paths.target - Path Units.
Jan 23 19:26:20.304196 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 19:26:20.327779 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 19:26:20.353653 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 19:26:20.363296 systemd-networkd[1458]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 23 19:26:20.386173 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection.
Jan 23 19:26:21.712542 systemd-resolved[1463]: Clock change detected. Flushing caches.
Jan 23 19:26:21.712708 systemd-timesyncd[1465]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 23 19:26:21.713213 systemd-timesyncd[1465]: Initial clock synchronization to Fri 2026-01-23 19:26:21.712311 UTC.
Jan 23 19:26:21.714161 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 23 19:26:21.736785 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 23 19:26:21.763363 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 23 19:26:22.001759 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 19:26:22.032585 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 23 19:26:22.066178 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 23 19:26:22.095486 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 19:26:22.307707 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 19:26:22.323280 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 19:26:22.341379 systemd[1]: Reached target basic.target - Basic System. Jan 23 19:26:22.366216 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 19:26:22.370285 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 19:26:22.389340 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 19:26:22.465112 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 19:26:22.502074 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 19:26:22.543234 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 19:26:22.571232 systemd-networkd[1458]: eth0: Gained IPv6LL Jan 23 19:26:22.583363 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 19:26:22.599725 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 19:26:22.622322 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 19:26:22.628192 jq[1511]: false Jan 23 19:26:22.640386 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 19:26:22.657568 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 19:26:22.674189 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 19:26:22.698034 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 19:26:22.789287 extend-filesystems[1512]: Found /dev/vda6 Jan 23 19:26:22.800771 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 19:26:22.812264 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Refreshing passwd entry cache Jan 23 19:26:22.813079 oslogin_cache_refresh[1513]: Refreshing passwd entry cache Jan 23 19:26:22.819284 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 19:26:22.820545 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 19:26:22.827729 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 19:26:22.832367 extend-filesystems[1512]: Found /dev/vda9 Jan 23 19:26:22.852334 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 19:26:22.873308 extend-filesystems[1512]: Checking size of /dev/vda9 Jan 23 19:26:22.893274 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Failure getting users, quitting Jan 23 19:26:22.893274 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 19:26:22.893274 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Refreshing group entry cache Jan 23 19:26:22.876128 oslogin_cache_refresh[1513]: Failure getting users, quitting Jan 23 19:26:22.876155 oslogin_cache_refresh[1513]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 23 19:26:22.876230 oslogin_cache_refresh[1513]: Refreshing group entry cache Jan 23 19:26:22.902134 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 19:26:22.906106 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Failure getting groups, quitting Jan 23 19:26:22.906106 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 19:26:22.901818 oslogin_cache_refresh[1513]: Failure getting groups, quitting Jan 23 19:26:22.901836 oslogin_cache_refresh[1513]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 19:26:22.945522 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 19:26:22.971269 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 19:26:22.987230 jq[1533]: true Jan 23 19:26:22.975329 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 19:26:22.976124 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 19:26:22.976558 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 19:26:22.997062 update_engine[1531]: I20260123 19:26:22.996285 1531 main.cc:92] Flatcar Update Engine starting Jan 23 19:26:23.010370 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 19:26:23.011540 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 19:26:23.015496 extend-filesystems[1512]: Resized partition /dev/vda9 Jan 23 19:26:23.030225 extend-filesystems[1539]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 19:26:23.057785 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 19:26:23.059791 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 19:26:23.088037 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 23 19:26:23.217685 (ntainerd)[1545]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 19:26:23.219715 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 19:26:23.248700 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 23 19:26:23.269134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:26:23.299552 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 19:26:23.332137 tar[1541]: linux-amd64/LICENSE Jan 23 19:26:23.332137 tar[1541]: linux-amd64/helm Jan 23 19:26:23.342496 jq[1543]: true Jan 23 19:26:23.359202 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 23 19:26:23.441244 extend-filesystems[1539]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 23 19:26:23.441244 extend-filesystems[1539]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 23 19:26:23.441244 extend-filesystems[1539]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
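The resize2fs output above can be sanity-checked with a little arithmetic: the filesystem uses 4 KiB blocks, so growing from 553472 to 1864699 blocks takes the root partition from roughly 2.1 GiB to 7.1 GiB. A minimal check:

    # Size of /dev/vda9 before and after the online resize logged above.
    BLOCK = 4096  # ext4 block size, the "(4k)" in the extend-filesystems output
    old_blocks, new_blocks = 553472, 1864699
    print(f"before: {old_blocks * BLOCK / 2**30:.2f} GiB")  # ~2.11 GiB
    print(f"after:  {new_blocks * BLOCK / 2**30:.2f} GiB")  # ~7.11 GiB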
Jan 23 19:26:23.570989 kernel: kvm_amd: TSC scaling supported Jan 23 19:26:23.571034 kernel: kvm_amd: Nested Virtualization enabled Jan 23 19:26:23.571054 kernel: kvm_amd: Nested Paging enabled Jan 23 19:26:23.571073 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 23 19:26:23.571091 kernel: kvm_amd: PMU virtualization is disabled Jan 23 19:26:23.571150 update_engine[1531]: I20260123 19:26:23.532061 1531 update_check_scheduler.cc:74] Next update check in 7m9s Jan 23 19:26:23.486254 dbus-daemon[1509]: [system] SELinux support is enabled Jan 23 19:26:23.447144 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 19:26:23.574386 extend-filesystems[1512]: Resized filesystem in /dev/vda9 Jan 23 19:26:23.449070 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 19:26:23.527003 systemd-logind[1526]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 19:26:23.527043 systemd-logind[1526]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 19:26:23.528088 systemd-logind[1526]: New seat seat0. Jan 23 19:26:23.591056 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 19:26:23.608836 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 19:26:23.668254 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 19:26:23.670203 dbus-daemon[1509]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 19:26:23.668292 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 19:26:23.700400 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 19:26:23.700550 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 19:26:23.721126 sshd_keygen[1525]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 19:26:23.729833 systemd[1]: Started update-engine.service - Update Engine. Jan 23 19:26:23.782504 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 19:26:23.810378 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 23 19:26:23.811075 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 23 19:26:23.833001 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 19:26:23.945830 bash[1590]: Updated "/home/core/.ssh/authorized_keys" Jan 23 19:26:23.947033 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 19:26:23.963774 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 19:26:23.968546 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 19:26:24.027747 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 19:26:24.070649 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 19:26:24.088053 systemd[1]: Started sshd@0-10.0.0.124:22-10.0.0.1:43122.service - OpenSSH per-connection server daemon (10.0.0.1:43122). Jan 23 19:26:24.243043 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 19:26:24.243525 systemd[1]: Finished issuegen.service - Generate /run/issue. 
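update_engine above schedules its next poll as a Go-style duration string ("7m9s"). A small parser for that notation, offered only as an illustration; it handles h/m/s components and ignores sub-second units such as "ms".

    import re

    def go_duration_to_seconds(s: str) -> float:
        # "7m9s" -> 429.0; hour/minute/second components only.
        return sum(float(value) * {"h": 3600.0, "m": 60.0, "s": 1.0}[unit]
                   for value, unit in re.findall(r"([\d.]+)([hms])", s))

    print(go_duration_to_seconds("7m9s"))  # 429.0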
Jan 23 19:26:24.260050 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 19:26:24.376283 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 19:26:24.384654 locksmithd[1594]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 19:26:24.393634 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 19:26:24.410678 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 19:26:24.448258 containerd[1545]: time="2026-01-23T19:26:24Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 19:26:24.448258 containerd[1545]: time="2026-01-23T19:26:24.438727547Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 19:26:24.465771 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 19:26:24.501271 containerd[1545]: time="2026-01-23T19:26:24.500193235Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.032µs" Jan 23 19:26:24.501271 containerd[1545]: time="2026-01-23T19:26:24.500320291Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 19:26:24.501271 containerd[1545]: time="2026-01-23T19:26:24.500345509Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 19:26:24.501271 containerd[1545]: time="2026-01-23T19:26:24.500696344Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 19:26:24.501271 containerd[1545]: time="2026-01-23T19:26:24.500722583Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 19:26:24.501271 containerd[1545]: time="2026-01-23T19:26:24.500791271Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 19:26:24.501271 containerd[1545]: time="2026-01-23T19:26:24.501059892Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 19:26:24.501271 containerd[1545]: time="2026-01-23T19:26:24.501080491Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 19:26:24.502360 containerd[1545]: time="2026-01-23T19:26:24.501639063Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 19:26:24.502360 containerd[1545]: time="2026-01-23T19:26:24.501658750Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 19:26:24.502360 containerd[1545]: time="2026-01-23T19:26:24.501673447Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 19:26:24.502360 containerd[1545]: time="2026-01-23T19:26:24.501684057Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 19:26:24.502360 containerd[1545]: 
time="2026-01-23T19:26:24.501800505Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 19:26:24.502360 containerd[1545]: time="2026-01-23T19:26:24.502257929Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 19:26:24.502360 containerd[1545]: time="2026-01-23T19:26:24.502297593Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 19:26:24.502360 containerd[1545]: time="2026-01-23T19:26:24.502312160Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 19:26:24.503135 containerd[1545]: time="2026-01-23T19:26:24.502370478Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 19:26:24.503135 containerd[1545]: time="2026-01-23T19:26:24.502735720Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 19:26:24.503135 containerd[1545]: time="2026-01-23T19:26:24.502828343Z" level=info msg="metadata content store policy set" policy=shared Jan 23 19:26:24.541646 containerd[1545]: time="2026-01-23T19:26:24.541585769Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 19:26:24.545289 containerd[1545]: time="2026-01-23T19:26:24.545257613Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 19:26:24.545402 containerd[1545]: time="2026-01-23T19:26:24.545381364Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 19:26:24.545590 containerd[1545]: time="2026-01-23T19:26:24.545566760Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 19:26:24.545674 containerd[1545]: time="2026-01-23T19:26:24.545653813Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 19:26:24.545751 containerd[1545]: time="2026-01-23T19:26:24.545731017Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 19:26:24.546036 containerd[1545]: time="2026-01-23T19:26:24.546007222Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 19:26:24.546126 containerd[1545]: time="2026-01-23T19:26:24.546106157Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 19:26:24.546216 containerd[1545]: time="2026-01-23T19:26:24.546193220Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 19:26:24.546298 containerd[1545]: time="2026-01-23T19:26:24.546277938Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 19:26:24.546392 containerd[1545]: time="2026-01-23T19:26:24.546371112Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 19:26:24.546587 containerd[1545]: time="2026-01-23T19:26:24.546563561Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 19:26:24.547974 containerd[1545]: time="2026-01-23T19:26:24.547945060Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 19:26:24.548096 containerd[1545]: time="2026-01-23T19:26:24.548070023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 19:26:24.550720 containerd[1545]: time="2026-01-23T19:26:24.550691045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 19:26:24.550824 containerd[1545]: time="2026-01-23T19:26:24.550801972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 19:26:24.551079 containerd[1545]: time="2026-01-23T19:26:24.551055565Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 19:26:24.551160 containerd[1545]: time="2026-01-23T19:26:24.551140283Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 19:26:24.551260 containerd[1545]: time="2026-01-23T19:26:24.551237555Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 19:26:24.551358 containerd[1545]: time="2026-01-23T19:26:24.551336239Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 19:26:24.551549 containerd[1545]: time="2026-01-23T19:26:24.551522036Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 19:26:24.551648 containerd[1545]: time="2026-01-23T19:26:24.551625699Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 19:26:24.551736 containerd[1545]: time="2026-01-23T19:26:24.551715397Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 19:26:24.552052 containerd[1545]: time="2026-01-23T19:26:24.552023713Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 19:26:24.552135 containerd[1545]: time="2026-01-23T19:26:24.552117407Z" level=info msg="Start snapshots syncer" Jan 23 19:26:24.552241 containerd[1545]: time="2026-01-23T19:26:24.552218426Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 19:26:24.557515 containerd[1545]: time="2026-01-23T19:26:24.557356897Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 19:26:24.558654 containerd[1545]: time="2026-01-23T19:26:24.558617560Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 19:26:24.558825 containerd[1545]: time="2026-01-23T19:26:24.558797005Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 19:26:24.559331 containerd[1545]: time="2026-01-23T19:26:24.559303671Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 19:26:24.559531 containerd[1545]: time="2026-01-23T19:26:24.559412053Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 19:26:24.560157 containerd[1545]: time="2026-01-23T19:26:24.560130253Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 19:26:24.560251 containerd[1545]: time="2026-01-23T19:26:24.560229639Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 19:26:24.560332 containerd[1545]: time="2026-01-23T19:26:24.560310550Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 19:26:24.560398 containerd[1545]: time="2026-01-23T19:26:24.560382013Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 19:26:24.560665 containerd[1545]: time="2026-01-23T19:26:24.560639104Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 19:26:24.562046 containerd[1545]: time="2026-01-23T19:26:24.562018608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 19:26:24.562140 containerd[1545]: 
time="2026-01-23T19:26:24.562119707Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 19:26:24.562224 containerd[1545]: time="2026-01-23T19:26:24.562203624Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 19:26:24.562339 containerd[1545]: time="2026-01-23T19:26:24.562318298Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 19:26:24.562525 containerd[1545]: time="2026-01-23T19:26:24.562405481Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 19:26:24.562634 containerd[1545]: time="2026-01-23T19:26:24.562605804Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 19:26:24.562721 containerd[1545]: time="2026-01-23T19:26:24.562694279Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 19:26:24.562795 containerd[1545]: time="2026-01-23T19:26:24.562774679Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 19:26:24.563045 containerd[1545]: time="2026-01-23T19:26:24.563014267Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 19:26:24.563151 containerd[1545]: time="2026-01-23T19:26:24.563124913Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 19:26:24.564131 containerd[1545]: time="2026-01-23T19:26:24.564104822Z" level=info msg="runtime interface created" Jan 23 19:26:24.564209 containerd[1545]: time="2026-01-23T19:26:24.564191363Z" level=info msg="created NRI interface" Jan 23 19:26:24.564411 containerd[1545]: time="2026-01-23T19:26:24.564384845Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 19:26:24.565098 containerd[1545]: time="2026-01-23T19:26:24.565072178Z" level=info msg="Connect containerd service" Jan 23 19:26:24.565225 containerd[1545]: time="2026-01-23T19:26:24.565197622Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 19:26:24.579220 containerd[1545]: time="2026-01-23T19:26:24.576742423Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 19:26:24.641664 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 43122 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:26:24.649768 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:26:24.703347 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 19:26:24.725299 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 19:26:24.798195 systemd-logind[1526]: New session 1 of user core. Jan 23 19:26:24.829387 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 19:26:24.866368 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 23 19:26:24.931004 (systemd)[1638]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 19:26:24.950626 systemd-logind[1526]: New session c1 of user core. Jan 23 19:26:25.062685 containerd[1545]: time="2026-01-23T19:26:25.062635487Z" level=info msg="Start subscribing containerd event" Jan 23 19:26:25.064007 containerd[1545]: time="2026-01-23T19:26:25.063810190Z" level=info msg="Start recovering state" Jan 23 19:26:25.066259 containerd[1545]: time="2026-01-23T19:26:25.066233263Z" level=info msg="Start event monitor" Jan 23 19:26:25.067668 containerd[1545]: time="2026-01-23T19:26:25.067398758Z" level=info msg="Start cni network conf syncer for default" Jan 23 19:26:25.067786 containerd[1545]: time="2026-01-23T19:26:25.067759802Z" level=info msg="Start streaming server" Jan 23 19:26:25.068087 containerd[1545]: time="2026-01-23T19:26:25.068060764Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 19:26:25.070591 containerd[1545]: time="2026-01-23T19:26:25.070563265Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 19:26:25.071360 containerd[1545]: time="2026-01-23T19:26:25.071078599Z" level=info msg="runtime interface starting up..." Jan 23 19:26:25.071360 containerd[1545]: time="2026-01-23T19:26:25.071295063Z" level=info msg="starting plugins..." Jan 23 19:26:25.071360 containerd[1545]: time="2026-01-23T19:26:25.071329778Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 19:26:25.073311 containerd[1545]: time="2026-01-23T19:26:25.073282850Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 19:26:25.100783 containerd[1545]: time="2026-01-23T19:26:25.076202049Z" level=info msg="containerd successfully booted in 0.643043s" Jan 23 19:26:25.076370 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 19:26:25.130093 tar[1541]: linux-amd64/README.md Jan 23 19:26:25.188534 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 19:26:25.228013 kernel: EDAC MC: Ver: 3.0.0 Jan 23 19:26:25.362579 systemd[1638]: Queued start job for default target default.target. Jan 23 19:26:25.381773 systemd[1638]: Created slice app.slice - User Application Slice. Jan 23 19:26:25.381824 systemd[1638]: Reached target paths.target - Paths. Jan 23 19:26:25.382041 systemd[1638]: Reached target timers.target - Timers. Jan 23 19:26:25.387206 systemd[1638]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 19:26:25.432156 systemd[1638]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 19:26:25.432538 systemd[1638]: Reached target sockets.target - Sockets. Jan 23 19:26:25.432696 systemd[1638]: Reached target basic.target - Basic System. Jan 23 19:26:25.432771 systemd[1638]: Reached target default.target - Main User Target. Jan 23 19:26:25.432826 systemd[1638]: Startup finished in 377ms. Jan 23 19:26:25.435193 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 19:26:25.474539 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 19:26:25.589655 systemd[1]: Started sshd@1-10.0.0.124:22-10.0.0.1:48348.service - OpenSSH per-connection server daemon (10.0.0.1:48348). 
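The "Accepted publickey ... SHA256:..." lines in this stretch identify client keys by OpenSSH's SHA256 fingerprint: the unpadded base64 encoding of a SHA-256 digest over the decoded key blob. A sketch of that derivation; no key material from this log is reproduced, the function takes whatever authorized_keys entry you feed it.

    import base64
    import hashlib

    def openssh_sha256_fingerprint(authorized_keys_line: str) -> str:
        # Field 2 of an authorized_keys entry is the base64-encoded key blob;
        # the fingerprint is base64(sha256(blob)) with '=' padding stripped.
        blob = base64.b64decode(authorized_keys_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")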
Jan 23 19:26:25.762664 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 48348 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:26:25.766810 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:26:25.788163 systemd-logind[1526]: New session 2 of user core. Jan 23 19:26:25.806386 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 19:26:25.923250 sshd[1661]: Connection closed by 10.0.0.1 port 48348 Jan 23 19:26:25.922217 sshd-session[1658]: pam_unix(sshd:session): session closed for user core Jan 23 19:26:25.956985 systemd[1]: sshd@1-10.0.0.124:22-10.0.0.1:48348.service: Deactivated successfully. Jan 23 19:26:25.962614 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 19:26:25.971283 systemd-logind[1526]: Session 2 logged out. Waiting for processes to exit. Jan 23 19:26:25.977407 systemd[1]: Started sshd@2-10.0.0.124:22-10.0.0.1:48350.service - OpenSSH per-connection server daemon (10.0.0.1:48350). Jan 23 19:26:25.995560 systemd-logind[1526]: Removed session 2. Jan 23 19:26:26.132744 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 48350 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:26:26.136266 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:26:26.161987 systemd-logind[1526]: New session 3 of user core. Jan 23 19:26:26.180621 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 19:26:26.320198 sshd[1670]: Connection closed by 10.0.0.1 port 48350 Jan 23 19:26:26.321044 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Jan 23 19:26:26.348990 systemd[1]: sshd@2-10.0.0.124:22-10.0.0.1:48350.service: Deactivated successfully. Jan 23 19:26:26.369650 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 19:26:26.383153 systemd-logind[1526]: Session 3 logged out. Waiting for processes to exit. Jan 23 19:26:26.396549 systemd-logind[1526]: Removed session 3. Jan 23 19:26:28.063602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:26:28.087097 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 19:26:28.102719 systemd[1]: Startup finished in 17.252s (kernel) + 46.068s (initrd) + 18.761s (userspace) = 1min 22.081s. Jan 23 19:26:28.121792 (kubelet)[1680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:26:32.084719 kubelet[1680]: E0123 19:26:32.081992 1680 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:26:32.115001 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:26:32.115366 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:26:32.122735 systemd[1]: kubelet.service: Consumed 4.065s CPU time, 272.4M memory peak. Jan 23 19:26:36.391683 systemd[1]: Started sshd@3-10.0.0.124:22-10.0.0.1:59380.service - OpenSSH per-connection server daemon (10.0.0.1:59380). 
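The "Startup finished" accounting above adds up as logged; checking it is one line of arithmetic:

    # kernel + initrd + userspace from the "Startup finished" line above.
    kernel, initrd, userspace = 17.252, 46.068, 18.761
    total = kernel + initrd + userspace
    print(f"{total:.3f}s = {int(total // 60)}min {total % 60:.3f}s")  # 82.081s = 1min 22.081s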
Jan 23 19:26:36.680241 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 59380 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:26:36.691429 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:26:36.733799 systemd-logind[1526]: New session 4 of user core. Jan 23 19:26:36.754152 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 19:26:36.866645 sshd[1697]: Connection closed by 10.0.0.1 port 59380 Jan 23 19:26:36.867050 sshd-session[1694]: pam_unix(sshd:session): session closed for user core Jan 23 19:26:36.891108 systemd[1]: sshd@3-10.0.0.124:22-10.0.0.1:59380.service: Deactivated successfully. Jan 23 19:26:36.898440 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 19:26:36.911337 systemd-logind[1526]: Session 4 logged out. Waiting for processes to exit. Jan 23 19:26:36.917136 systemd[1]: Started sshd@4-10.0.0.124:22-10.0.0.1:59396.service - OpenSSH per-connection server daemon (10.0.0.1:59396). Jan 23 19:26:36.921125 systemd-logind[1526]: Removed session 4. Jan 23 19:26:37.287421 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 59396 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:26:37.294783 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:26:37.357413 systemd-logind[1526]: New session 5 of user core. Jan 23 19:26:37.370260 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 19:26:37.504033 sshd[1706]: Connection closed by 10.0.0.1 port 59396 Jan 23 19:26:37.507501 sshd-session[1703]: pam_unix(sshd:session): session closed for user core Jan 23 19:26:37.562730 systemd[1]: sshd@4-10.0.0.124:22-10.0.0.1:59396.service: Deactivated successfully. Jan 23 19:26:37.569382 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 19:26:37.577006 systemd-logind[1526]: Session 5 logged out. Waiting for processes to exit. Jan 23 19:26:37.594298 systemd[1]: Started sshd@5-10.0.0.124:22-10.0.0.1:59402.service - OpenSSH per-connection server daemon (10.0.0.1:59402). Jan 23 19:26:37.604283 systemd-logind[1526]: Removed session 5. Jan 23 19:26:37.990793 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 59402 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:26:37.995443 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:26:38.039700 systemd-logind[1526]: New session 6 of user core. Jan 23 19:26:38.062295 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 19:26:38.183303 sshd[1715]: Connection closed by 10.0.0.1 port 59402 Jan 23 19:26:38.183453 sshd-session[1712]: pam_unix(sshd:session): session closed for user core Jan 23 19:26:38.231708 systemd[1]: sshd@5-10.0.0.124:22-10.0.0.1:59402.service: Deactivated successfully. Jan 23 19:26:38.238724 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 19:26:38.247219 systemd-logind[1526]: Session 6 logged out. Waiting for processes to exit. Jan 23 19:26:38.258394 systemd[1]: Started sshd@6-10.0.0.124:22-10.0.0.1:59416.service - OpenSSH per-connection server daemon (10.0.0.1:59416). Jan 23 19:26:38.285691 systemd-logind[1526]: Removed session 6. 
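These short-lived sessions can be timed straight from their journal timestamps. For example, session-4.scope above starts at 19:26:36.754152 and its connection closes at 19:26:36.866645, about 112 ms later. A sketch; the year is assumed, since short-format journal timestamps omit it:

    from datetime import datetime

    FMT = "%b %d %H:%M:%S.%f %Y"
    opened = datetime.strptime("Jan 23 19:26:36.754152 2026", FMT)
    closed = datetime.strptime("Jan 23 19:26:36.866645 2026", FMT)
    print(f"{(closed - opened).total_seconds() * 1000:.1f} ms")  # 112.5 ms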
Jan 23 19:26:38.396367 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 59416 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:26:38.398227 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:26:38.422028 systemd-logind[1526]: New session 7 of user core. Jan 23 19:26:38.432421 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 19:26:38.721682 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 19:26:38.724047 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 19:26:42.245273 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 19:26:42.587562 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:26:43.457103 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 19:26:43.565742 (dockerd)[1747]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 19:26:45.378040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:26:45.487334 (kubelet)[1755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:26:48.145278 kubelet[1755]: E0123 19:26:48.143402 1755 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:26:48.180635 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:26:48.184478 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:26:48.186035 systemd[1]: kubelet.service: Consumed 3.655s CPU time, 111M memory peak. Jan 23 19:26:52.418348 dockerd[1747]: time="2026-01-23T19:26:52.416585011Z" level=info msg="Starting up" Jan 23 19:26:52.444149 dockerd[1747]: time="2026-01-23T19:26:52.441676865Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 19:26:53.755502 dockerd[1747]: time="2026-01-23T19:26:53.736085417Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 19:26:54.684790 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4027506757-merged.mount: Deactivated successfully. Jan 23 19:26:54.737666 systemd[1]: var-lib-docker-metacopy\x2dcheck3971363152-merged.mount: Deactivated successfully. Jan 23 19:26:55.042069 dockerd[1747]: time="2026-01-23T19:26:55.029232710Z" level=info msg="Loading containers: start." Jan 23 19:26:55.195393 kernel: Initializing XFRM netlink socket Jan 23 19:26:58.380416 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 19:26:58.399739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:27:00.492211 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 19:27:00.566309 (kubelet)[1917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:27:01.444821 kubelet[1917]: E0123 19:27:01.444658 1917 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:27:01.461289 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:27:01.462238 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:27:01.465656 systemd[1]: kubelet.service: Consumed 1.695s CPU time, 111.1M memory peak. Jan 23 19:27:02.219265 systemd-networkd[1458]: docker0: Link UP Jan 23 19:27:02.453347 dockerd[1747]: time="2026-01-23T19:27:02.452264661Z" level=info msg="Loading containers: done." Jan 23 19:27:02.833718 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck283905836-merged.mount: Deactivated successfully. Jan 23 19:27:02.857773 dockerd[1747]: time="2026-01-23T19:27:02.851772175Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 19:27:02.861191 dockerd[1747]: time="2026-01-23T19:27:02.859292856Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 19:27:02.861191 dockerd[1747]: time="2026-01-23T19:27:02.859716767Z" level=info msg="Initializing buildkit" Jan 23 19:27:03.448192 dockerd[1747]: time="2026-01-23T19:27:03.447183570Z" level=info msg="Completed buildkit initialization" Jan 23 19:27:03.519125 dockerd[1747]: time="2026-01-23T19:27:03.515787372Z" level=info msg="Daemon has completed initialization" Jan 23 19:27:03.523299 dockerd[1747]: time="2026-01-23T19:27:03.521077286Z" level=info msg="API listen on /run/docker.sock" Jan 23 19:27:03.523402 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 19:27:09.041246 update_engine[1531]: I20260123 19:27:09.038360 1531 update_attempter.cc:509] Updating boot flags... Jan 23 19:27:11.655816 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 19:27:11.718337 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:27:12.180287 containerd[1545]: time="2026-01-23T19:27:12.174728905Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 23 19:27:14.574213 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:27:14.656225 (kubelet)[2016]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:27:16.150704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount379538535.mount: Deactivated successfully. 
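kubelet.service is now cycling: each start fails on the missing config file and systemd schedules another restart, incrementing the counter. The spacing between the first few "Scheduled restart job" entries follows from their timestamps; a sketch:

    from datetime import datetime

    # Timestamps of "restart counter is at 1/2/3" above.
    stamps = ["19:26:42.245273", "19:26:58.380416", "19:27:11.655816"]
    times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
    gaps = [round((b - a).total_seconds(), 1) for a, b in zip(times, times[1:])]
    print(gaps)  # [16.1, 13.3] seconds between successive restart attempts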
Jan 23 19:27:16.444741 kubelet[2016]: E0123 19:27:16.429293 2016 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:27:16.499049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:27:16.502073 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:27:16.505816 systemd[1]: kubelet.service: Consumed 1.976s CPU time, 110.2M memory peak. Jan 23 19:27:26.644620 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 19:27:26.659385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:27:28.984325 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:27:29.181639 (kubelet)[2091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:27:31.417064 kubelet[2091]: E0123 19:27:31.415772 2091 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:27:31.442130 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:27:31.442426 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:27:31.448742 systemd[1]: kubelet.service: Consumed 3.182s CPU time, 114.2M memory peak. Jan 23 19:27:41.789320 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 23 19:27:41.985102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
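Every kubelet attempt in this log dies on the same error: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-managed node that file is written by `kubeadm init` or `kubeadm join`, so the crash loop is expected until the node is bootstrapped. The failing check reduces to a file-existence test; a minimal reproduction:

    from pathlib import Path

    cfg = Path("/var/lib/kubelet/config.yaml")
    if not cfg.is_file():
        # Mirrors the kubelet error above; kubeadm init/join creates this file.
        raise SystemExit(f"open {cfg}: no such file or directory")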
Jan 23 19:27:43.161539 containerd[1545]: time="2026-01-23T19:27:43.155323934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:27:43.176809 containerd[1545]: time="2026-01-23T19:27:43.164369905Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Jan 23 19:27:43.181019 containerd[1545]: time="2026-01-23T19:27:43.180173217Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:27:43.290242 containerd[1545]: time="2026-01-23T19:27:43.286377946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:27:43.389538 containerd[1545]: time="2026-01-23T19:27:43.386432513Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 31.202019375s" Jan 23 19:27:43.389538 containerd[1545]: time="2026-01-23T19:27:43.387109435Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 23 19:27:43.440782 containerd[1545]: time="2026-01-23T19:27:43.423086961Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 23 19:27:45.283129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:27:45.450216 (kubelet)[2108]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:27:48.834366 kubelet[2108]: E0123 19:27:48.833382 2108 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:27:48.854319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:27:48.856377 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:27:48.864566 systemd[1]: kubelet.service: Consumed 2.916s CPU time, 112.3M memory peak. Jan 23 19:27:59.252818 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 23 19:27:59.265557 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:28:01.032200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
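The kube-apiserver pull above reports 30114712 bytes read over a total pull time of about 31.2 s, an effective rate just under 1 MiB/s:

    # Effective pull rate for registry.k8s.io/kube-apiserver:v1.33.7 above.
    bytes_read, seconds = 30_114_712, 31.202019375
    print(f"{bytes_read / seconds / 2**20:.2f} MiB/s")  # 0.92 MiB/s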
Jan 23 19:28:01.060296 (kubelet)[2129]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:28:01.413158 kubelet[2129]: E0123 19:28:01.411797 2129 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:28:01.426349 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:28:01.430235 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:28:01.431728 systemd[1]: kubelet.service: Consumed 1.130s CPU time, 110.7M memory peak. Jan 23 19:28:04.959289 containerd[1545]: time="2026-01-23T19:28:04.958221341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:28:05.173462 containerd[1545]: time="2026-01-23T19:28:05.048393697Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Jan 23 19:28:05.173462 containerd[1545]: time="2026-01-23T19:28:05.165299741Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:28:05.443758 containerd[1545]: time="2026-01-23T19:28:05.443394413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:28:05.448439 containerd[1545]: time="2026-01-23T19:28:05.447118406Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 22.023973127s" Jan 23 19:28:05.448439 containerd[1545]: time="2026-01-23T19:28:05.447371808Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 23 19:28:05.462446 containerd[1545]: time="2026-01-23T19:28:05.462384220Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 23 19:28:11.657651 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 23 19:28:11.689199 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:28:13.511660 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 19:28:13.546165 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:28:14.871624 kubelet[2150]: E0123 19:28:14.870131 2150 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:28:14.889531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:28:14.889810 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:28:14.891383 systemd[1]: kubelet.service: Consumed 1.846s CPU time, 110.6M memory peak. Jan 23 19:28:18.345958 containerd[1545]: time="2026-01-23T19:28:18.345691413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:28:18.356959 containerd[1545]: time="2026-01-23T19:28:18.356561119Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102" Jan 23 19:28:18.361023 containerd[1545]: time="2026-01-23T19:28:18.360766476Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:28:18.379939 containerd[1545]: time="2026-01-23T19:28:18.379615134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:28:18.382284 containerd[1545]: time="2026-01-23T19:28:18.381436091Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 12.918235847s" Jan 23 19:28:18.382284 containerd[1545]: time="2026-01-23T19:28:18.381535414Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 23 19:28:18.383963 containerd[1545]: time="2026-01-23T19:28:18.383931485Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 23 19:28:25.161762 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 23 19:28:25.345723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:28:27.152134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:28:27.263659 (kubelet)[2172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:28:27.265482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount31010637.mount: Deactivated successfully. 
Jan 23 19:28:28.540824 kubelet[2172]: E0123 19:28:28.539993 2172 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:28:28.555110 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:28:28.555458 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:28:28.559344 systemd[1]: kubelet.service: Consumed 1.442s CPU time, 110.5M memory peak. Jan 23 19:28:36.747126 containerd[1545]: time="2026-01-23T19:28:36.746969192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:28:36.755916 containerd[1545]: time="2026-01-23T19:28:36.755057734Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Jan 23 19:28:36.774649 containerd[1545]: time="2026-01-23T19:28:36.772637165Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:28:36.792447 containerd[1545]: time="2026-01-23T19:28:36.792376992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:28:36.794979 containerd[1545]: time="2026-01-23T19:28:36.794815391Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 18.410606367s" Jan 23 19:28:36.795247 containerd[1545]: time="2026-01-23T19:28:36.795109814Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 23 19:28:36.806389 containerd[1545]: time="2026-01-23T19:28:36.806337587Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 23 19:28:38.054419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3841655777.mount: Deactivated successfully. Jan 23 19:28:38.633652 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 23 19:28:38.647445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:28:40.400258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 19:28:40.834330 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:28:41.080004 kubelet[2203]: E0123 19:28:41.079384 2203 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:28:41.085438 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:28:41.086001 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:28:41.087471 systemd[1]: kubelet.service: Consumed 1.431s CPU time, 110.6M memory peak. Jan 23 19:28:49.432164 containerd[1545]: time="2026-01-23T19:28:49.401560902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:28:49.432164 containerd[1545]: time="2026-01-23T19:28:49.426705521Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jan 23 19:28:49.438441 containerd[1545]: time="2026-01-23T19:28:49.434362551Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:28:49.453336 containerd[1545]: time="2026-01-23T19:28:49.453105787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:28:49.458212 containerd[1545]: time="2026-01-23T19:28:49.457347375Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 12.650695578s" Jan 23 19:28:49.458212 containerd[1545]: time="2026-01-23T19:28:49.457818356Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 23 19:28:49.470500 containerd[1545]: time="2026-01-23T19:28:49.470293756Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 19:28:50.518699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3793486267.mount: Deactivated successfully. 
Jan 23 19:28:50.599213 containerd[1545]: time="2026-01-23T19:28:50.593367548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 19:28:50.626229 containerd[1545]: time="2026-01-23T19:28:50.626130101Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 23 19:28:50.642232 containerd[1545]: time="2026-01-23T19:28:50.641688700Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 19:28:50.671342 containerd[1545]: time="2026-01-23T19:28:50.669454994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 19:28:50.673663 containerd[1545]: time="2026-01-23T19:28:50.673261579Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.202920855s" Jan 23 19:28:50.673663 containerd[1545]: time="2026-01-23T19:28:50.673371412Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 23 19:28:50.682741 containerd[1545]: time="2026-01-23T19:28:50.676565912Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 23 19:28:51.129549 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 23 19:28:51.141270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:28:52.075249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1215853144.mount: Deactivated successfully. Jan 23 19:28:52.180343 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:28:52.506694 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:28:54.048676 kubelet[2268]: E0123 19:28:54.046709 2268 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:28:54.064840 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:28:54.065297 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:28:54.249375 systemd[1]: kubelet.service: Consumed 1.575s CPU time, 107.5M memory peak. Jan 23 19:29:04.139735 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 23 19:29:04.158087 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:29:05.282103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
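
The roughly ten-second gap between each "Failed with result 'exit-code'" and the next "Scheduled restart job" is consistent with the stock kubeadm packaging of the kubelet unit (Restart=always with RestartSec=10), and the "referenced but unset environment variable" notices come from optional EnvironmentFile lines. A sketch of the usual drop-in, for orientation only (paths and contents vary by distribution; this host's actual unit files are not shown in the log):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (typical layout, illustrative)
    [Service]
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env  # provides KUBELET_KUBEADM_ARGS
    EnvironmentFile=-/etc/default/kubelet                # provides KUBELET_EXTRA_ARGS; the leading "-"
                                                         # makes the file optional, hence the notices above
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
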
Jan 23 19:29:05.346615 (kubelet)[2333]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:29:06.633325 kubelet[2333]: E0123 19:29:06.632638 2333 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:29:06.651237 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:29:06.651679 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:29:06.652706 systemd[1]: kubelet.service: Consumed 1.264s CPU time, 110.8M memory peak. Jan 23 19:29:16.548826 containerd[1545]: time="2026-01-23T19:29:16.544032643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:29:16.570608 containerd[1545]: time="2026-01-23T19:29:16.567161459Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Jan 23 19:29:16.573648 containerd[1545]: time="2026-01-23T19:29:16.573275955Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:29:16.606054 containerd[1545]: time="2026-01-23T19:29:16.603723065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:29:16.609014 containerd[1545]: time="2026-01-23T19:29:16.608053893Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 25.931344475s" Jan 23 19:29:16.609014 containerd[1545]: time="2026-01-23T19:29:16.608098484Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 23 19:29:16.893379 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 23 19:29:16.921063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:29:18.775331 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:29:18.796425 (kubelet)[2368]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:29:19.351278 kubelet[2368]: E0123 19:29:19.350732 2368 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:29:19.365015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:29:19.365784 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
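
Each completed pull above logs two distinct identifiers: the "image id" (the sha256 of the image configuration blob, the name containerd stores the image under) and the "repo digest" (the sha256 of the registry manifest, which a pull-by-digest would pin). Assuming crictl, the standard CRI debugging CLI, is installed on the host, both can be listed side by side after the fact:

    crictl images --digests    # columns include TAG, DIGEST (repo digest) and IMAGE ID
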
Jan 23 19:29:19.369015 systemd[1]: kubelet.service: Consumed 1.521s CPU time, 108.7M memory peak. Jan 23 19:29:27.912480 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:29:27.913009 systemd[1]: kubelet.service: Consumed 1.521s CPU time, 108.7M memory peak. Jan 23 19:29:27.925694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:29:28.113263 systemd[1]: Reload requested from client PID 2393 ('systemctl') (unit session-7.scope)... Jan 23 19:29:28.114578 systemd[1]: Reloading... Jan 23 19:29:28.416987 zram_generator::config[2436]: No configuration found. Jan 23 19:29:29.213285 systemd[1]: Reloading finished in 1095 ms. Jan 23 19:29:29.442270 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 19:29:29.443283 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 19:29:29.445657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:29:29.446139 systemd[1]: kubelet.service: Consumed 357ms CPU time, 98.3M memory peak. Jan 23 19:29:29.471581 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:29:31.577703 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:29:31.641465 (kubelet)[2485]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 19:29:31.963220 kubelet[2485]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 19:29:31.966588 kubelet[2485]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 19:29:31.966588 kubelet[2485]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 19:29:31.966588 kubelet[2485]: I0123 19:29:31.965124 2485 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 19:29:33.557386 kubelet[2485]: I0123 19:29:33.557318 2485 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 19:29:33.557386 kubelet[2485]: I0123 19:29:33.557421 2485 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 19:29:33.585004 kubelet[2485]: I0123 19:29:33.580209 2485 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 19:29:34.184605 kubelet[2485]: E0123 19:29:34.180420 2485 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 19:29:34.192808 kubelet[2485]: I0123 19:29:34.187806 2485 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 19:29:34.702449 kubelet[2485]: I0123 19:29:34.701687 2485 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 19:29:35.299050 kubelet[2485]: I0123 19:29:35.296628 2485 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 19:29:35.310731 kubelet[2485]: I0123 19:29:35.308159 2485 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 19:29:35.333025 kubelet[2485]: I0123 19:29:35.308316 2485 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 19:29:35.333025 kubelet[2485]: I0123 19:29:35.332173 2485 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 19:29:35.333025 kubelet[2485]: I0123 19:29:35.332411 2485 
container_manager_linux.go:303] "Creating device plugin manager" Jan 23 19:29:35.336969 kubelet[2485]: I0123 19:29:35.335418 2485 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:29:35.352151 kubelet[2485]: I0123 19:29:35.350704 2485 kubelet.go:480] "Attempting to sync node with API server" Jan 23 19:29:35.352151 kubelet[2485]: I0123 19:29:35.350754 2485 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 19:29:35.352151 kubelet[2485]: I0123 19:29:35.350796 2485 kubelet.go:386] "Adding apiserver pod source" Jan 23 19:29:35.352151 kubelet[2485]: I0123 19:29:35.351057 2485 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 19:29:35.394800 kubelet[2485]: E0123 19:29:35.391100 2485 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 19:29:35.446991 kubelet[2485]: E0123 19:29:35.438213 2485 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 19:29:35.446991 kubelet[2485]: I0123 19:29:35.445290 2485 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 19:29:35.446991 kubelet[2485]: I0123 19:29:35.446641 2485 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 19:29:35.455812 kubelet[2485]: W0123 19:29:35.454708 2485 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
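
"Adding static pod path" above is the mechanism the rest of this log turns on: any manifest placed in /etc/kubernetes/manifests is run by the kubelet directly, with no API server involved, which is how the control plane itself gets started on this node. A minimal example of such a manifest (illustrative only; kubeadm writes kube-apiserver.yaml, kube-controller-manager.yaml and kube-scheduler.yaml into this directory):

    # /etc/kubernetes/manifests/static-example.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: static-example
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.10   # the sandbox image pulled earlier in this log
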
Jan 23 19:29:35.480054 kubelet[2485]: I0123 19:29:35.479285 2485 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 19:29:35.480054 kubelet[2485]: I0123 19:29:35.479605 2485 server.go:1289] "Started kubelet" Jan 23 19:29:35.480783 kubelet[2485]: I0123 19:29:35.480736 2485 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 19:29:35.495741 kubelet[2485]: I0123 19:29:35.490597 2485 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 19:29:35.497035 kubelet[2485]: I0123 19:29:35.496231 2485 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 19:29:35.533600 kubelet[2485]: I0123 19:29:35.533281 2485 server.go:317] "Adding debug handlers to kubelet server" Jan 23 19:29:35.543686 kubelet[2485]: I0123 19:29:35.543445 2485 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 19:29:35.546011 kubelet[2485]: I0123 19:29:35.545479 2485 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 19:29:35.550059 kubelet[2485]: I0123 19:29:35.548715 2485 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 19:29:35.551082 kubelet[2485]: E0123 19:29:35.550749 2485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:29:35.554150 kubelet[2485]: I0123 19:29:35.552121 2485 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 19:29:35.554150 kubelet[2485]: I0123 19:29:35.552304 2485 reconciler.go:26] "Reconciler: start to sync state" Jan 23 19:29:35.558042 kubelet[2485]: E0123 19:29:35.557816 2485 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 19:29:35.560233 kubelet[2485]: E0123 19:29:35.559830 2485 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="200ms" Jan 23 19:29:35.567049 kubelet[2485]: I0123 19:29:35.566689 2485 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 19:29:35.568254 kubelet[2485]: E0123 19:29:35.558345 2485 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.124:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.124:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d72e96b496f0f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 19:29:35.479394063 +0000 UTC m=+3.803640694,LastTimestamp:2026-01-23 19:29:35.479394063 +0000 UTC m=+3.803640694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 19:29:35.571453 kubelet[2485]: E0123 19:29:35.571428 2485 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 19:29:35.582150 kubelet[2485]: I0123 19:29:35.581758 2485 factory.go:223] Registration of the containerd container factory successfully Jan 23 19:29:35.582150 kubelet[2485]: I0123 19:29:35.582085 2485 factory.go:223] Registration of the systemd container factory successfully Jan 23 19:29:35.659988 kubelet[2485]: E0123 19:29:35.657665 2485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:29:35.838297 kubelet[2485]: E0123 19:29:35.759636 2485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:29:35.838297 kubelet[2485]: E0123 19:29:35.762114 2485 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="400ms" Jan 23 19:29:36.350148 kubelet[2485]: E0123 19:29:36.345259 2485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:29:36.359625 kubelet[2485]: E0123 19:29:36.354134 2485 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 19:29:36.363269 kubelet[2485]: E0123 19:29:36.362292 2485 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="800ms" Jan 23 19:29:36.447201 kubelet[2485]: E0123 19:29:36.446119 2485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:29:36.452347 kubelet[2485]: E0123 19:29:36.452168 2485 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 19:29:36.463480 kubelet[2485]: I0123 19:29:36.460815 2485 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 19:29:36.463480 kubelet[2485]: I0123 19:29:36.460832 2485 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 19:29:36.463480 kubelet[2485]: I0123 19:29:36.461022 2485 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:29:36.479319 kubelet[2485]: I0123 19:29:36.478030 2485 policy_none.go:49] "None policy: Start" Jan 23 19:29:36.479319 kubelet[2485]: I0123 19:29:36.478228 2485 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 19:29:36.479319 kubelet[2485]: I0123 19:29:36.478404 2485 state_mem.go:35] "Initializing new in-memory state store" Jan 23 19:29:36.702112 kubelet[2485]: I0123 19:29:36.567127 2485 kubelet_network_linux.go:49] 
"Initialized iptables rules." protocol="IPv4" Jan 23 19:29:36.702112 kubelet[2485]: E0123 19:29:36.569268 2485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:29:36.702112 kubelet[2485]: I0123 19:29:36.600250 2485 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 19:29:36.702112 kubelet[2485]: I0123 19:29:36.600372 2485 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 19:29:36.702112 kubelet[2485]: I0123 19:29:36.604347 2485 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 19:29:36.702112 kubelet[2485]: I0123 19:29:36.604747 2485 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 19:29:36.702112 kubelet[2485]: E0123 19:29:36.605302 2485 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 19:29:36.702112 kubelet[2485]: E0123 19:29:36.664216 2485 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 19:29:36.692760 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 19:29:36.703335 kubelet[2485]: E0123 19:29:36.664289 2485 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.124:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.124:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d72e96b496f0f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 19:29:35.479394063 +0000 UTC m=+3.803640694,LastTimestamp:2026-01-23 19:29:35.479394063 +0000 UTC m=+3.803640694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 19:29:36.703335 kubelet[2485]: E0123 19:29:36.671788 2485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:29:36.720087 kubelet[2485]: E0123 19:29:36.719243 2485 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 19:29:36.741695 kubelet[2485]: E0123 19:29:36.741423 2485 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 19:29:36.760152 kubelet[2485]: E0123 19:29:36.759216 2485 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" 
Jan 23 19:29:36.774720 kubelet[2485]: E0123 19:29:36.773594 2485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:29:36.774420 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 19:29:36.822596 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 19:29:36.888597 kubelet[2485]: E0123 19:29:36.876363 2485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:29:36.924775 kubelet[2485]: E0123 19:29:36.922271 2485 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 19:29:36.989241 kubelet[2485]: E0123 19:29:36.988153 2485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:29:37.024286 kubelet[2485]: E0123 19:29:37.023995 2485 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 19:29:37.027107 kubelet[2485]: I0123 19:29:37.024648 2485 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 19:29:37.027107 kubelet[2485]: I0123 19:29:37.024671 2485 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 19:29:37.035190 kubelet[2485]: I0123 19:29:37.027410 2485 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 19:29:37.047754 kubelet[2485]: E0123 19:29:37.046432 2485 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 19:29:37.047754 kubelet[2485]: E0123 19:29:37.046645 2485 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 19:29:37.174759 kubelet[2485]: E0123 19:29:37.173630 2485 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="1.6s" Jan 23 19:29:37.190240 kubelet[2485]: I0123 19:29:37.185314 2485 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:29:37.190240 kubelet[2485]: E0123 19:29:37.186309 2485 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Jan 23 19:29:37.367182 kubelet[2485]: I0123 19:29:37.365388 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 23 19:29:37.397253 kubelet[2485]: I0123 19:29:37.395640 2485 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:29:37.397253 kubelet[2485]: E0123 19:29:37.396199 2485 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Jan 23 19:29:37.525815 systemd[1]: Created slice 
kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Jan 23 19:29:37.614349 kubelet[2485]: I0123 19:29:37.612058 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:29:37.614349 kubelet[2485]: I0123 19:29:37.612780 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:29:37.614349 kubelet[2485]: I0123 19:29:37.612818 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:29:37.614349 kubelet[2485]: I0123 19:29:37.613590 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:29:37.614349 kubelet[2485]: I0123 19:29:37.613623 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f160fcaa67625e2ae9b60ab30643b3e4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f160fcaa67625e2ae9b60ab30643b3e4\") " pod="kube-system/kube-apiserver-localhost" Jan 23 19:29:37.615367 kubelet[2485]: I0123 19:29:37.613646 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:29:37.615367 kubelet[2485]: I0123 19:29:37.613667 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f160fcaa67625e2ae9b60ab30643b3e4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f160fcaa67625e2ae9b60ab30643b3e4\") " pod="kube-system/kube-apiserver-localhost" Jan 23 19:29:37.615367 kubelet[2485]: I0123 19:29:37.613687 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f160fcaa67625e2ae9b60ab30643b3e4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f160fcaa67625e2ae9b60ab30643b3e4\") " pod="kube-system/kube-apiserver-localhost" Jan 23 19:29:37.677161 kubelet[2485]: E0123 19:29:37.676817 2485 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not 
found" node="localhost" Jan 23 19:29:37.681642 containerd[1545]: time="2026-01-23T19:29:37.681451913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 23 19:29:37.687029 systemd[1]: Created slice kubepods-burstable-podf160fcaa67625e2ae9b60ab30643b3e4.slice - libcontainer container kubepods-burstable-podf160fcaa67625e2ae9b60ab30643b3e4.slice. Jan 23 19:29:37.708288 kubelet[2485]: E0123 19:29:37.708028 2485 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:29:37.729823 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 23 19:29:37.746780 kubelet[2485]: E0123 19:29:37.745784 2485 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:29:37.748483 containerd[1545]: time="2026-01-23T19:29:37.747113446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 23 19:29:37.843233 kubelet[2485]: I0123 19:29:37.836772 2485 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:29:37.846450 kubelet[2485]: E0123 19:29:37.846408 2485 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Jan 23 19:29:38.001387 containerd[1545]: time="2026-01-23T19:29:37.994389962Z" level=info msg="connecting to shim 0cd29b31e04fbeb711829e0ece13956beb913e1038d7ae37a75cd952619bf544" address="unix:///run/containerd/s/8cca70074c9412a7e37300b02bbaa64a262ddd6ab62f950f9d696df5a4233df7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:29:38.023235 containerd[1545]: time="2026-01-23T19:29:38.020355077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f160fcaa67625e2ae9b60ab30643b3e4,Namespace:kube-system,Attempt:0,}" Jan 23 19:29:38.165344 kubelet[2485]: E0123 19:29:38.165286 2485 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 19:29:38.380073 containerd[1545]: time="2026-01-23T19:29:38.373426288Z" level=info msg="connecting to shim 42042de4484cffc49f2db0548e4be2c8df44b31facb4ddc4a6c248aa249cea95" address="unix:///run/containerd/s/a41aa170eeb85e8ca21d32af72ec3afd3f38263dd50093bef67e08cc8b0c1d1b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:29:39.173657 kubelet[2485]: E0123 19:29:39.173464 2485 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 19:29:39.179739 kubelet[2485]: E0123 19:29:39.178039 2485 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="3.2s" Jan 23 19:29:39.180148 kubelet[2485]: E0123 19:29:39.178416 2485 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 19:29:39.185176 kubelet[2485]: I0123 19:29:39.183628 2485 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:29:39.185176 kubelet[2485]: E0123 19:29:39.184371 2485 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Jan 23 19:29:39.297226 systemd[1]: Started cri-containerd-0cd29b31e04fbeb711829e0ece13956beb913e1038d7ae37a75cd952619bf544.scope - libcontainer container 0cd29b31e04fbeb711829e0ece13956beb913e1038d7ae37a75cd952619bf544. Jan 23 19:29:39.491322 containerd[1545]: time="2026-01-23T19:29:39.490740197Z" level=info msg="connecting to shim e7246c94ea30a27601a8f025555a43d94a39b754628e9abcedc4fb666fdb6622" address="unix:///run/containerd/s/4977d4e75f401c2ae5dd799b3bbb1e392aad4c9808d9448d9b9245671c547372" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:29:40.022774 kubelet[2485]: E0123 19:29:40.021288 2485 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 19:29:40.340731 systemd[1]: Started cri-containerd-42042de4484cffc49f2db0548e4be2c8df44b31facb4ddc4a6c248aa249cea95.scope - libcontainer container 42042de4484cffc49f2db0548e4be2c8df44b31facb4ddc4a6c248aa249cea95. Jan 23 19:29:40.635604 kubelet[2485]: E0123 19:29:40.635400 2485 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 19:29:40.833809 kubelet[2485]: I0123 19:29:40.831130 2485 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:29:40.846252 kubelet[2485]: E0123 19:29:40.846172 2485 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Jan 23 19:29:40.857476 containerd[1545]: time="2026-01-23T19:29:40.857422347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cd29b31e04fbeb711829e0ece13956beb913e1038d7ae37a75cd952619bf544\"" Jan 23 19:29:40.902358 systemd[1]: Started cri-containerd-e7246c94ea30a27601a8f025555a43d94a39b754628e9abcedc4fb666fdb6622.scope - libcontainer container e7246c94ea30a27601a8f025555a43d94a39b754628e9abcedc4fb666fdb6622. 
Jan 23 19:29:40.914100 containerd[1545]: time="2026-01-23T19:29:40.912710711Z" level=info msg="CreateContainer within sandbox \"0cd29b31e04fbeb711829e0ece13956beb913e1038d7ae37a75cd952619bf544\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 19:29:41.005396 containerd[1545]: time="2026-01-23T19:29:41.000833226Z" level=info msg="Container a42ea994b0cf3047592d463b331700549e20abb7f8843de1a26523e883312cd5: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:29:41.002258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4182166406.mount: Deactivated successfully. Jan 23 19:29:41.085793 containerd[1545]: time="2026-01-23T19:29:41.085074884Z" level=info msg="CreateContainer within sandbox \"0cd29b31e04fbeb711829e0ece13956beb913e1038d7ae37a75cd952619bf544\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a42ea994b0cf3047592d463b331700549e20abb7f8843de1a26523e883312cd5\"" Jan 23 19:29:41.089175 containerd[1545]: time="2026-01-23T19:29:41.088633657Z" level=info msg="StartContainer for \"a42ea994b0cf3047592d463b331700549e20abb7f8843de1a26523e883312cd5\"" Jan 23 19:29:41.096697 containerd[1545]: time="2026-01-23T19:29:41.096243558Z" level=info msg="connecting to shim a42ea994b0cf3047592d463b331700549e20abb7f8843de1a26523e883312cd5" address="unix:///run/containerd/s/8cca70074c9412a7e37300b02bbaa64a262ddd6ab62f950f9d696df5a4233df7" protocol=ttrpc version=3 Jan 23 19:29:41.289817 kubelet[2485]: E0123 19:29:41.288137 2485 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 19:29:41.413175 systemd[1]: Started cri-containerd-a42ea994b0cf3047592d463b331700549e20abb7f8843de1a26523e883312cd5.scope - libcontainer container a42ea994b0cf3047592d463b331700549e20abb7f8843de1a26523e883312cd5. 
Jan 23 19:29:41.492828 containerd[1545]: time="2026-01-23T19:29:41.492772199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"42042de4484cffc49f2db0548e4be2c8df44b31facb4ddc4a6c248aa249cea95\"" Jan 23 19:29:42.227130 containerd[1545]: time="2026-01-23T19:29:42.226717965Z" level=info msg="CreateContainer within sandbox \"42042de4484cffc49f2db0548e4be2c8df44b31facb4ddc4a6c248aa249cea95\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 19:29:42.447147 kubelet[2485]: E0123 19:29:42.445274 2485 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="6.4s" Jan 23 19:29:42.569271 containerd[1545]: time="2026-01-23T19:29:42.563448217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f160fcaa67625e2ae9b60ab30643b3e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7246c94ea30a27601a8f025555a43d94a39b754628e9abcedc4fb666fdb6622\"" Jan 23 19:29:42.749971 containerd[1545]: time="2026-01-23T19:29:42.748089435Z" level=info msg="CreateContainer within sandbox \"e7246c94ea30a27601a8f025555a43d94a39b754628e9abcedc4fb666fdb6622\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 19:29:42.753830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount910489172.mount: Deactivated successfully. Jan 23 19:29:42.769190 containerd[1545]: time="2026-01-23T19:29:42.763187017Z" level=info msg="Container 7b29984a1702ec54f5244c10301fb615e6234fe2e4e8d78d4b59c698197acc70: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:29:42.863272 containerd[1545]: time="2026-01-23T19:29:42.861369933Z" level=info msg="CreateContainer within sandbox \"42042de4484cffc49f2db0548e4be2c8df44b31facb4ddc4a6c248aa249cea95\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7b29984a1702ec54f5244c10301fb615e6234fe2e4e8d78d4b59c698197acc70\"" Jan 23 19:29:42.870021 containerd[1545]: time="2026-01-23T19:29:42.868692657Z" level=info msg="StartContainer for \"7b29984a1702ec54f5244c10301fb615e6234fe2e4e8d78d4b59c698197acc70\"" Jan 23 19:29:42.874383 containerd[1545]: time="2026-01-23T19:29:42.874346644Z" level=info msg="connecting to shim 7b29984a1702ec54f5244c10301fb615e6234fe2e4e8d78d4b59c698197acc70" address="unix:///run/containerd/s/a41aa170eeb85e8ca21d32af72ec3afd3f38263dd50093bef67e08cc8b0c1d1b" protocol=ttrpc version=3 Jan 23 19:29:42.896604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2277421022.mount: Deactivated successfully. 
Jan 23 19:29:42.943048 containerd[1545]: time="2026-01-23T19:29:42.942389068Z" level=info msg="Container 0f87de6b7931d8eae5a8459c2af740ab1b7c83a7652f541d6cbb65b85ffb9b01: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:29:43.184597 containerd[1545]: time="2026-01-23T19:29:43.184448380Z" level=info msg="CreateContainer within sandbox \"e7246c94ea30a27601a8f025555a43d94a39b754628e9abcedc4fb666fdb6622\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0f87de6b7931d8eae5a8459c2af740ab1b7c83a7652f541d6cbb65b85ffb9b01\"" Jan 23 19:29:43.195985 containerd[1545]: time="2026-01-23T19:29:43.193452137Z" level=info msg="StartContainer for \"0f87de6b7931d8eae5a8459c2af740ab1b7c83a7652f541d6cbb65b85ffb9b01\"" Jan 23 19:29:43.213299 containerd[1545]: time="2026-01-23T19:29:43.213240233Z" level=info msg="connecting to shim 0f87de6b7931d8eae5a8459c2af740ab1b7c83a7652f541d6cbb65b85ffb9b01" address="unix:///run/containerd/s/4977d4e75f401c2ae5dd799b3bbb1e392aad4c9808d9448d9b9245671c547372" protocol=ttrpc version=3 Jan 23 19:29:43.314829 systemd[1]: Started cri-containerd-7b29984a1702ec54f5244c10301fb615e6234fe2e4e8d78d4b59c698197acc70.scope - libcontainer container 7b29984a1702ec54f5244c10301fb615e6234fe2e4e8d78d4b59c698197acc70. Jan 23 19:29:43.481175 containerd[1545]: time="2026-01-23T19:29:43.480800525Z" level=info msg="StartContainer for \"a42ea994b0cf3047592d463b331700549e20abb7f8843de1a26523e883312cd5\" returns successfully" Jan 23 19:29:43.636355 systemd[1]: Started cri-containerd-0f87de6b7931d8eae5a8459c2af740ab1b7c83a7652f541d6cbb65b85ffb9b01.scope - libcontainer container 0f87de6b7931d8eae5a8459c2af740ab1b7c83a7652f541d6cbb65b85ffb9b01. Jan 23 19:29:43.801242 kubelet[2485]: E0123 19:29:43.800775 2485 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:29:44.063761 kubelet[2485]: E0123 19:29:44.054753 2485 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 19:29:44.067153 kubelet[2485]: I0123 19:29:44.067124 2485 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:29:44.067739 kubelet[2485]: E0123 19:29:44.067710 2485 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Jan 23 19:29:44.700288 kubelet[2485]: E0123 19:29:44.699815 2485 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 19:29:44.974828 containerd[1545]: time="2026-01-23T19:29:44.965738524Z" level=info msg="StartContainer for \"7b29984a1702ec54f5244c10301fb615e6234fe2e4e8d78d4b59c698197acc70\" returns successfully" Jan 23 19:29:45.049494 kubelet[2485]: E0123 19:29:45.047333 2485 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:29:45.057068 containerd[1545]: 
time="2026-01-23T19:29:45.048200960Z" level=info msg="StartContainer for \"0f87de6b7931d8eae5a8459c2af740ab1b7c83a7652f541d6cbb65b85ffb9b01\" returns successfully" Jan 23 19:29:45.451817 kubelet[2485]: E0123 19:29:45.451341 2485 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 19:29:46.123191 kubelet[2485]: E0123 19:29:46.121335 2485 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:29:46.130089 kubelet[2485]: E0123 19:29:46.128323 2485 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:29:46.142661 kubelet[2485]: E0123 19:29:46.142180 2485 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:29:47.050139 kubelet[2485]: E0123 19:29:47.048146 2485 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 19:29:47.098430 kubelet[2485]: E0123 19:29:47.098380 2485 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:29:47.107360 kubelet[2485]: E0123 19:29:47.100060 2485 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:29:48.110227 kubelet[2485]: E0123 19:29:48.110103 2485 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:29:49.221652 kubelet[2485]: E0123 19:29:49.221294 2485 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:29:50.369364 kubelet[2485]: E0123 19:29:50.366307 2485 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:29:50.379680 kubelet[2485]: E0123 19:29:50.369652 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:50.493832 kubelet[2485]: I0123 19:29:50.493258 2485 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:29:56.667258 kubelet[2485]: E0123 19:29:56.666708 2485 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.124:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188d72e96b496f0f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 19:29:35.479394063 +0000 UTC m=+3.803640694,LastTimestamp:2026-01-23 19:29:35.479394063 +0000 UTC 
m=+3.803640694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 19:29:57.057639 kubelet[2485]: E0123 19:29:57.053616 2485 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 19:29:57.206488 kubelet[2485]: E0123 19:29:57.205652 2485 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 19:29:58.879706 kubelet[2485]: E0123 19:29:58.865157 2485 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Jan 23 19:29:59.187664 kubelet[2485]: E0123 19:29:59.176464 2485 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 19:30:00.218513 kubelet[2485]: E0123 19:30:00.216012 2485 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:30:00.226212 kubelet[2485]: E0123 19:30:00.225188 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:00.559218 kubelet[2485]: E0123 19:30:00.555425 2485 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 23 19:30:03.156495 kubelet[2485]: E0123 19:30:03.156141 2485 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 19:30:05.317135 kubelet[2485]: E0123 19:30:05.315998 2485 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 19:30:05.629034 kubelet[2485]: I0123 19:30:05.627672 2485 apiserver.go:52] "Watching apiserver" Jan 23 19:30:05.757362 kubelet[2485]: I0123 19:30:05.756418 2485 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 19:30:05.914384 kubelet[2485]: E0123 19:30:05.914160 2485 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 23 19:30:05.944831 kubelet[2485]: E0123 19:30:05.943648 2485 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 
19:30:05.944831 kubelet[2485]: E0123 19:30:05.944396 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:06.252276 kubelet[2485]: E0123 19:30:06.251678 2485 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 23 19:30:06.843283 kubelet[2485]: E0123 19:30:06.843086 2485 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 23 19:30:07.062245 kubelet[2485]: E0123 19:30:07.061416 2485 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 19:30:07.494401 kubelet[2485]: E0123 19:30:07.491188 2485 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 23 19:30:07.567153 kubelet[2485]: I0123 19:30:07.566511 2485 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:30:07.629064 kubelet[2485]: I0123 19:30:07.628468 2485 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 19:30:07.629064 kubelet[2485]: E0123 19:30:07.628530 2485 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 23 19:30:07.654380 kubelet[2485]: I0123 19:30:07.652212 2485 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 19:30:08.228358 kubelet[2485]: I0123 19:30:08.228240 2485 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 19:30:08.236214 kubelet[2485]: E0123 19:30:08.236180 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:08.257046 kubelet[2485]: E0123 19:30:08.256647 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:08.279415 kubelet[2485]: I0123 19:30:08.279198 2485 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 19:30:08.327815 kubelet[2485]: E0123 19:30:08.324328 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:13.568082 systemd[1]: Reload requested from client PID 2776 ('systemctl') (unit session-7.scope)... Jan 23 19:30:13.568656 systemd[1]: Reloading... Jan 23 19:30:13.997160 zram_generator::config[2819]: No configuration found. Jan 23 19:30:14.693239 systemd[1]: Reloading finished in 1123 ms. Jan 23 19:30:14.769658 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:30:14.808569 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 19:30:14.810270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:30:14.810609 systemd[1]: kubelet.service: Consumed 10.813s CPU time, 137.8M memory peak. 
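The recurring dns.go:153 "Nameserver limits exceeded" warnings above are not fatal: the kubelet propagates at most three nameservers from the host's /etc/resolv.conf into pod resolv.conf files (the glibc resolver limit), keeping the first three and dropping the rest. Since the applied line is "1.1.1.1 1.0.0.1 8.8.8.8", the host must list at least one more server. A hypothetical resolv.conf that would trigger the warning (the fourth entry is illustrative, not taken from this host):

    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4    # dropped by the kubelet: exceeds the 3-nameserver limit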
Jan 23 19:30:14.827977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:30:15.472246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:30:15.496815 (kubelet)[2863]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 19:30:15.734978 kubelet[2863]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 19:30:15.734978 kubelet[2863]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 19:30:15.734978 kubelet[2863]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 19:30:15.734978 kubelet[2863]: I0123 19:30:15.730279 2863 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 19:30:15.756788 kubelet[2863]: I0123 19:30:15.755118 2863 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 19:30:15.756788 kubelet[2863]: I0123 19:30:15.755218 2863 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 19:30:15.756788 kubelet[2863]: I0123 19:30:15.755600 2863 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 19:30:15.759626 kubelet[2863]: I0123 19:30:15.757824 2863 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 19:30:15.767166 kubelet[2863]: I0123 19:30:15.765769 2863 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 19:30:15.813478 kubelet[2863]: I0123 19:30:15.813379 2863 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 19:30:15.835972 kubelet[2863]: I0123 19:30:15.834954 2863 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 19:30:15.835972 kubelet[2863]: I0123 19:30:15.835348 2863 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 19:30:15.835972 kubelet[2863]: I0123 19:30:15.835380 2863 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 19:30:15.835972 kubelet[2863]: I0123 19:30:15.835672 2863 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 19:30:15.836346 kubelet[2863]: I0123 19:30:15.835685 2863 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 19:30:15.836346 kubelet[2863]: I0123 19:30:15.835756 2863 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:30:15.838942 kubelet[2863]: I0123 19:30:15.837977 2863 kubelet.go:480] "Attempting to sync node with API server" Jan 23 19:30:15.838942 kubelet[2863]: I0123 19:30:15.838164 2863 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 19:30:15.838942 kubelet[2863]: I0123 19:30:15.838210 2863 kubelet.go:386] "Adding apiserver pod source" Jan 23 19:30:15.838942 kubelet[2863]: I0123 19:30:15.838292 2863 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 19:30:15.842784 kubelet[2863]: I0123 19:30:15.842760 2863 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 19:30:15.846812 kubelet[2863]: I0123 19:30:15.846789 2863 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 19:30:15.858736 kubelet[2863]: I0123 19:30:15.858711 2863 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 19:30:15.859132 kubelet[2863]: I0123 19:30:15.859113 2863 server.go:1289] "Started kubelet" Jan 23 19:30:15.867036 kubelet[2863]: I0123 19:30:15.866187 2863 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 19:30:15.869334 kubelet[2863]: I0123 19:30:15.868678 
2863 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 19:30:15.870262 kubelet[2863]: I0123 19:30:15.870094 2863 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 19:30:15.870602 kubelet[2863]: I0123 19:30:15.870389 2863 reconciler.go:26] "Reconciler: start to sync state" Jan 23 19:30:15.877263 kubelet[2863]: I0123 19:30:15.874680 2863 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 19:30:15.878220 kubelet[2863]: I0123 19:30:15.877597 2863 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 19:30:15.880046 kubelet[2863]: I0123 19:30:15.879683 2863 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 19:30:15.889823 kubelet[2863]: I0123 19:30:15.889410 2863 server.go:317] "Adding debug handlers to kubelet server" Jan 23 19:30:15.899199 kubelet[2863]: I0123 19:30:15.898713 2863 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 19:30:15.907682 kubelet[2863]: E0123 19:30:15.905727 2863 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 19:30:15.943005 kubelet[2863]: I0123 19:30:15.941700 2863 factory.go:223] Registration of the containerd container factory successfully Jan 23 19:30:15.943005 kubelet[2863]: I0123 19:30:15.941740 2863 factory.go:223] Registration of the systemd container factory successfully Jan 23 19:30:15.943005 kubelet[2863]: I0123 19:30:15.942016 2863 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 19:30:16.124810 kubelet[2863]: I0123 19:30:16.124005 2863 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 19:30:16.160023 kubelet[2863]: I0123 19:30:16.158282 2863 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 19:30:16.162411 kubelet[2863]: I0123 19:30:16.162376 2863 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 19:30:16.162547 kubelet[2863]: I0123 19:30:16.162441 2863 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
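The deprecation warnings at kubelet startup above point at moving --container-runtime-endpoint and --volume-plugin-dir into the file passed via --config. A minimal KubeletConfiguration sketch that would carry those flags plus the hard-eviction thresholds from the Container Manager NodeConfig above; the socket and plugin-dir paths are assumptions, not values read from this host:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                 # matches the driver reported by the CRI runtime above
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock   # assumed containerd socket
    volumePluginDir: /var/lib/kubelet/volumeplugins                    # assumed flex-volume dir
    evictionHard:                         # mirrors the logged HardEvictionThresholds
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"

--pod-infra-container-image has no config-file equivalent; per the warning above it is slated for removal in 1.35, with sandbox image information coming from the CRI runtime instead.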
Jan 23 19:30:16.162547 kubelet[2863]: I0123 19:30:16.162465 2863 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 19:30:16.163981 kubelet[2863]: E0123 19:30:16.163747 2863 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 19:30:16.264091 kubelet[2863]: E0123 19:30:16.264045 2863 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 19:30:16.468780 kubelet[2863]: E0123 19:30:16.465656 2863 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 19:30:16.469953 kubelet[2863]: I0123 19:30:16.469768 2863 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 19:30:16.469953 kubelet[2863]: I0123 19:30:16.469792 2863 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 19:30:16.469953 kubelet[2863]: I0123 19:30:16.469830 2863 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:30:16.470323 kubelet[2863]: I0123 19:30:16.470304 2863 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 19:30:16.472995 kubelet[2863]: I0123 19:30:16.472192 2863 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 19:30:16.472995 kubelet[2863]: I0123 19:30:16.472236 2863 policy_none.go:49] "None policy: Start" Jan 23 19:30:16.472995 kubelet[2863]: I0123 19:30:16.472251 2863 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 19:30:16.472995 kubelet[2863]: I0123 19:30:16.472271 2863 state_mem.go:35] "Initializing new in-memory state store" Jan 23 19:30:16.475242 kubelet[2863]: I0123 19:30:16.475163 2863 state_mem.go:75] "Updated machine memory state" Jan 23 19:30:16.512282 kubelet[2863]: E0123 19:30:16.512026 2863 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 19:30:16.513379 kubelet[2863]: I0123 19:30:16.513171 2863 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 19:30:16.513379 kubelet[2863]: I0123 19:30:16.513237 2863 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 19:30:16.514600 kubelet[2863]: I0123 19:30:16.513814 2863 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 19:30:16.523067 kubelet[2863]: E0123 19:30:16.522800 2863 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 19:30:16.729139 kubelet[2863]: I0123 19:30:16.726675 2863 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:30:16.795286 kubelet[2863]: I0123 19:30:16.794436 2863 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 23 19:30:16.795286 kubelet[2863]: I0123 19:30:16.794669 2863 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 19:30:16.840094 kubelet[2863]: I0123 19:30:16.840063 2863 apiserver.go:52] "Watching apiserver" Jan 23 19:30:16.875091 kubelet[2863]: I0123 19:30:16.873170 2863 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 19:30:16.880786 kubelet[2863]: I0123 19:30:16.880674 2863 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 19:30:16.941724 kubelet[2863]: I0123 19:30:16.924413 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:30:17.012175 kubelet[2863]: I0123 19:30:16.985631 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:30:17.012175 kubelet[2863]: I0123 19:30:16.986336 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:30:17.012175 kubelet[2863]: I0123 19:30:16.986376 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f160fcaa67625e2ae9b60ab30643b3e4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f160fcaa67625e2ae9b60ab30643b3e4\") " pod="kube-system/kube-apiserver-localhost" Jan 23 19:30:17.012175 kubelet[2863]: I0123 19:30:16.986403 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f160fcaa67625e2ae9b60ab30643b3e4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f160fcaa67625e2ae9b60ab30643b3e4\") " pod="kube-system/kube-apiserver-localhost" Jan 23 19:30:17.012175 kubelet[2863]: I0123 19:30:16.986426 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f160fcaa67625e2ae9b60ab30643b3e4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f160fcaa67625e2ae9b60ab30643b3e4\") " pod="kube-system/kube-apiserver-localhost" Jan 23 19:30:17.044775 kubelet[2863]: I0123 19:30:16.986450 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:30:17.044775 kubelet[2863]: I0123 19:30:16.986471 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:30:17.047344 kubelet[2863]: I0123 19:30:17.046704 2863 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 19:30:17.077249 kubelet[2863]: E0123 19:30:17.075469 2863 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 23 19:30:17.079341 kubelet[2863]: E0123 19:30:17.079219 2863 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 23 19:30:17.096005 kubelet[2863]: I0123 19:30:17.095958 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 23 19:30:17.336556 kubelet[2863]: I0123 19:30:17.294271 2863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=10.248243405 podStartE2EDuration="10.248243405s" podCreationTimestamp="2026-01-23 19:30:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:30:17.247035608 +0000 UTC m=+1.695938819" watchObservedRunningTime="2026-01-23 19:30:17.248243405 +0000 UTC m=+1.697146596" Jan 23 19:30:17.385256 kubelet[2863]: E0123 19:30:17.384241 2863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:17.387261 kubelet[2863]: E0123 19:30:17.387235 2863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:17.456051 kubelet[2863]: I0123 19:30:17.455035 2863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=9.455015492 podStartE2EDuration="9.455015492s" podCreationTimestamp="2026-01-23 19:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:30:17.454473985 +0000 UTC m=+1.903377196" watchObservedRunningTime="2026-01-23 19:30:17.455015492 +0000 UTC m=+1.903918704" Jan 23 19:30:17.485258 kubelet[2863]: E0123 19:30:17.481965 2863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:17.707055 kubelet[2863]: I0123 19:30:17.706186 2863 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=9.702838384 podStartE2EDuration="9.702838384s" podCreationTimestamp="2026-01-23 19:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:30:17.70233398 +0000 UTC m=+2.151237172" watchObservedRunningTime="2026-01-23 19:30:17.702838384 +0000 UTC m=+2.151741575" Jan 23 19:30:18.262095 kubelet[2863]: I0123 19:30:18.261578 2863 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 19:30:18.280956 containerd[1545]: time="2026-01-23T19:30:18.280758294Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 19:30:18.294323 kubelet[2863]: I0123 19:30:18.291138 2863 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 19:30:18.377261 kubelet[2863]: E0123 19:30:18.376757 2863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:18.384077 kubelet[2863]: E0123 19:30:18.384036 2863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:18.390694 kubelet[2863]: E0123 19:30:18.389437 2863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:18.830332 systemd[1]: Created slice kubepods-besteffort-pod1f20bd0e_8618_42db_bb02_180e7fac75d0.slice - libcontainer container kubepods-besteffort-pod1f20bd0e_8618_42db_bb02_180e7fac75d0.slice. 
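At this point the node has been assigned podCIDR 192.168.0.0/24 and the kubelet has pushed it to containerd ("Updating runtime config through cri with podcidr"), but no CNI config exists yet ("No cni config template is specified, wait for other system components to drop the config"); the kube-flannel DaemonSet pod created below is what eventually drops one. Judging by the flannel.1 interface (VXLAN), the 1450 MTU, and the 192.168.0.0/17 route that show up later in this log, flannel's net-conf.json was probably along these lines (inferred, not read from the cluster):

    {
      "Network": "192.168.0.0/17",
      "Backend": {
        "Type": "vxlan"
      }
    }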
Jan 23 19:30:18.894165 kubelet[2863]: I0123 19:30:18.893949 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f20bd0e-8618-42db-bb02-180e7fac75d0-xtables-lock\") pod \"kube-proxy-tlkpx\" (UID: \"1f20bd0e-8618-42db-bb02-180e7fac75d0\") " pod="kube-system/kube-proxy-tlkpx" Jan 23 19:30:18.898232 kubelet[2863]: I0123 19:30:18.898159 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd47z\" (UniqueName: \"kubernetes.io/projected/1f20bd0e-8618-42db-bb02-180e7fac75d0-kube-api-access-gd47z\") pod \"kube-proxy-tlkpx\" (UID: \"1f20bd0e-8618-42db-bb02-180e7fac75d0\") " pod="kube-system/kube-proxy-tlkpx" Jan 23 19:30:18.898406 kubelet[2863]: I0123 19:30:18.898386 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1f20bd0e-8618-42db-bb02-180e7fac75d0-kube-proxy\") pod \"kube-proxy-tlkpx\" (UID: \"1f20bd0e-8618-42db-bb02-180e7fac75d0\") " pod="kube-system/kube-proxy-tlkpx" Jan 23 19:30:18.898564 kubelet[2863]: I0123 19:30:18.898542 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f20bd0e-8618-42db-bb02-180e7fac75d0-lib-modules\") pod \"kube-proxy-tlkpx\" (UID: \"1f20bd0e-8618-42db-bb02-180e7fac75d0\") " pod="kube-system/kube-proxy-tlkpx" Jan 23 19:30:19.187051 kubelet[2863]: E0123 19:30:19.186589 2863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:19.301365 containerd[1545]: time="2026-01-23T19:30:19.300370496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tlkpx,Uid:1f20bd0e-8618-42db-bb02-180e7fac75d0,Namespace:kube-system,Attempt:0,}" Jan 23 19:30:19.408431 kubelet[2863]: E0123 19:30:19.407364 2863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:19.410653 kubelet[2863]: E0123 19:30:19.410413 2863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:19.486426 containerd[1545]: time="2026-01-23T19:30:19.485761985Z" level=info msg="connecting to shim 84b33f17eeb0ab2e95401cef6a3132a0894206aa68d1cdd2296b4d41df9ffde9" address="unix:///run/containerd/s/9a4e5c873b04786411761741b5fdafc41a1ab0fb03c8eb7dc2086a11ca6344f0" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:30:19.732820 systemd[1]: Created slice kubepods-burstable-pod7ef5c4a9_2071_4b3b_bb00_062949f7ad61.slice - libcontainer container kubepods-burstable-pod7ef5c4a9_2071_4b3b_bb00_062949f7ad61.slice. Jan 23 19:30:19.763723 systemd[1]: Started cri-containerd-84b33f17eeb0ab2e95401cef6a3132a0894206aa68d1cdd2296b4d41df9ffde9.scope - libcontainer container 84b33f17eeb0ab2e95401cef6a3132a0894206aa68d1cdd2296b4d41df9ffde9. 
Jan 23 19:30:19.831306 kubelet[2863]: I0123 19:30:19.831179 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ef5c4a9-2071-4b3b-bb00-062949f7ad61-xtables-lock\") pod \"kube-flannel-ds-2qgzk\" (UID: \"7ef5c4a9-2071-4b3b-bb00-062949f7ad61\") " pod="kube-flannel/kube-flannel-ds-2qgzk" Jan 23 19:30:19.831306 kubelet[2863]: I0123 19:30:19.831298 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/7ef5c4a9-2071-4b3b-bb00-062949f7ad61-cni\") pod \"kube-flannel-ds-2qgzk\" (UID: \"7ef5c4a9-2071-4b3b-bb00-062949f7ad61\") " pod="kube-flannel/kube-flannel-ds-2qgzk" Jan 23 19:30:19.831626 kubelet[2863]: I0123 19:30:19.831329 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7ef5c4a9-2071-4b3b-bb00-062949f7ad61-run\") pod \"kube-flannel-ds-2qgzk\" (UID: \"7ef5c4a9-2071-4b3b-bb00-062949f7ad61\") " pod="kube-flannel/kube-flannel-ds-2qgzk" Jan 23 19:30:19.831626 kubelet[2863]: I0123 19:30:19.831371 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/7ef5c4a9-2071-4b3b-bb00-062949f7ad61-flannel-cfg\") pod \"kube-flannel-ds-2qgzk\" (UID: \"7ef5c4a9-2071-4b3b-bb00-062949f7ad61\") " pod="kube-flannel/kube-flannel-ds-2qgzk" Jan 23 19:30:19.831626 kubelet[2863]: I0123 19:30:19.831401 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps8pf\" (UniqueName: \"kubernetes.io/projected/7ef5c4a9-2071-4b3b-bb00-062949f7ad61-kube-api-access-ps8pf\") pod \"kube-flannel-ds-2qgzk\" (UID: \"7ef5c4a9-2071-4b3b-bb00-062949f7ad61\") " pod="kube-flannel/kube-flannel-ds-2qgzk" Jan 23 19:30:19.831626 kubelet[2863]: I0123 19:30:19.831429 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/7ef5c4a9-2071-4b3b-bb00-062949f7ad61-cni-plugin\") pod \"kube-flannel-ds-2qgzk\" (UID: \"7ef5c4a9-2071-4b3b-bb00-062949f7ad61\") " pod="kube-flannel/kube-flannel-ds-2qgzk" Jan 23 19:30:19.946342 containerd[1545]: time="2026-01-23T19:30:19.946205906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tlkpx,Uid:1f20bd0e-8618-42db-bb02-180e7fac75d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"84b33f17eeb0ab2e95401cef6a3132a0894206aa68d1cdd2296b4d41df9ffde9\"" Jan 23 19:30:19.950973 kubelet[2863]: E0123 19:30:19.948362 2863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:20.010960 containerd[1545]: time="2026-01-23T19:30:20.010418007Z" level=info msg="CreateContainer within sandbox \"84b33f17eeb0ab2e95401cef6a3132a0894206aa68d1cdd2296b4d41df9ffde9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 19:30:20.048085 kubelet[2863]: E0123 19:30:20.047612 2863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:20.052680 containerd[1545]: time="2026-01-23T19:30:20.052397100Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-flannel-ds-2qgzk,Uid:7ef5c4a9-2071-4b3b-bb00-062949f7ad61,Namespace:kube-flannel,Attempt:0,}" Jan 23 19:30:20.085182 containerd[1545]: time="2026-01-23T19:30:20.085122471Z" level=info msg="Container 4103e2695069e765b36311de9b430e683b09b6e63e4fb5c4898f602d2129c5e4: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:30:20.132979 containerd[1545]: time="2026-01-23T19:30:20.132768047Z" level=info msg="CreateContainer within sandbox \"84b33f17eeb0ab2e95401cef6a3132a0894206aa68d1cdd2296b4d41df9ffde9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4103e2695069e765b36311de9b430e683b09b6e63e4fb5c4898f602d2129c5e4\"" Jan 23 19:30:20.140949 containerd[1545]: time="2026-01-23T19:30:20.140447127Z" level=info msg="StartContainer for \"4103e2695069e765b36311de9b430e683b09b6e63e4fb5c4898f602d2129c5e4\"" Jan 23 19:30:20.162059 containerd[1545]: time="2026-01-23T19:30:20.161358388Z" level=info msg="connecting to shim 4103e2695069e765b36311de9b430e683b09b6e63e4fb5c4898f602d2129c5e4" address="unix:///run/containerd/s/9a4e5c873b04786411761741b5fdafc41a1ab0fb03c8eb7dc2086a11ca6344f0" protocol=ttrpc version=3 Jan 23 19:30:20.182433 containerd[1545]: time="2026-01-23T19:30:20.182218925Z" level=info msg="connecting to shim 7a7fa38ef554c69d4fa28569b44594920aaa8988a26240ae2c5c3dc526e30606" address="unix:///run/containerd/s/1987d5546b9bad7d5d8b437fb8a127a81e605814bb07626bce6f138a7f6e982a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:30:20.198050 sudo[1725]: pam_unix(sudo:session): session closed for user root Jan 23 19:30:20.219594 sshd[1724]: Connection closed by 10.0.0.1 port 59416 Jan 23 19:30:20.219036 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Jan 23 19:30:20.330402 systemd[1]: sshd@6-10.0.0.124:22-10.0.0.1:59416.service: Deactivated successfully. Jan 23 19:30:20.361701 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 19:30:20.362612 systemd[1]: session-7.scope: Consumed 16.572s CPU time, 222.6M memory peak. Jan 23 19:30:20.368222 systemd-logind[1526]: Session 7 logged out. Waiting for processes to exit. Jan 23 19:30:20.418062 systemd[1]: Started cri-containerd-4103e2695069e765b36311de9b430e683b09b6e63e4fb5c4898f602d2129c5e4.scope - libcontainer container 4103e2695069e765b36311de9b430e683b09b6e63e4fb5c4898f602d2129c5e4. Jan 23 19:30:20.420704 systemd[1]: Started cri-containerd-7a7fa38ef554c69d4fa28569b44594920aaa8988a26240ae2c5c3dc526e30606.scope - libcontainer container 7a7fa38ef554c69d4fa28569b44594920aaa8988a26240ae2c5c3dc526e30606. Jan 23 19:30:20.428318 systemd-logind[1526]: Removed session 7. 
Jan 23 19:30:20.673687 containerd[1545]: time="2026-01-23T19:30:20.673129597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-2qgzk,Uid:7ef5c4a9-2071-4b3b-bb00-062949f7ad61,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"7a7fa38ef554c69d4fa28569b44594920aaa8988a26240ae2c5c3dc526e30606\"" Jan 23 19:30:20.680309 kubelet[2863]: E0123 19:30:20.678081 2863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:20.684786 containerd[1545]: time="2026-01-23T19:30:20.683784437Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Jan 23 19:30:20.907195 containerd[1545]: time="2026-01-23T19:30:20.906951007Z" level=info msg="StartContainer for \"4103e2695069e765b36311de9b430e683b09b6e63e4fb5c4898f602d2129c5e4\" returns successfully" Jan 23 19:30:20.916959 kubelet[2863]: E0123 19:30:20.914169 2863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:21.631155 kubelet[2863]: I0123 19:30:21.630635 2863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tlkpx" podStartSLOduration=3.630606466 podStartE2EDuration="3.630606466s" podCreationTimestamp="2026-01-23 19:30:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:30:21.626316681 +0000 UTC m=+6.075219872" watchObservedRunningTime="2026-01-23 19:30:21.630606466 +0000 UTC m=+6.079509676" Jan 23 19:30:23.223769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4215625148.mount: Deactivated successfully. 
Jan 23 19:30:23.842654 containerd[1545]: time="2026-01-23T19:30:23.842342402Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:30:23.853361 containerd[1545]: time="2026-01-23T19:30:23.851429585Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008" Jan 23 19:30:23.855656 containerd[1545]: time="2026-01-23T19:30:23.854582049Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:30:23.871008 containerd[1545]: time="2026-01-23T19:30:23.868577673Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:30:23.871008 containerd[1545]: time="2026-01-23T19:30:23.868991406Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 3.185006148s" Jan 23 19:30:23.871008 containerd[1545]: time="2026-01-23T19:30:23.869032221Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Jan 23 19:30:23.899268 containerd[1545]: time="2026-01-23T19:30:23.899207963Z" level=info msg="CreateContainer within sandbox \"7a7fa38ef554c69d4fa28569b44594920aaa8988a26240ae2c5c3dc526e30606\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 23 19:30:24.149076 containerd[1545]: time="2026-01-23T19:30:24.134620555Z" level=info msg="Container 24c01787abcf328b6925bbb04f6542571bf08a03984a5f40bce45a8c9da51272: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:30:24.262104 containerd[1545]: time="2026-01-23T19:30:24.261744481Z" level=info msg="CreateContainer within sandbox \"7a7fa38ef554c69d4fa28569b44594920aaa8988a26240ae2c5c3dc526e30606\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"24c01787abcf328b6925bbb04f6542571bf08a03984a5f40bce45a8c9da51272\"" Jan 23 19:30:24.358039 containerd[1545]: time="2026-01-23T19:30:24.350791946Z" level=info msg="StartContainer for \"24c01787abcf328b6925bbb04f6542571bf08a03984a5f40bce45a8c9da51272\"" Jan 23 19:30:24.372365 containerd[1545]: time="2026-01-23T19:30:24.372166490Z" level=info msg="connecting to shim 24c01787abcf328b6925bbb04f6542571bf08a03984a5f40bce45a8c9da51272" address="unix:///run/containerd/s/1987d5546b9bad7d5d8b437fb8a127a81e605814bb07626bce6f138a7f6e982a" protocol=ttrpc version=3 Jan 23 19:30:24.442798 systemd[1]: Started cri-containerd-24c01787abcf328b6925bbb04f6542571bf08a03984a5f40bce45a8c9da51272.scope - libcontainer container 24c01787abcf328b6925bbb04f6542571bf08a03984a5f40bce45a8c9da51272. Jan 23 19:30:24.661620 systemd[1]: cri-containerd-24c01787abcf328b6925bbb04f6542571bf08a03984a5f40bce45a8c9da51272.scope: Deactivated successfully. 
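The install-cni-plugin container (24c01787...) exiting as soon as it starts is expected, which is why its scope is deactivated immediately: in the upstream kube-flannel manifest this init container only copies the flannel CNI binary into the host's plugin directory, roughly (paths per the upstream manifest, not confirmed by this log):

    cp -f /flannel /opt/cni/bin/flannel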
Jan 23 19:30:24.673690 containerd[1545]: time="2026-01-23T19:30:24.673521663Z" level=info msg="StartContainer for \"24c01787abcf328b6925bbb04f6542571bf08a03984a5f40bce45a8c9da51272\" returns successfully" Jan 23 19:30:24.683433 containerd[1545]: time="2026-01-23T19:30:24.683289537Z" level=info msg="received container exit event container_id:\"24c01787abcf328b6925bbb04f6542571bf08a03984a5f40bce45a8c9da51272\" id:\"24c01787abcf328b6925bbb04f6542571bf08a03984a5f40bce45a8c9da51272\" pid:3186 exited_at:{seconds:1769196624 nanos:681699774}" Jan 23 19:30:24.807286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24c01787abcf328b6925bbb04f6542571bf08a03984a5f40bce45a8c9da51272-rootfs.mount: Deactivated successfully. Jan 23 19:30:25.623023 containerd[1545]: time="2026-01-23T19:30:25.622712103Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Jan 23 19:30:30.038804 containerd[1545]: time="2026-01-23T19:30:30.038528236Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:30:30.043141 containerd[1545]: time="2026-01-23T19:30:30.042698336Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574" Jan 23 19:30:30.045658 containerd[1545]: time="2026-01-23T19:30:30.045373479Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:30:30.056800 containerd[1545]: time="2026-01-23T19:30:30.056571197Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:30:30.063037 containerd[1545]: time="2026-01-23T19:30:30.062701424Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 4.439874257s" Jan 23 19:30:30.063037 containerd[1545]: time="2026-01-23T19:30:30.063036383Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Jan 23 19:30:30.084488 containerd[1545]: time="2026-01-23T19:30:30.084306307Z" level=info msg="CreateContainer within sandbox \"7a7fa38ef554c69d4fa28569b44594920aaa8988a26240ae2c5c3dc526e30606\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 19:30:30.135646 containerd[1545]: time="2026-01-23T19:30:30.135251470Z" level=info msg="Container 72a2719e962facae145a747de20bd63ee7683199fb0dabdd76b1eedad176e59c: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:30:30.160107 containerd[1545]: time="2026-01-23T19:30:30.160049808Z" level=info msg="CreateContainer within sandbox \"7a7fa38ef554c69d4fa28569b44594920aaa8988a26240ae2c5c3dc526e30606\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"72a2719e962facae145a747de20bd63ee7683199fb0dabdd76b1eedad176e59c\"" Jan 23 19:30:30.167996 containerd[1545]: time="2026-01-23T19:30:30.166354992Z" level=info msg="StartContainer for \"72a2719e962facae145a747de20bd63ee7683199fb0dabdd76b1eedad176e59c\"" Jan 23 19:30:30.178788 containerd[1545]: 
time="2026-01-23T19:30:30.178738560Z" level=info msg="connecting to shim 72a2719e962facae145a747de20bd63ee7683199fb0dabdd76b1eedad176e59c" address="unix:///run/containerd/s/1987d5546b9bad7d5d8b437fb8a127a81e605814bb07626bce6f138a7f6e982a" protocol=ttrpc version=3 Jan 23 19:30:30.258325 systemd[1]: Started cri-containerd-72a2719e962facae145a747de20bd63ee7683199fb0dabdd76b1eedad176e59c.scope - libcontainer container 72a2719e962facae145a747de20bd63ee7683199fb0dabdd76b1eedad176e59c. Jan 23 19:30:30.378588 systemd[1]: cri-containerd-72a2719e962facae145a747de20bd63ee7683199fb0dabdd76b1eedad176e59c.scope: Deactivated successfully. Jan 23 19:30:30.392374 containerd[1545]: time="2026-01-23T19:30:30.391288275Z" level=info msg="received container exit event container_id:\"72a2719e962facae145a747de20bd63ee7683199fb0dabdd76b1eedad176e59c\" id:\"72a2719e962facae145a747de20bd63ee7683199fb0dabdd76b1eedad176e59c\" pid:3295 exited_at:{seconds:1769196630 nanos:382564351}" Jan 23 19:30:30.397303 containerd[1545]: time="2026-01-23T19:30:30.397143944Z" level=info msg="StartContainer for \"72a2719e962facae145a747de20bd63ee7683199fb0dabdd76b1eedad176e59c\" returns successfully" Jan 23 19:30:30.457016 kubelet[2863]: I0123 19:30:30.456965 2863 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 19:30:30.479622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72a2719e962facae145a747de20bd63ee7683199fb0dabdd76b1eedad176e59c-rootfs.mount: Deactivated successfully. Jan 23 19:30:30.627337 systemd[1]: Created slice kubepods-burstable-podf6361c37_8339_44eb_9665_8d04cea25e9b.slice - libcontainer container kubepods-burstable-podf6361c37_8339_44eb_9665_8d04cea25e9b.slice. Jan 23 19:30:30.647831 kubelet[2863]: I0123 19:30:30.645197 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6361c37-8339-44eb-9665-8d04cea25e9b-config-volume\") pod \"coredns-674b8bbfcf-8km9d\" (UID: \"f6361c37-8339-44eb-9665-8d04cea25e9b\") " pod="kube-system/coredns-674b8bbfcf-8km9d" Jan 23 19:30:30.647831 kubelet[2863]: I0123 19:30:30.645244 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp5k9\" (UniqueName: \"kubernetes.io/projected/f6361c37-8339-44eb-9665-8d04cea25e9b-kube-api-access-cp5k9\") pod \"coredns-674b8bbfcf-8km9d\" (UID: \"f6361c37-8339-44eb-9665-8d04cea25e9b\") " pod="kube-system/coredns-674b8bbfcf-8km9d" Jan 23 19:30:30.647831 kubelet[2863]: I0123 19:30:30.645538 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpq7w\" (UniqueName: \"kubernetes.io/projected/528845ef-8ccd-4f53-9fc1-5306dcc5646a-kube-api-access-mpq7w\") pod \"coredns-674b8bbfcf-6d5sh\" (UID: \"528845ef-8ccd-4f53-9fc1-5306dcc5646a\") " pod="kube-system/coredns-674b8bbfcf-6d5sh" Jan 23 19:30:30.647831 kubelet[2863]: I0123 19:30:30.645575 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/528845ef-8ccd-4f53-9fc1-5306dcc5646a-config-volume\") pod \"coredns-674b8bbfcf-6d5sh\" (UID: \"528845ef-8ccd-4f53-9fc1-5306dcc5646a\") " pod="kube-system/coredns-674b8bbfcf-6d5sh" Jan 23 19:30:30.649793 systemd[1]: Created slice kubepods-burstable-pod528845ef_8ccd_4f53_9fc1_5306dcc5646a.slice - libcontainer container kubepods-burstable-pod528845ef_8ccd_4f53_9fc1_5306dcc5646a.slice. 
Jan 23 19:30:30.681686 containerd[1545]: time="2026-01-23T19:30:30.681386325Z" level=info msg="CreateContainer within sandbox \"7a7fa38ef554c69d4fa28569b44594920aaa8988a26240ae2c5c3dc526e30606\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 23 19:30:30.711144 containerd[1545]: time="2026-01-23T19:30:30.710500417Z" level=info msg="Container d344f252a4d5248c88b4258aef5cb1692d6e07959295f85be0e57d96bc071a80: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:30:30.738824 containerd[1545]: time="2026-01-23T19:30:30.738312393Z" level=info msg="CreateContainer within sandbox \"7a7fa38ef554c69d4fa28569b44594920aaa8988a26240ae2c5c3dc526e30606\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"d344f252a4d5248c88b4258aef5cb1692d6e07959295f85be0e57d96bc071a80\"" Jan 23 19:30:30.742523 containerd[1545]: time="2026-01-23T19:30:30.742400343Z" level=info msg="StartContainer for \"d344f252a4d5248c88b4258aef5cb1692d6e07959295f85be0e57d96bc071a80\"" Jan 23 19:30:30.745889 containerd[1545]: time="2026-01-23T19:30:30.745655724Z" level=info msg="connecting to shim d344f252a4d5248c88b4258aef5cb1692d6e07959295f85be0e57d96bc071a80" address="unix:///run/containerd/s/1987d5546b9bad7d5d8b437fb8a127a81e605814bb07626bce6f138a7f6e982a" protocol=ttrpc version=3 Jan 23 19:30:30.832306 systemd[1]: Started cri-containerd-d344f252a4d5248c88b4258aef5cb1692d6e07959295f85be0e57d96bc071a80.scope - libcontainer container d344f252a4d5248c88b4258aef5cb1692d6e07959295f85be0e57d96bc071a80. Jan 23 19:30:30.953539 containerd[1545]: time="2026-01-23T19:30:30.952752537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8km9d,Uid:f6361c37-8339-44eb-9665-8d04cea25e9b,Namespace:kube-system,Attempt:0,}" Jan 23 19:30:30.960576 containerd[1545]: time="2026-01-23T19:30:30.960377457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6d5sh,Uid:528845ef-8ccd-4f53-9fc1-5306dcc5646a,Namespace:kube-system,Attempt:0,}" Jan 23 19:30:30.972103 containerd[1545]: time="2026-01-23T19:30:30.972071784Z" level=info msg="StartContainer for \"d344f252a4d5248c88b4258aef5cb1692d6e07959295f85be0e57d96bc071a80\" returns successfully" Jan 23 19:30:31.111039 containerd[1545]: time="2026-01-23T19:30:31.110754143Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6d5sh,Uid:528845ef-8ccd-4f53-9fc1-5306dcc5646a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"187160822d45ef6741992a1aeec55a4d1a63acac4b6e9795b69fd08a9f39fb83\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 19:30:31.112722 kubelet[2863]: E0123 19:30:31.112103 2863 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"187160822d45ef6741992a1aeec55a4d1a63acac4b6e9795b69fd08a9f39fb83\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 19:30:31.112722 kubelet[2863]: E0123 19:30:31.112196 2863 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"187160822d45ef6741992a1aeec55a4d1a63acac4b6e9795b69fd08a9f39fb83\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-6d5sh" 
Jan 23 19:30:31.112722 kubelet[2863]: E0123 19:30:31.112282 2863 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"187160822d45ef6741992a1aeec55a4d1a63acac4b6e9795b69fd08a9f39fb83\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-6d5sh" Jan 23 19:30:31.112722 kubelet[2863]: E0123 19:30:31.112333 2863 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6d5sh_kube-system(528845ef-8ccd-4f53-9fc1-5306dcc5646a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6d5sh_kube-system(528845ef-8ccd-4f53-9fc1-5306dcc5646a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"187160822d45ef6741992a1aeec55a4d1a63acac4b6e9795b69fd08a9f39fb83\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-6d5sh" podUID="528845ef-8ccd-4f53-9fc1-5306dcc5646a" Jan 23 19:30:31.120812 containerd[1545]: time="2026-01-23T19:30:31.120627528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8km9d,Uid:f6361c37-8339-44eb-9665-8d04cea25e9b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e2b34f46e23354daee8693baf11d15f15a0939193442f3148b3997262a80b87\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 19:30:31.125613 kubelet[2863]: E0123 19:30:31.124014 2863 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e2b34f46e23354daee8693baf11d15f15a0939193442f3148b3997262a80b87\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 19:30:31.125613 kubelet[2863]: E0123 19:30:31.127085 2863 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e2b34f46e23354daee8693baf11d15f15a0939193442f3148b3997262a80b87\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-8km9d" Jan 23 19:30:31.125613 kubelet[2863]: E0123 19:30:31.127829 2863 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e2b34f46e23354daee8693baf11d15f15a0939193442f3148b3997262a80b87\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-8km9d" Jan 23 19:30:31.125613 kubelet[2863]: E0123 19:30:31.128984 2863 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-8km9d_kube-system(f6361c37-8339-44eb-9665-8d04cea25e9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-8km9d_kube-system(f6361c37-8339-44eb-9665-8d04cea25e9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e2b34f46e23354daee8693baf11d15f15a0939193442f3148b3997262a80b87\\\": plugin type=\\\"flannel\\\" failed (add): 
loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-8km9d" podUID="f6361c37-8339-44eb-9665-8d04cea25e9b" Jan 23 19:30:32.227837 systemd-networkd[1458]: flannel.1: Link UP Jan 23 19:30:32.229642 systemd-networkd[1458]: flannel.1: Gained carrier Jan 23 19:30:33.579782 systemd-networkd[1458]: flannel.1: Gained IPv6LL Jan 23 19:30:44.175796 containerd[1545]: time="2026-01-23T19:30:44.175156996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6d5sh,Uid:528845ef-8ccd-4f53-9fc1-5306dcc5646a,Namespace:kube-system,Attempt:0,}" Jan 23 19:30:44.285832 systemd-networkd[1458]: cni0: Link UP Jan 23 19:30:44.286830 systemd-networkd[1458]: cni0: Gained carrier Jan 23 19:30:44.298098 systemd-networkd[1458]: cni0: Lost carrier Jan 23 19:30:44.356753 systemd-networkd[1458]: vethffe79d46: Link UP Jan 23 19:30:44.380999 kernel: cni0: port 1(vethffe79d46) entered blocking state Jan 23 19:30:44.381151 kernel: cni0: port 1(vethffe79d46) entered disabled state Jan 23 19:30:44.381186 kernel: vethffe79d46: entered allmulticast mode Jan 23 19:30:44.392449 kernel: vethffe79d46: entered promiscuous mode Jan 23 19:30:44.474345 kernel: cni0: port 1(vethffe79d46) entered blocking state Jan 23 19:30:44.474636 kernel: cni0: port 1(vethffe79d46) entered forwarding state Jan 23 19:30:44.480251 systemd-networkd[1458]: vethffe79d46: Gained carrier Jan 23 19:30:44.486800 systemd-networkd[1458]: cni0: Gained carrier Jan 23 19:30:44.498095 containerd[1545]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000129a0), "name":"cbr0", "type":"bridge"} Jan 23 19:30:44.498095 containerd[1545]: delegateAdd: netconf sent to delegate plugin: Jan 23 19:30:44.613297 containerd[1545]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
time="2026-01-23T19:30:44.613017094Z" level=info msg="connecting to shim 3971b06aec59629da80847811efb9e43758e35298077cb21aeb2dcec06a22732" address="unix:///run/containerd/s/ddac4f62ed96d558f8aa960a5a5619b117258c79fbb6ecb49cf1b8cc1fcc9d0d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:30:44.732064 systemd[1]: Started cri-containerd-3971b06aec59629da80847811efb9e43758e35298077cb21aeb2dcec06a22732.scope - libcontainer container 3971b06aec59629da80847811efb9e43758e35298077cb21aeb2dcec06a22732.
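The two RunPodSandbox failures above come down to ordering: the flannel CNI plugin reads /run/flannel/subnet.env, and the kube-flannel container only writes that file once flanneld is up, the point marked by flannel.1 gaining carrier. Consistent with the delegate config above (subnet 192.168.0.0/24, route 192.168.0.0/17, MTU 1450, and ipMasq false in the delegate because flanneld masquerades itself), the file would look roughly like this; values are inferred from this log, not read from the host:

    FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=true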
Jan 23 19:30:44.808679 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 19:30:45.021815 containerd[1545]: time="2026-01-23T19:30:45.017799585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6d5sh,Uid:528845ef-8ccd-4f53-9fc1-5306dcc5646a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3971b06aec59629da80847811efb9e43758e35298077cb21aeb2dcec06a22732\"" Jan 23 19:30:45.047998 containerd[1545]: time="2026-01-23T19:30:45.047597815Z" level=info msg="CreateContainer within sandbox \"3971b06aec59629da80847811efb9e43758e35298077cb21aeb2dcec06a22732\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 19:30:45.105013 containerd[1545]: time="2026-01-23T19:30:45.103736886Z" level=info msg="Container a2de9aceb5febd98ffe3fcde47454606e5e615599dfa69288a9375b4bce9b6f8: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:30:45.130052 containerd[1545]: time="2026-01-23T19:30:45.129778445Z" level=info msg="CreateContainer within sandbox \"3971b06aec59629da80847811efb9e43758e35298077cb21aeb2dcec06a22732\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a2de9aceb5febd98ffe3fcde47454606e5e615599dfa69288a9375b4bce9b6f8\"" Jan 23 19:30:45.146016 containerd[1545]: time="2026-01-23T19:30:45.141758719Z" level=info msg="StartContainer for \"a2de9aceb5febd98ffe3fcde47454606e5e615599dfa69288a9375b4bce9b6f8\"" Jan 23 19:30:45.146016 containerd[1545]: time="2026-01-23T19:30:45.145614209Z" level=info msg="connecting to shim a2de9aceb5febd98ffe3fcde47454606e5e615599dfa69288a9375b4bce9b6f8" address="unix:///run/containerd/s/ddac4f62ed96d558f8aa960a5a5619b117258c79fbb6ecb49cf1b8cc1fcc9d0d" protocol=ttrpc version=3 Jan 23 19:30:45.238702 systemd[1]: Started cri-containerd-a2de9aceb5febd98ffe3fcde47454606e5e615599dfa69288a9375b4bce9b6f8.scope - libcontainer container a2de9aceb5febd98ffe3fcde47454606e5e615599dfa69288a9375b4bce9b6f8. 
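The coredns-674b8bbfcf-6d5sh sandbox just received its address from the host-local IPAM plugin named in the netconf above; host-local persists each allocation as a file per IP under /var/lib/cni/networks/<network name>. On this node the directory would look something like the following (contents illustrative):

    $ ls /var/lib/cni/networks/cbr0/
    192.168.0.2  last_reserved_ip.0  lock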
Jan 23 19:30:45.436137 containerd[1545]: time="2026-01-23T19:30:45.433308618Z" level=info msg="StartContainer for \"a2de9aceb5febd98ffe3fcde47454606e5e615599dfa69288a9375b4bce9b6f8\" returns successfully"
Jan 23 19:30:45.864740 kubelet[2863]: I0123 19:30:45.863681 2863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-2qgzk" podStartSLOduration=17.478793444 podStartE2EDuration="26.863656005s" podCreationTimestamp="2026-01-23 19:30:19 +0000 UTC" firstStartedPulling="2026-01-23 19:30:20.683376662 +0000 UTC m=+5.132279863" lastFinishedPulling="2026-01-23 19:30:30.068239233 +0000 UTC m=+14.517142424" observedRunningTime="2026-01-23 19:30:31.771819032 +0000 UTC m=+16.220722223" watchObservedRunningTime="2026-01-23 19:30:45.863656005 +0000 UTC m=+30.312559196"
Jan 23 19:30:45.916044 kubelet[2863]: I0123 19:30:45.914251 2863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6d5sh" podStartSLOduration=27.914231444 podStartE2EDuration="27.914231444s" podCreationTimestamp="2026-01-23 19:30:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:30:45.870295796 +0000 UTC m=+30.319199037" watchObservedRunningTime="2026-01-23 19:30:45.914231444 +0000 UTC m=+30.363134634"
Jan 23 19:30:46.169489 containerd[1545]: time="2026-01-23T19:30:46.168769678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8km9d,Uid:f6361c37-8339-44eb-9665-8d04cea25e9b,Namespace:kube-system,Attempt:0,}"
Jan 23 19:30:46.187246 systemd-networkd[1458]: cni0: Gained IPv6LL
Jan 23 19:30:46.228507 systemd-networkd[1458]: veth20e16bc7: Link UP
Jan 23 19:30:46.252256 kernel: cni0: port 2(veth20e16bc7) entered blocking state
Jan 23 19:30:46.252464 kernel: cni0: port 2(veth20e16bc7) entered disabled state
Jan 23 19:30:46.261615 kernel: veth20e16bc7: entered allmulticast mode
Jan 23 19:30:46.262290 kernel: veth20e16bc7: entered promiscuous mode
Jan 23 19:30:46.324979 kernel: cni0: port 2(veth20e16bc7) entered blocking state
Jan 23 19:30:46.325092 kernel: cni0: port 2(veth20e16bc7) entered forwarding state
Jan 23 19:30:46.325321 systemd-networkd[1458]: veth20e16bc7: Gained carrier
Jan 23 19:30:46.351678 containerd[1545]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000129a0), "name":"cbr0", "type":"bridge"}
Jan 23 19:30:46.351678 containerd[1545]: delegateAdd: netconf sent to delegate plugin:
Jan 23 19:30:46.507160 systemd-networkd[1458]: vethffe79d46: Gained IPv6LL
Jan 23 19:30:46.544083 containerd[1545]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
Jan 23 19:30:46.544083 containerd[1545]: time="2026-01-23T19:30:46.543624287Z" level=info msg="connecting to shim 8982d397990f09abb46e74283c0f908a30cdd6c837d5b0bfab68f84346ca2c34" address="unix:///run/containerd/s/d859db0975e8a8058e41dafcca973c1a787ac2f30719d759075bda1c3930e713" namespace=k8s.io protocol=ttrpc version=3
Jan 23 19:30:46.721677 systemd[1]: Started cri-containerd-8982d397990f09abb46e74283c0f908a30cdd6c837d5b0bfab68f84346ca2c34.scope - libcontainer container 8982d397990f09abb46e74283c0f908a30cdd6c837d5b0bfab68f84346ca2c34.
Jan 23 19:30:46.821973 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 23 19:30:47.008757 containerd[1545]: time="2026-01-23T19:30:47.008200618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8km9d,Uid:f6361c37-8339-44eb-9665-8d04cea25e9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8982d397990f09abb46e74283c0f908a30cdd6c837d5b0bfab68f84346ca2c34\""
Jan 23 19:30:47.062566 containerd[1545]: time="2026-01-23T19:30:47.061662776Z" level=info msg="CreateContainer within sandbox \"8982d397990f09abb46e74283c0f908a30cdd6c837d5b0bfab68f84346ca2c34\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 23 19:30:47.112816 containerd[1545]: time="2026-01-23T19:30:47.109197617Z" level=info msg="Container eca70b5ba8edc316d7dedc329f78b8f483c33f1518200604b545dd75c12d8b4c: CDI devices from CRI Config.CDIDevices: []"
Jan 23 19:30:47.161048 containerd[1545]: time="2026-01-23T19:30:47.160680541Z" level=info msg="CreateContainer within sandbox \"8982d397990f09abb46e74283c0f908a30cdd6c837d5b0bfab68f84346ca2c34\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eca70b5ba8edc316d7dedc329f78b8f483c33f1518200604b545dd75c12d8b4c\""
Jan 23 19:30:47.171264 containerd[1545]: time="2026-01-23T19:30:47.171214974Z" level=info msg="StartContainer for \"eca70b5ba8edc316d7dedc329f78b8f483c33f1518200604b545dd75c12d8b4c\""
Jan 23 19:30:47.179470 containerd[1545]: time="2026-01-23T19:30:47.179246770Z" level=info msg="connecting to shim eca70b5ba8edc316d7dedc329f78b8f483c33f1518200604b545dd75c12d8b4c" address="unix:///run/containerd/s/d859db0975e8a8058e41dafcca973c1a787ac2f30719d759075bda1c3930e713" protocol=ttrpc version=3
Jan 23 19:30:47.361667 systemd[1]: Started cri-containerd-eca70b5ba8edc316d7dedc329f78b8f483c33f1518200604b545dd75c12d8b4c.scope - libcontainer container eca70b5ba8edc316d7dedc329f78b8f483c33f1518200604b545dd75c12d8b4c.
Jan 23 19:30:47.477198 systemd-networkd[1458]: veth20e16bc7: Gained IPv6LL
Jan 23 19:30:47.589762 containerd[1545]: time="2026-01-23T19:30:47.589675929Z" level=info msg="StartContainer for \"eca70b5ba8edc316d7dedc329f78b8f483c33f1518200604b545dd75c12d8b4c\" returns successfully"
Jan 23 19:30:47.987505 kubelet[2863]: I0123 19:30:47.986158 2863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8km9d" podStartSLOduration=29.986120282999998 podStartE2EDuration="29.986120283s" podCreationTimestamp="2026-01-23 19:30:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:30:47.91374011 +0000 UTC m=+32.362643340" watchObservedRunningTime="2026-01-23 19:30:47.986120283 +0000 UTC m=+32.435023504"
Jan 23 19:31:00.347616 kubelet[2863]: E0123 19:31:00.338743 2863 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.173s"
Jan 23 19:31:53.991488 systemd[1]: Started sshd@7-10.0.0.124:22-10.0.0.1:45802.service - OpenSSH per-connection server daemon (10.0.0.1:45802).
Jan 23 19:31:54.224801 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 45802 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:31:54.239697 sshd-session[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:31:54.262597 systemd-logind[1526]: New session 8 of user core.
Jan 23 19:31:54.294390 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 23 19:31:54.943641 sshd[4037]: Connection closed by 10.0.0.1 port 45802
Jan 23 19:31:54.947439 sshd-session[4034]: pam_unix(sshd:session): session closed for user core
Jan 23 19:31:54.991063 systemd[1]: sshd@7-10.0.0.124:22-10.0.0.1:45802.service: Deactivated successfully.
Jan 23 19:31:55.017551 systemd[1]: session-8.scope: Deactivated successfully.
Jan 23 19:31:55.030463 systemd-logind[1526]: Session 8 logged out. Waiting for processes to exit.
Jan 23 19:31:55.036108 systemd-logind[1526]: Removed session 8.
Jan 23 19:32:00.021249 systemd[1]: Started sshd@8-10.0.0.124:22-10.0.0.1:53194.service - OpenSSH per-connection server daemon (10.0.0.1:53194).
Jan 23 19:32:00.294795 sshd[4072]: Accepted publickey for core from 10.0.0.1 port 53194 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:32:00.300717 sshd-session[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:32:00.339094 systemd-logind[1526]: New session 9 of user core.
Jan 23 19:32:00.367583 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 23 19:32:00.971469 sshd[4075]: Connection closed by 10.0.0.1 port 53194
Jan 23 19:32:00.974680 sshd-session[4072]: pam_unix(sshd:session): session closed for user core
Jan 23 19:32:00.991385 systemd[1]: sshd@8-10.0.0.124:22-10.0.0.1:53194.service: Deactivated successfully.
Jan 23 19:32:01.001321 systemd[1]: session-9.scope: Deactivated successfully.
Jan 23 19:32:01.017745 systemd-logind[1526]: Session 9 logged out. Waiting for processes to exit.
Jan 23 19:32:01.028703 systemd-logind[1526]: Removed session 9.
Jan 23 19:32:06.048791 systemd[1]: Started sshd@9-10.0.0.124:22-10.0.0.1:36048.service - OpenSSH per-connection server daemon (10.0.0.1:36048).
Jan 23 19:32:06.355121 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 36048 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:32:06.377521 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:32:06.462344 systemd-logind[1526]: New session 10 of user core.
Jan 23 19:32:06.490350 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 23 19:32:07.343684 sshd[4115]: Connection closed by 10.0.0.1 port 36048
Jan 23 19:32:07.347771 sshd-session[4110]: pam_unix(sshd:session): session closed for user core
Jan 23 19:32:07.369578 systemd-logind[1526]: Session 10 logged out. Waiting for processes to exit.
Jan 23 19:32:07.373526 systemd[1]: sshd@9-10.0.0.124:22-10.0.0.1:36048.service: Deactivated successfully.
Jan 23 19:32:07.382793 systemd[1]: session-10.scope: Deactivated successfully.
Jan 23 19:32:07.429298 systemd-logind[1526]: Removed session 10.
Jan 23 19:32:12.462408 systemd[1]: Started sshd@10-10.0.0.124:22-10.0.0.1:36064.service - OpenSSH per-connection server daemon (10.0.0.1:36064).
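Everything from here on is routine SSH session churn, spawned via socket activation: each incoming connection gets its own sshd@<instance>-<local addr>:<port>-<remote addr>:<port>.service unit, logind adds a session-N.scope once the login succeeds, and both are deactivated when the connection closes. A small Go sketch of how to unpack the instance name, assuming only the naming pattern visible in the log (it breaks down for bracketed IPv6 addresses, which do not occur here):

```go
// Unpack a per-connection sshd unit name of the form seen above:
//   sshd@<instance>-<local addr:port>-<remote addr:port>.service
package main

import (
	"fmt"
	"strings"
)

func main() {
	unit := "sshd@7-10.0.0.124:22-10.0.0.1:45802.service" // from the log
	inst := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
	parts := strings.SplitN(inst, "-", 3) // instance, local, remote
	fmt.Println("instance:", parts[0])    // 7
	fmt.Println("listener:", parts[1])    // 10.0.0.124:22
	fmt.Println("peer:", parts[2])        // 10.0.0.1:45802
}
```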
Jan 23 19:32:12.818342 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 36064 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:32:12.820740 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:32:12.856362 systemd-logind[1526]: New session 11 of user core.
Jan 23 19:32:12.877581 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 23 19:32:13.783542 sshd[4159]: Connection closed by 10.0.0.1 port 36064
Jan 23 19:32:13.793407 sshd-session[4156]: pam_unix(sshd:session): session closed for user core
Jan 23 19:32:13.818589 systemd[1]: sshd@10-10.0.0.124:22-10.0.0.1:36064.service: Deactivated successfully.
Jan 23 19:32:13.819655 systemd-logind[1526]: Session 11 logged out. Waiting for processes to exit.
Jan 23 19:32:13.832314 systemd[1]: session-11.scope: Deactivated successfully.
Jan 23 19:32:13.843036 systemd-logind[1526]: Removed session 11.
Jan 23 19:32:18.826533 systemd[1]: Started sshd@11-10.0.0.124:22-10.0.0.1:46630.service - OpenSSH per-connection server daemon (10.0.0.1:46630).
Jan 23 19:32:18.985046 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 46630 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:32:18.989624 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:32:19.037326 systemd-logind[1526]: New session 12 of user core.
Jan 23 19:32:19.065262 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 23 19:32:19.470799 sshd[4213]: Connection closed by 10.0.0.1 port 46630
Jan 23 19:32:19.474413 sshd-session[4210]: pam_unix(sshd:session): session closed for user core
Jan 23 19:32:19.495120 systemd[1]: sshd@11-10.0.0.124:22-10.0.0.1:46630.service: Deactivated successfully.
Jan 23 19:32:19.519831 systemd[1]: session-12.scope: Deactivated successfully.
Jan 23 19:32:19.525540 systemd-logind[1526]: Session 12 logged out. Waiting for processes to exit.
Jan 23 19:32:19.536982 systemd-logind[1526]: Removed session 12.
Jan 23 19:32:24.500690 systemd[1]: Started sshd@12-10.0.0.124:22-10.0.0.1:46736.service - OpenSSH per-connection server daemon (10.0.0.1:46736).
Jan 23 19:32:24.613032 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 46736 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:32:24.617391 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:32:24.631368 systemd-logind[1526]: New session 13 of user core.
Jan 23 19:32:24.639656 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 23 19:32:24.935720 sshd[4253]: Connection closed by 10.0.0.1 port 46736
Jan 23 19:32:24.936764 sshd-session[4250]: pam_unix(sshd:session): session closed for user core
Jan 23 19:32:24.952514 systemd[1]: sshd@12-10.0.0.124:22-10.0.0.1:46736.service: Deactivated successfully.
Jan 23 19:32:24.957459 systemd[1]: session-13.scope: Deactivated successfully.
Jan 23 19:32:24.961253 systemd-logind[1526]: Session 13 logged out. Waiting for processes to exit.
Jan 23 19:32:24.968681 systemd[1]: Started sshd@13-10.0.0.124:22-10.0.0.1:46738.service - OpenSSH per-connection server daemon (10.0.0.1:46738).
Jan 23 19:32:24.972452 systemd-logind[1526]: Removed session 13.
Jan 23 19:32:25.083230 sshd[4268]: Accepted publickey for core from 10.0.0.1 port 46738 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:32:25.086791 sshd-session[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:32:25.107750 systemd-logind[1526]: New session 14 of user core.
Jan 23 19:32:25.118032 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 23 19:32:25.542025 sshd[4271]: Connection closed by 10.0.0.1 port 46738
Jan 23 19:32:25.540074 sshd-session[4268]: pam_unix(sshd:session): session closed for user core
Jan 23 19:32:25.581734 systemd[1]: sshd@13-10.0.0.124:22-10.0.0.1:46738.service: Deactivated successfully.
Jan 23 19:32:25.590750 systemd[1]: session-14.scope: Deactivated successfully.
Jan 23 19:32:25.597652 systemd-logind[1526]: Session 14 logged out. Waiting for processes to exit.
Jan 23 19:32:25.606463 systemd[1]: Started sshd@14-10.0.0.124:22-10.0.0.1:46748.service - OpenSSH per-connection server daemon (10.0.0.1:46748).
Jan 23 19:32:25.627530 systemd-logind[1526]: Removed session 14.
Jan 23 19:32:25.765647 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 46748 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:32:25.769579 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:32:25.798464 systemd-logind[1526]: New session 15 of user core.
Jan 23 19:32:25.812796 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 23 19:32:26.071490 sshd[4285]: Connection closed by 10.0.0.1 port 46748
Jan 23 19:32:26.071764 sshd-session[4282]: pam_unix(sshd:session): session closed for user core
Jan 23 19:32:26.083649 systemd[1]: sshd@14-10.0.0.124:22-10.0.0.1:46748.service: Deactivated successfully.
Jan 23 19:32:26.089690 systemd[1]: session-15.scope: Deactivated successfully.
Jan 23 19:32:26.094002 systemd-logind[1526]: Session 15 logged out. Waiting for processes to exit.
Jan 23 19:32:26.099581 systemd-logind[1526]: Removed session 15.
Jan 23 19:32:31.125997 systemd[1]: Started sshd@15-10.0.0.124:22-10.0.0.1:46762.service - OpenSSH per-connection server daemon (10.0.0.1:46762).
Jan 23 19:32:31.279597 sshd[4318]: Accepted publickey for core from 10.0.0.1 port 46762 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:32:31.289400 sshd-session[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:32:31.319809 systemd-logind[1526]: New session 16 of user core.
Jan 23 19:32:31.336466 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 23 19:32:31.672553 sshd[4321]: Connection closed by 10.0.0.1 port 46762
Jan 23 19:32:31.669682 sshd-session[4318]: pam_unix(sshd:session): session closed for user core
Jan 23 19:32:31.692564 systemd[1]: sshd@15-10.0.0.124:22-10.0.0.1:46762.service: Deactivated successfully.
Jan 23 19:32:31.706828 systemd[1]: session-16.scope: Deactivated successfully.
Jan 23 19:32:31.717433 systemd-logind[1526]: Session 16 logged out. Waiting for processes to exit.
Jan 23 19:32:31.728806 systemd-logind[1526]: Removed session 16.
Jan 23 19:32:36.737688 systemd[1]: Started sshd@16-10.0.0.124:22-10.0.0.1:48532.service - OpenSSH per-connection server daemon (10.0.0.1:48532).
Jan 23 19:32:37.156986 sshd[4354]: Accepted publickey for core from 10.0.0.1 port 48532 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:32:37.161674 sshd-session[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:32:37.213071 systemd-logind[1526]: New session 17 of user core.
Jan 23 19:32:37.271424 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 23 19:32:38.198572 sshd[4363]: Connection closed by 10.0.0.1 port 48532
Jan 23 19:32:38.217609 sshd-session[4354]: pam_unix(sshd:session): session closed for user core
Jan 23 19:32:38.260087 systemd[1]: sshd@16-10.0.0.124:22-10.0.0.1:48532.service: Deactivated successfully.
Jan 23 19:32:38.278590 systemd[1]: session-17.scope: Deactivated successfully.
Jan 23 19:32:38.282529 systemd-logind[1526]: Session 17 logged out. Waiting for processes to exit.
Jan 23 19:32:38.298549 systemd-logind[1526]: Removed session 17.
Jan 23 19:32:43.230713 systemd[1]: Started sshd@17-10.0.0.124:22-10.0.0.1:48538.service - OpenSSH per-connection server daemon (10.0.0.1:48538).
Jan 23 19:32:43.396316 sshd[4397]: Accepted publickey for core from 10.0.0.1 port 48538 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:32:43.398385 sshd-session[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:32:43.424383 systemd-logind[1526]: New session 18 of user core.
Jan 23 19:32:43.444791 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 23 19:32:43.900517 sshd[4400]: Connection closed by 10.0.0.1 port 48538
Jan 23 19:32:43.901595 sshd-session[4397]: pam_unix(sshd:session): session closed for user core
Jan 23 19:32:43.952519 systemd[1]: sshd@17-10.0.0.124:22-10.0.0.1:48538.service: Deactivated successfully.
Jan 23 19:32:43.978800 systemd[1]: session-18.scope: Deactivated successfully.
Jan 23 19:32:43.992532 systemd-logind[1526]: Session 18 logged out. Waiting for processes to exit.
Jan 23 19:32:44.019564 systemd[1]: Started sshd@18-10.0.0.124:22-10.0.0.1:48540.service - OpenSSH per-connection server daemon (10.0.0.1:48540).
Jan 23 19:32:44.027068 systemd-logind[1526]: Removed session 18.
Jan 23 19:32:44.345624 sshd[4414]: Accepted publickey for core from 10.0.0.1 port 48540 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:32:44.378120 sshd-session[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:32:44.461731 systemd-logind[1526]: New session 19 of user core.
Jan 23 19:32:44.501597 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 23 19:32:46.401039 sshd[4431]: Connection closed by 10.0.0.1 port 48540
Jan 23 19:32:46.472714 sshd-session[4414]: pam_unix(sshd:session): session closed for user core
Jan 23 19:32:46.748747 systemd[1]: sshd@18-10.0.0.124:22-10.0.0.1:48540.service: Deactivated successfully.
Jan 23 19:32:46.765063 systemd[1]: session-19.scope: Deactivated successfully.
Jan 23 19:32:46.784695 systemd-logind[1526]: Session 19 logged out. Waiting for processes to exit.
Jan 23 19:32:46.799594 systemd[1]: Started sshd@19-10.0.0.124:22-10.0.0.1:42556.service - OpenSSH per-connection server daemon (10.0.0.1:42556).
Jan 23 19:32:46.810783 systemd-logind[1526]: Removed session 19.
Jan 23 19:32:47.179043 sshd[4443]: Accepted publickey for core from 10.0.0.1 port 42556 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:32:47.202607 sshd-session[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:32:47.288746 systemd-logind[1526]: New session 20 of user core.
Jan 23 19:32:47.315347 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 23 19:32:51.010648 sshd[4452]: Connection closed by 10.0.0.1 port 42556
Jan 23 19:32:51.007655 sshd-session[4443]: pam_unix(sshd:session): session closed for user core
Jan 23 19:32:51.083664 systemd[1]: sshd@19-10.0.0.124:22-10.0.0.1:42556.service: Deactivated successfully.
Jan 23 19:32:51.084711 systemd-logind[1526]: Session 20 logged out. Waiting for processes to exit.
Jan 23 19:32:51.118575 systemd[1]: session-20.scope: Deactivated successfully.
Jan 23 19:32:51.125499 systemd[1]: session-20.scope: Consumed 1.387s CPU time, 39.6M memory peak.
Jan 23 19:32:51.163608 systemd-logind[1526]: Removed session 20.
Jan 23 19:32:51.169525 systemd[1]: Started sshd@20-10.0.0.124:22-10.0.0.1:42566.service - OpenSSH per-connection server daemon (10.0.0.1:42566).
Jan 23 19:32:51.622793 sshd[4488]: Accepted publickey for core from 10.0.0.1 port 42566 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:32:51.636766 sshd-session[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:32:51.742507 systemd-logind[1526]: New session 21 of user core.
Jan 23 19:32:51.781782 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 23 19:32:53.744059 sshd[4493]: Connection closed by 10.0.0.1 port 42566
Jan 23 19:32:53.745815 sshd-session[4488]: pam_unix(sshd:session): session closed for user core
Jan 23 19:32:53.782750 systemd[1]: sshd@20-10.0.0.124:22-10.0.0.1:42566.service: Deactivated successfully.
Jan 23 19:32:53.794754 systemd[1]: session-21.scope: Deactivated successfully.
Jan 23 19:32:53.798753 systemd-logind[1526]: Session 21 logged out. Waiting for processes to exit.
Jan 23 19:32:53.811336 systemd[1]: Started sshd@21-10.0.0.124:22-10.0.0.1:42574.service - OpenSSH per-connection server daemon (10.0.0.1:42574).
Jan 23 19:32:53.815054 systemd-logind[1526]: Removed session 21.
Jan 23 19:32:54.004315 sshd[4513]: Accepted publickey for core from 10.0.0.1 port 42574 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:32:54.008263 sshd-session[4513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:32:54.042272 systemd-logind[1526]: New session 22 of user core.
Jan 23 19:32:54.052565 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 23 19:32:54.498772 sshd[4516]: Connection closed by 10.0.0.1 port 42574
Jan 23 19:32:54.502076 sshd-session[4513]: pam_unix(sshd:session): session closed for user core
Jan 23 19:32:54.523666 systemd[1]: sshd@21-10.0.0.124:22-10.0.0.1:42574.service: Deactivated successfully.
Jan 23 19:32:54.531344 systemd[1]: session-22.scope: Deactivated successfully.
Jan 23 19:32:54.537113 systemd-logind[1526]: Session 22 logged out. Waiting for processes to exit.
Jan 23 19:32:54.549805 systemd-logind[1526]: Removed session 22.
Jan 23 19:32:59.568589 systemd[1]: Started sshd@22-10.0.0.124:22-10.0.0.1:39836.service - OpenSSH per-connection server daemon (10.0.0.1:39836).
Jan 23 19:32:59.716047 sshd[4550]: Accepted publickey for core from 10.0.0.1 port 39836 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:32:59.723620 sshd-session[4550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:32:59.752725 systemd-logind[1526]: New session 23 of user core.
Jan 23 19:32:59.776308 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 23 19:33:00.122034 sshd[4553]: Connection closed by 10.0.0.1 port 39836
Jan 23 19:33:00.122826 sshd-session[4550]: pam_unix(sshd:session): session closed for user core
Jan 23 19:33:00.136061 systemd[1]: sshd@22-10.0.0.124:22-10.0.0.1:39836.service: Deactivated successfully.
Jan 23 19:33:00.137101 systemd-logind[1526]: Session 23 logged out. Waiting for processes to exit.
Jan 23 19:33:00.147811 systemd[1]: session-23.scope: Deactivated successfully.
Jan 23 19:33:00.154379 systemd-logind[1526]: Removed session 23.
Jan 23 19:33:05.164448 systemd[1]: Started sshd@23-10.0.0.124:22-10.0.0.1:58484.service - OpenSSH per-connection server daemon (10.0.0.1:58484).
Jan 23 19:33:05.429351 sshd[4586]: Accepted publickey for core from 10.0.0.1 port 58484 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:33:05.432298 sshd-session[4586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:33:05.494397 systemd-logind[1526]: New session 24 of user core.
Jan 23 19:33:05.522135 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 23 19:33:06.142019 sshd[4603]: Connection closed by 10.0.0.1 port 58484
Jan 23 19:33:06.144798 sshd-session[4586]: pam_unix(sshd:session): session closed for user core
Jan 23 19:33:06.186705 systemd[1]: sshd@23-10.0.0.124:22-10.0.0.1:58484.service: Deactivated successfully.
Jan 23 19:33:06.208118 systemd[1]: session-24.scope: Deactivated successfully.
Jan 23 19:33:06.223793 systemd-logind[1526]: Session 24 logged out. Waiting for processes to exit.
Jan 23 19:33:06.239369 systemd-logind[1526]: Removed session 24.
Jan 23 19:33:11.253286 systemd[1]: Started sshd@24-10.0.0.124:22-10.0.0.1:58490.service - OpenSSH per-connection server daemon (10.0.0.1:58490).
Jan 23 19:33:11.518541 sshd[4636]: Accepted publickey for core from 10.0.0.1 port 58490 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:33:11.536324 sshd-session[4636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:33:11.572795 systemd-logind[1526]: New session 25 of user core.
Jan 23 19:33:11.591641 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 23 19:33:12.371059 sshd[4639]: Connection closed by 10.0.0.1 port 58490
Jan 23 19:33:12.373828 sshd-session[4636]: pam_unix(sshd:session): session closed for user core
Jan 23 19:33:12.402821 systemd[1]: sshd@24-10.0.0.124:22-10.0.0.1:58490.service: Deactivated successfully.
Jan 23 19:33:12.433330 systemd[1]: session-25.scope: Deactivated successfully.
Jan 23 19:33:12.447500 systemd-logind[1526]: Session 25 logged out. Waiting for processes to exit.
Jan 23 19:33:12.456806 systemd-logind[1526]: Removed session 25.
Jan 23 19:33:17.482457 systemd[1]: Started sshd@25-10.0.0.124:22-10.0.0.1:43492.service - OpenSSH per-connection server daemon (10.0.0.1:43492).
Jan 23 19:33:17.865364 sshd[4677]: Accepted publickey for core from 10.0.0.1 port 43492 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:33:17.869799 sshd-session[4677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:33:17.948570 systemd-logind[1526]: New session 26 of user core.
Jan 23 19:33:17.980399 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 23 19:33:19.115418 sshd[4683]: Connection closed by 10.0.0.1 port 43492
Jan 23 19:33:19.116739 sshd-session[4677]: pam_unix(sshd:session): session closed for user core
Jan 23 19:33:19.139543 systemd-logind[1526]: Session 26 logged out. Waiting for processes to exit.
Jan 23 19:33:19.144069 systemd[1]: sshd@25-10.0.0.124:22-10.0.0.1:43492.service: Deactivated successfully.
Jan 23 19:33:19.151500 systemd[1]: session-26.scope: Deactivated successfully.
Jan 23 19:33:19.159597 systemd-logind[1526]: Removed session 26.
Jan 23 19:33:24.207544 systemd[1]: Started sshd@26-10.0.0.124:22-10.0.0.1:43496.service - OpenSSH per-connection server daemon (10.0.0.1:43496).
Jan 23 19:33:24.440974 sshd[4721]: Accepted publickey for core from 10.0.0.1 port 43496 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:33:24.453599 sshd-session[4721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:33:24.500819 systemd-logind[1526]: New session 27 of user core.
Jan 23 19:33:24.522585 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 23 19:33:25.073336 sshd[4724]: Connection closed by 10.0.0.1 port 43496
Jan 23 19:33:25.067577 sshd-session[4721]: pam_unix(sshd:session): session closed for user core
Jan 23 19:33:25.094458 systemd[1]: sshd@26-10.0.0.124:22-10.0.0.1:43496.service: Deactivated successfully.
Jan 23 19:33:25.100594 systemd[1]: session-27.scope: Deactivated successfully.
Jan 23 19:33:25.110667 systemd-logind[1526]: Session 27 logged out. Waiting for processes to exit.
Jan 23 19:33:25.125563 systemd-logind[1526]: Removed session 27.