Jan 15 00:45:02.488060 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 14 22:02:13 -00 2026 Jan 15 00:45:02.488094 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1042e64ca7212ba2a277cb872bdf1dc4e195c9fb8110078c443b3efbd2488cb9 Jan 15 00:45:02.488106 kernel: BIOS-provided physical RAM map: Jan 15 00:45:02.488119 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 15 00:45:02.488129 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 15 00:45:02.488138 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 15 00:45:02.488149 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 15 00:45:02.488160 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 15 00:45:02.488167 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 15 00:45:02.488173 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 15 00:45:02.488179 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Jan 15 00:45:02.488188 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 15 00:45:02.488194 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 15 00:45:02.488200 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 15 00:45:02.488208 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 15 00:45:02.488214 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 15 00:45:02.488223 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jan 15 00:45:02.488229 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jan 15 00:45:02.488235 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jan 15 00:45:02.488242 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jan 15 00:45:02.488248 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 15 00:45:02.488254 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 15 00:45:02.488261 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 15 00:45:02.488267 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 15 00:45:02.488273 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 15 00:45:02.488280 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 15 00:45:02.488288 kernel: NX (Execute Disable) protection: active Jan 15 00:45:02.488294 kernel: APIC: Static calls initialized Jan 15 00:45:02.488301 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Jan 15 00:45:02.488307 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Jan 15 00:45:02.488314 kernel: extended physical RAM map: Jan 15 00:45:02.488320 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 15 00:45:02.488327 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 15 00:45:02.488333 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 15 00:45:02.488340 kernel: 
reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jan 15 00:45:02.488346 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 15 00:45:02.488352 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 15 00:45:02.488361 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 15 00:45:02.488367 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Jan 15 00:45:02.488374 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Jan 15 00:45:02.488383 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Jan 15 00:45:02.488392 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Jan 15 00:45:02.488399 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Jan 15 00:45:02.488405 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 15 00:45:02.488412 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 15 00:45:02.488420 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 15 00:45:02.488432 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 15 00:45:02.488445 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 15 00:45:02.488455 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jan 15 00:45:02.488464 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jan 15 00:45:02.488477 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jan 15 00:45:02.488488 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jan 15 00:45:02.488502 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 15 00:45:02.488511 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 15 00:45:02.488521 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 15 00:45:02.488530 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 15 00:45:02.488541 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 15 00:45:02.488554 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 15 00:45:02.488563 kernel: efi: EFI v2.7 by EDK II Jan 15 00:45:02.488573 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Jan 15 00:45:02.488583 kernel: random: crng init done Jan 15 00:45:02.488599 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jan 15 00:45:02.488611 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jan 15 00:45:02.488621 kernel: secureboot: Secure boot disabled Jan 15 00:45:02.488631 kernel: SMBIOS 2.8 present. 
Jan 15 00:45:02.488640 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jan 15 00:45:02.488652 kernel: DMI: Memory slots populated: 1/1 Jan 15 00:45:02.488722 kernel: Hypervisor detected: KVM Jan 15 00:45:02.488733 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jan 15 00:45:02.488742 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 15 00:45:02.488752 kernel: kvm-clock: using sched offset of 11872962932 cycles Jan 15 00:45:02.488763 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 15 00:45:02.488780 kernel: tsc: Detected 2445.424 MHz processor Jan 15 00:45:02.488791 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 15 00:45:02.488801 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 15 00:45:02.488811 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jan 15 00:45:02.488824 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 15 00:45:02.488834 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 15 00:45:02.488844 kernel: Using GB pages for direct mapping Jan 15 00:45:02.488858 kernel: ACPI: Early table checksum verification disabled Jan 15 00:45:02.488871 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 15 00:45:02.488885 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 15 00:45:02.488940 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:45:02.488955 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:45:02.488997 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 15 00:45:02.489011 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:45:02.489025 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:45:02.489035 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:45:02.489046 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:45:02.489060 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 15 00:45:02.489070 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 15 00:45:02.489080 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jan 15 00:45:02.489091 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 15 00:45:02.489108 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 15 00:45:02.489118 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 15 00:45:02.489128 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 15 00:45:02.489138 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 15 00:45:02.489151 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 15 00:45:02.489161 kernel: No NUMA configuration found Jan 15 00:45:02.489171 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Jan 15 00:45:02.489182 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Jan 15 00:45:02.489199 kernel: Zone ranges: Jan 15 00:45:02.489212 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 15 00:45:02.489224 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Jan 15 00:45:02.489234 kernel: Normal empty Jan 15 00:45:02.489244 kernel: Device empty Jan 15 
00:45:02.489253 kernel: Movable zone start for each node Jan 15 00:45:02.489266 kernel: Early memory node ranges Jan 15 00:45:02.489282 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 15 00:45:02.489292 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 15 00:45:02.489299 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 15 00:45:02.489307 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jan 15 00:45:02.489313 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Jan 15 00:45:02.489320 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Jan 15 00:45:02.489327 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Jan 15 00:45:02.489334 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Jan 15 00:45:02.489344 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Jan 15 00:45:02.489351 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 15 00:45:02.489365 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 15 00:45:02.489374 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 15 00:45:02.489381 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 15 00:45:02.489389 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jan 15 00:45:02.489396 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jan 15 00:45:02.489403 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 15 00:45:02.489411 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jan 15 00:45:02.489418 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Jan 15 00:45:02.489428 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 15 00:45:02.489435 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 15 00:45:02.489442 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 15 00:45:02.489450 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 15 00:45:02.489459 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 15 00:45:02.489467 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 15 00:45:02.489474 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 15 00:45:02.489481 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 15 00:45:02.489489 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 15 00:45:02.489496 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 15 00:45:02.489503 kernel: TSC deadline timer available Jan 15 00:45:02.489512 kernel: CPU topo: Max. logical packages: 1 Jan 15 00:45:02.489520 kernel: CPU topo: Max. logical dies: 1 Jan 15 00:45:02.489527 kernel: CPU topo: Max. dies per package: 1 Jan 15 00:45:02.489534 kernel: CPU topo: Max. threads per core: 1 Jan 15 00:45:02.489541 kernel: CPU topo: Num. cores per package: 4 Jan 15 00:45:02.489549 kernel: CPU topo: Num. 
threads per package: 4 Jan 15 00:45:02.489557 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 15 00:45:02.489566 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 15 00:45:02.489574 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 15 00:45:02.489581 kernel: kvm-guest: setup PV sched yield Jan 15 00:45:02.489588 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jan 15 00:45:02.489596 kernel: Booting paravirtualized kernel on KVM Jan 15 00:45:02.489603 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 15 00:45:02.489611 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 15 00:45:02.489618 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 15 00:45:02.489627 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 15 00:45:02.489635 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 15 00:45:02.489647 kernel: kvm-guest: PV spinlocks enabled Jan 15 00:45:02.489752 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 15 00:45:02.489765 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1042e64ca7212ba2a277cb872bdf1dc4e195c9fb8110078c443b3efbd2488cb9 Jan 15 00:45:02.489776 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 15 00:45:02.489795 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 15 00:45:02.489805 kernel: Fallback order for Node 0: 0 Jan 15 00:45:02.489816 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Jan 15 00:45:02.489827 kernel: Policy zone: DMA32 Jan 15 00:45:02.489840 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 15 00:45:02.489852 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 15 00:45:02.489862 kernel: ftrace: allocating 40097 entries in 157 pages Jan 15 00:45:02.489877 kernel: ftrace: allocated 157 pages with 5 groups Jan 15 00:45:02.489889 kernel: Dynamic Preempt: voluntary Jan 15 00:45:02.489942 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 15 00:45:02.489958 kernel: rcu: RCU event tracing is enabled. Jan 15 00:45:02.489971 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 15 00:45:02.489985 kernel: Trampoline variant of Tasks RCU enabled. Jan 15 00:45:02.489996 kernel: Rude variant of Tasks RCU enabled. Jan 15 00:45:02.490006 kernel: Tracing variant of Tasks RCU enabled. Jan 15 00:45:02.490021 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 15 00:45:02.490034 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 15 00:45:02.490046 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 15 00:45:02.490057 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 15 00:45:02.490068 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 15 00:45:02.490080 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 15 00:45:02.490095 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 15 00:45:02.490110 kernel: Console: colour dummy device 80x25 Jan 15 00:45:02.490121 kernel: printk: legacy console [ttyS0] enabled Jan 15 00:45:02.490132 kernel: ACPI: Core revision 20240827 Jan 15 00:45:02.490145 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 15 00:45:02.490158 kernel: APIC: Switch to symmetric I/O mode setup Jan 15 00:45:02.490170 kernel: x2apic enabled Jan 15 00:45:02.490180 kernel: APIC: Switched APIC routing to: physical x2apic Jan 15 00:45:02.490194 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 15 00:45:02.490208 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 15 00:45:02.490220 kernel: kvm-guest: setup PV IPIs Jan 15 00:45:02.490230 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 15 00:45:02.490241 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns Jan 15 00:45:02.490254 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424) Jan 15 00:45:02.490266 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 15 00:45:02.490280 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 15 00:45:02.490291 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 15 00:45:02.490305 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 15 00:45:02.490316 kernel: Spectre V2 : Mitigation: Retpolines Jan 15 00:45:02.490327 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 15 00:45:02.490338 kernel: Speculative Store Bypass: Vulnerable Jan 15 00:45:02.490351 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 15 00:45:02.490366 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 15 00:45:02.490377 kernel: active return thunk: srso_alias_return_thunk Jan 15 00:45:02.490389 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 15 00:45:02.490402 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 15 00:45:02.490413 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 15 00:45:02.490423 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 15 00:45:02.490437 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 15 00:45:02.490452 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 15 00:45:02.490463 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 15 00:45:02.490474 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 15 00:45:02.490488 kernel: Freeing SMP alternatives memory: 32K Jan 15 00:45:02.490499 kernel: pid_max: default: 32768 minimum: 301 Jan 15 00:45:02.490509 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 15 00:45:02.490521 kernel: landlock: Up and running. Jan 15 00:45:02.490538 kernel: SELinux: Initializing. 
Jan 15 00:45:02.490549 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 15 00:45:02.490560 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 15 00:45:02.490572 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 15 00:45:02.490585 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 15 00:45:02.490598 kernel: signal: max sigframe size: 1776 Jan 15 00:45:02.490608 kernel: rcu: Hierarchical SRCU implementation. Jan 15 00:45:02.490623 kernel: rcu: Max phase no-delay instances is 400. Jan 15 00:45:02.490635 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 15 00:45:02.490648 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 15 00:45:02.490721 kernel: smp: Bringing up secondary CPUs ... Jan 15 00:45:02.490734 kernel: smpboot: x86: Booting SMP configuration: Jan 15 00:45:02.490745 kernel: .... node #0, CPUs: #1 #2 #3 Jan 15 00:45:02.490755 kernel: smp: Brought up 1 node, 4 CPUs Jan 15 00:45:02.490772 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Jan 15 00:45:02.490787 kernel: Memory: 2441100K/2565800K available (14336K kernel code, 2445K rwdata, 29896K rodata, 15432K init, 2608K bss, 118764K reserved, 0K cma-reserved) Jan 15 00:45:02.490797 kernel: devtmpfs: initialized Jan 15 00:45:02.490808 kernel: x86/mm: Memory block size: 128MB Jan 15 00:45:02.490818 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 15 00:45:02.490833 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 15 00:45:02.490844 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jan 15 00:45:02.490859 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 15 00:45:02.490870 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Jan 15 00:45:02.490884 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 15 00:45:02.490931 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 15 00:45:02.490944 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 15 00:45:02.490955 kernel: pinctrl core: initialized pinctrl subsystem Jan 15 00:45:02.490968 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 15 00:45:02.490985 kernel: audit: initializing netlink subsys (disabled) Jan 15 00:45:02.490996 kernel: audit: type=2000 audit(1768437898.383:1): state=initialized audit_enabled=0 res=1 Jan 15 00:45:02.491006 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 15 00:45:02.491017 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 15 00:45:02.491031 kernel: cpuidle: using governor menu Jan 15 00:45:02.491042 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 15 00:45:02.491052 kernel: dca service started, version 1.12.1 Jan 15 00:45:02.491067 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jan 15 00:45:02.491081 kernel: PCI: Using configuration type 1 for base access Jan 15 00:45:02.491093 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 15 00:45:02.491104 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 15 00:45:02.491115 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 15 00:45:02.491128 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 15 00:45:02.491139 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 15 00:45:02.491157 kernel: ACPI: Added _OSI(Module Device) Jan 15 00:45:02.491168 kernel: ACPI: Added _OSI(Processor Device) Jan 15 00:45:02.491178 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 15 00:45:02.491189 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 15 00:45:02.491203 kernel: ACPI: Interpreter enabled Jan 15 00:45:02.491213 kernel: ACPI: PM: (supports S0 S3 S5) Jan 15 00:45:02.491224 kernel: ACPI: Using IOAPIC for interrupt routing Jan 15 00:45:02.491234 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 15 00:45:02.491252 kernel: PCI: Using E820 reservations for host bridge windows Jan 15 00:45:02.491262 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 15 00:45:02.491273 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 15 00:45:02.491575 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 15 00:45:02.491867 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 15 00:45:02.492095 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 15 00:45:02.492107 kernel: PCI host bridge to bus 0000:00 Jan 15 00:45:02.492278 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 15 00:45:02.492462 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 15 00:45:02.492753 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 15 00:45:02.493021 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jan 15 00:45:02.493210 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jan 15 00:45:02.494018 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jan 15 00:45:02.494253 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 15 00:45:02.494521 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 15 00:45:02.494851 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 15 00:45:02.495154 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Jan 15 00:45:02.495393 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Jan 15 00:45:02.495637 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jan 15 00:45:02.495986 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 15 00:45:02.496247 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 15 00:45:02.514547 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Jan 15 00:45:02.514937 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Jan 15 00:45:02.515134 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Jan 15 00:45:02.515324 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 15 00:45:02.515497 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Jan 15 00:45:02.515721 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Jan 15 00:45:02.515982 kernel: pci 0000:00:03.0: BAR 4 [mem 
0x380000004000-0x380000007fff 64bit pref] Jan 15 00:45:02.516253 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 15 00:45:02.516432 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Jan 15 00:45:02.516644 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Jan 15 00:45:02.516881 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Jan 15 00:45:02.517085 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Jan 15 00:45:02.517264 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 15 00:45:02.517439 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 15 00:45:02.517616 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 15 00:45:02.517836 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Jan 15 00:45:02.518041 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Jan 15 00:45:02.518221 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 15 00:45:02.518396 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Jan 15 00:45:02.518407 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 15 00:45:02.518415 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 15 00:45:02.518423 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 15 00:45:02.518430 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 15 00:45:02.518438 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 15 00:45:02.518445 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 15 00:45:02.518456 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 15 00:45:02.518464 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 15 00:45:02.518472 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 15 00:45:02.518480 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 15 00:45:02.518488 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 15 00:45:02.518495 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 15 00:45:02.518503 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 15 00:45:02.518512 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 15 00:45:02.518520 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 15 00:45:02.518527 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 15 00:45:02.518535 kernel: iommu: Default domain type: Translated Jan 15 00:45:02.518542 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 15 00:45:02.518550 kernel: efivars: Registered efivars operations Jan 15 00:45:02.518557 kernel: PCI: Using ACPI for IRQ routing Jan 15 00:45:02.518567 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 15 00:45:02.518575 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 15 00:45:02.518582 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jan 15 00:45:02.518589 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Jan 15 00:45:02.518597 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Jan 15 00:45:02.518604 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Jan 15 00:45:02.518611 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Jan 15 00:45:02.518621 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Jan 15 00:45:02.518628 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Jan 15 
00:45:02.518844 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 15 00:45:02.519136 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 15 00:45:02.519759 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 15 00:45:02.519797 kernel: vgaarb: loaded Jan 15 00:45:02.519818 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 15 00:45:02.519855 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 15 00:45:02.519876 kernel: clocksource: Switched to clocksource kvm-clock Jan 15 00:45:02.519884 kernel: VFS: Disk quotas dquot_6.6.0 Jan 15 00:45:02.519947 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 15 00:45:02.519968 kernel: pnp: PnP ACPI init Jan 15 00:45:02.520406 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jan 15 00:45:02.520461 kernel: pnp: PnP ACPI: found 6 devices Jan 15 00:45:02.520482 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 15 00:45:02.520515 kernel: NET: Registered PF_INET protocol family Jan 15 00:45:02.520535 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 15 00:45:02.520555 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 15 00:45:02.520575 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 15 00:45:02.520607 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 15 00:45:02.520756 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 15 00:45:02.520792 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 15 00:45:02.520813 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 15 00:45:02.520845 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 15 00:45:02.520864 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 15 00:45:02.520884 kernel: NET: Registered PF_XDP protocol family Jan 15 00:45:02.521312 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Jan 15 00:45:02.521985 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Jan 15 00:45:02.522294 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 15 00:45:02.522602 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 15 00:45:02.522834 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 15 00:45:02.523055 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Jan 15 00:45:02.523305 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 15 00:45:02.523477 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Jan 15 00:45:02.523489 kernel: PCI: CLS 0 bytes, default 64 Jan 15 00:45:02.523497 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns Jan 15 00:45:02.523505 kernel: Initialise system trusted keyrings Jan 15 00:45:02.523513 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 15 00:45:02.523521 kernel: Key type asymmetric registered Jan 15 00:45:02.523532 kernel: Asymmetric key parser 'x509' registered Jan 15 00:45:02.523542 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 15 00:45:02.523550 kernel: io scheduler mq-deadline registered Jan 15 00:45:02.523558 kernel: io scheduler kyber registered Jan 15 
00:45:02.523566 kernel: io scheduler bfq registered Jan 15 00:45:02.523574 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 15 00:45:02.523583 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 15 00:45:02.523591 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 15 00:45:02.523601 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 15 00:45:02.523611 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 15 00:45:02.523619 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 15 00:45:02.523627 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 15 00:45:02.523635 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 15 00:45:02.523645 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 15 00:45:02.523930 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 15 00:45:02.523950 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 15 00:45:02.524122 kernel: rtc_cmos 00:04: registered as rtc0 Jan 15 00:45:02.524290 kernel: rtc_cmos 00:04: setting system clock to 2026-01-15T00:45:00 UTC (1768437900) Jan 15 00:45:02.524549 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 15 00:45:02.524576 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 15 00:45:02.524590 kernel: efifb: probing for efifb Jan 15 00:45:02.524604 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jan 15 00:45:02.524617 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 15 00:45:02.524630 kernel: efifb: scrolling: redraw Jan 15 00:45:02.524644 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 15 00:45:02.524715 kernel: Console: switching to colour frame buffer device 160x50 Jan 15 00:45:02.524735 kernel: fb0: EFI VGA frame buffer device Jan 15 00:45:02.524749 kernel: pstore: Using crash dump compression: deflate Jan 15 00:45:02.524762 kernel: pstore: Registered efi_pstore as persistent store backend Jan 15 00:45:02.524776 kernel: NET: Registered PF_INET6 protocol family Jan 15 00:45:02.524789 kernel: Segment Routing with IPv6 Jan 15 00:45:02.524802 kernel: In-situ OAM (IOAM) with IPv6 Jan 15 00:45:02.524816 kernel: NET: Registered PF_PACKET protocol family Jan 15 00:45:02.524832 kernel: Key type dns_resolver registered Jan 15 00:45:02.524846 kernel: IPI shorthand broadcast: enabled Jan 15 00:45:02.524859 kernel: sched_clock: Marking stable (2296022870, 817931069)->(3336124992, -222171053) Jan 15 00:45:02.524874 kernel: registered taskstats version 1 Jan 15 00:45:02.524886 kernel: Loading compiled-in X.509 certificates Jan 15 00:45:02.524936 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: e8b6753a1cbf8103f5806ce5d59781743c62fae9' Jan 15 00:45:02.524951 kernel: Demotion targets for Node 0: null Jan 15 00:45:02.524968 kernel: Key type .fscrypt registered Jan 15 00:45:02.524981 kernel: Key type fscrypt-provisioning registered Jan 15 00:45:02.524994 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 15 00:45:02.525007 kernel: ima: Allocated hash algorithm: sha1 Jan 15 00:45:02.525022 kernel: ima: No architecture policies found Jan 15 00:45:02.525034 kernel: clk: Disabling unused clocks Jan 15 00:45:02.525048 kernel: Freeing unused kernel image (initmem) memory: 15432K Jan 15 00:45:02.525064 kernel: Write protecting the kernel read-only data: 45056k Jan 15 00:45:02.525079 kernel: Freeing unused kernel image (rodata/data gap) memory: 824K Jan 15 00:45:02.525093 kernel: Run /init as init process Jan 15 00:45:02.525107 kernel: with arguments: Jan 15 00:45:02.525123 kernel: /init Jan 15 00:45:02.525136 kernel: with environment: Jan 15 00:45:02.525150 kernel: HOME=/ Jan 15 00:45:02.525159 kernel: TERM=linux Jan 15 00:45:02.525171 kernel: SCSI subsystem initialized Jan 15 00:45:02.525179 kernel: libata version 3.00 loaded. Jan 15 00:45:02.525366 kernel: ahci 0000:00:1f.2: version 3.0 Jan 15 00:45:02.525378 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 15 00:45:02.525548 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 15 00:45:02.525814 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 15 00:45:02.526030 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 15 00:45:02.526232 kernel: scsi host0: ahci Jan 15 00:45:02.526470 kernel: scsi host1: ahci Jan 15 00:45:02.526724 kernel: scsi host2: ahci Jan 15 00:45:02.526948 kernel: scsi host3: ahci Jan 15 00:45:02.527177 kernel: scsi host4: ahci Jan 15 00:45:02.527401 kernel: scsi host5: ahci Jan 15 00:45:02.527425 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Jan 15 00:45:02.527441 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Jan 15 00:45:02.527454 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Jan 15 00:45:02.527462 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Jan 15 00:45:02.527470 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Jan 15 00:45:02.527482 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Jan 15 00:45:02.527490 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 15 00:45:02.527498 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 15 00:45:02.527506 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 15 00:45:02.527514 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 15 00:45:02.527522 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 15 00:45:02.527530 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 15 00:45:02.527540 kernel: ata3.00: LPM support broken, forcing max_power Jan 15 00:45:02.527548 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 15 00:45:02.527556 kernel: ata3.00: applying bridge limits Jan 15 00:45:02.527564 kernel: ata3.00: LPM support broken, forcing max_power Jan 15 00:45:02.527571 kernel: ata3.00: configured for UDMA/100 Jan 15 00:45:02.527833 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 15 00:45:02.528063 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 15 00:45:02.528246 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 15 00:45:02.528258 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 15 00:45:02.528266 kernel: GPT:16515071 != 27000831 Jan 15 00:45:02.528274 kernel: GPT:Alternate GPT header not at the end of the disk. 
Jan 15 00:45:02.528282 kernel: GPT:16515071 != 27000831 Jan 15 00:45:02.528290 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 15 00:45:02.528301 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 15 00:45:02.528488 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 15 00:45:02.528499 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 15 00:45:02.528780 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 15 00:45:02.528793 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 15 00:45:02.528801 kernel: device-mapper: uevent: version 1.0.3 Jan 15 00:45:02.528810 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 15 00:45:02.528822 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 15 00:45:02.528830 kernel: raid6: avx2x4 gen() 27593 MB/s Jan 15 00:45:02.528837 kernel: raid6: avx2x2 gen() 33761 MB/s Jan 15 00:45:02.528845 kernel: raid6: avx2x1 gen() 23980 MB/s Jan 15 00:45:02.528853 kernel: raid6: using algorithm avx2x2 gen() 33761 MB/s Jan 15 00:45:02.528861 kernel: raid6: .... xor() 20642 MB/s, rmw enabled Jan 15 00:45:02.528869 kernel: raid6: using avx2x2 recovery algorithm Jan 15 00:45:02.528880 kernel: xor: automatically using best checksumming function avx Jan 15 00:45:02.528887 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 15 00:45:02.528932 kernel: BTRFS: device fsid 1fc5e5ba-2a81-4f9e-b722-a47a3e33c106 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (181) Jan 15 00:45:02.528941 kernel: BTRFS info (device dm-0): first mount of filesystem 1fc5e5ba-2a81-4f9e-b722-a47a3e33c106 Jan 15 00:45:02.528950 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 15 00:45:02.528958 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 15 00:45:02.528966 kernel: BTRFS info (device dm-0): enabling free space tree Jan 15 00:45:02.528976 kernel: loop: module loaded Jan 15 00:45:02.528985 kernel: loop0: detected capacity change from 0 to 100160 Jan 15 00:45:02.528993 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 15 00:45:02.529002 systemd[1]: Successfully made /usr/ read-only. Jan 15 00:45:02.529013 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 15 00:45:02.529022 systemd[1]: Detected virtualization kvm. Jan 15 00:45:02.529033 systemd[1]: Detected architecture x86-64. Jan 15 00:45:02.529041 systemd[1]: Running in initrd. Jan 15 00:45:02.529049 systemd[1]: No hostname configured, using default hostname. Jan 15 00:45:02.529058 systemd[1]: Hostname set to . Jan 15 00:45:02.529066 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 15 00:45:02.529075 systemd[1]: Queued start job for default target initrd.target. Jan 15 00:45:02.529085 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 15 00:45:02.529096 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 00:45:02.529104 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 15 00:45:02.529114 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 15 00:45:02.529122 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 00:45:02.529131 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 15 00:45:02.529142 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 15 00:45:02.529150 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 00:45:02.529158 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 00:45:02.529167 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 15 00:45:02.529175 systemd[1]: Reached target paths.target - Path Units. Jan 15 00:45:02.529184 systemd[1]: Reached target slices.target - Slice Units. Jan 15 00:45:02.529192 systemd[1]: Reached target swap.target - Swaps. Jan 15 00:45:02.529202 systemd[1]: Reached target timers.target - Timer Units. Jan 15 00:45:02.529211 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 00:45:02.529219 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 00:45:02.529228 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 15 00:45:02.529236 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 15 00:45:02.529245 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 15 00:45:02.529253 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 00:45:02.529264 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 00:45:02.529272 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 00:45:02.529280 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 00:45:02.529288 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 15 00:45:02.529297 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 15 00:45:02.529305 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 00:45:02.529315 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 15 00:45:02.529324 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 15 00:45:02.529333 systemd[1]: Starting systemd-fsck-usr.service... Jan 15 00:45:02.529341 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 00:45:02.529349 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 00:45:02.529360 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 00:45:02.529368 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 15 00:45:02.529377 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 00:45:02.529411 systemd-journald[320]: Collecting audit messages is enabled. Jan 15 00:45:02.529435 kernel: audit: type=1130 audit(1768437902.485:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:45:02.529443 systemd[1]: Finished systemd-fsck-usr.service. Jan 15 00:45:02.529452 kernel: audit: type=1130 audit(1768437902.495:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:02.529461 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 15 00:45:02.529472 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 15 00:45:02.529480 systemd-journald[320]: Journal started Jan 15 00:45:02.529498 systemd-journald[320]: Runtime Journal (/run/log/journal/535e1f1062e74f0e8432330a337be9f6) is 6M, max 48.1M, 42M free. Jan 15 00:45:02.533847 kernel: Bridge firewalling registered Jan 15 00:45:02.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:02.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:02.531839 systemd-modules-load[323]: Inserted module 'br_netfilter' Jan 15 00:45:02.536724 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 00:45:02.540735 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 00:45:02.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:02.552736 kernel: audit: type=1130 audit(1768437902.535:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:02.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:02.562722 kernel: audit: type=1130 audit(1768437902.554:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:02.562734 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 00:45:02.576163 kernel: audit: type=1130 audit(1768437902.565:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:02.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:02.578185 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 00:45:02.580103 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 00:45:02.607310 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 15 00:45:02.610821 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 00:45:02.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:02.614470 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 00:45:02.620618 kernel: audit: type=1130 audit(1768437902.611:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:02.634754 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 00:45:02.650024 kernel: audit: type=1130 audit(1768437902.638:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:02.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:02.652359 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 15 00:45:02.653155 systemd-tmpfiles[338]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 15 00:45:02.666327 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 00:45:02.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:02.673854 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 00:45:02.689881 kernel: audit: type=1130 audit(1768437902.669:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:02.689934 kernel: audit: type=1130 audit(1768437902.679:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:02.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:02.689752 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 00:45:02.694529 dracut-cmdline[354]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1042e64ca7212ba2a277cb872bdf1dc4e195c9fb8110078c443b3efbd2488cb9 Jan 15 00:45:02.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:45:02.715000 audit: BPF prog-id=6 op=LOAD Jan 15 00:45:02.717108 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 00:45:02.794421 systemd-resolved[388]: Positive Trust Anchors: Jan 15 00:45:02.794460 systemd-resolved[388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 00:45:02.794467 systemd-resolved[388]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 15 00:45:02.794513 systemd-resolved[388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 00:45:02.844979 systemd-resolved[388]: Defaulting to hostname 'linux'. Jan 15 00:45:02.846413 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 00:45:02.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:02.847792 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 00:45:02.861858 kernel: Loading iSCSI transport class v2.0-870. Jan 15 00:45:02.877741 kernel: iscsi: registered transport (tcp) Jan 15 00:45:02.903362 kernel: iscsi: registered transport (qla4xxx) Jan 15 00:45:02.903433 kernel: QLogic iSCSI HBA Driver Jan 15 00:45:02.941550 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 15 00:45:02.972149 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 00:45:02.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:02.974281 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 15 00:45:03.046292 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 15 00:45:03.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:03.049327 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 15 00:45:03.065190 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 15 00:45:03.105840 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 15 00:45:03.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:03.109000 audit: BPF prog-id=7 op=LOAD Jan 15 00:45:03.109000 audit: BPF prog-id=8 op=LOAD Jan 15 00:45:03.110589 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 00:45:03.154294 systemd-udevd[592]: Using default interface naming scheme 'v257'. 
Jan 15 00:45:03.172776 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 00:45:03.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:03.178550 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 15 00:45:03.209935 dracut-pre-trigger[645]: rd.md=0: removing MD RAID activation Jan 15 00:45:03.244962 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 00:45:03.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:03.252418 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 00:45:03.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:03.253000 audit: BPF prog-id=9 op=LOAD Jan 15 00:45:03.255345 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 00:45:03.263822 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 00:45:03.316388 systemd-networkd[733]: lo: Link UP Jan 15 00:45:03.316410 systemd-networkd[733]: lo: Gained carrier Jan 15 00:45:03.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:03.317087 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 00:45:03.319404 systemd[1]: Reached target network.target - Network. Jan 15 00:45:03.370771 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 00:45:03.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:03.376456 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 15 00:45:03.448515 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 15 00:45:03.469058 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 15 00:45:03.475959 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 15 00:45:03.491361 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 15 00:45:03.501817 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 15 00:45:03.519953 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 00:45:03.549259 kernel: cryptd: max_cpu_qlen set to 1000 Jan 15 00:45:03.549283 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 15 00:45:03.549300 kernel: kauditd_printk_skb: 14 callbacks suppressed Jan 15 00:45:03.549312 kernel: audit: type=1131 audit(1768437903.527:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:45:03.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:03.520091 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 00:45:03.528237 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 00:45:03.546967 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 00:45:03.568311 disk-uuid[780]: Primary Header is updated. Jan 15 00:45:03.568311 disk-uuid[780]: Secondary Entries is updated. Jan 15 00:45:03.568311 disk-uuid[780]: Secondary Header is updated. Jan 15 00:45:03.564510 systemd-networkd[733]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 00:45:03.564515 systemd-networkd[733]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 00:45:03.565102 systemd-networkd[733]: eth0: Link UP Jan 15 00:45:03.590706 kernel: AES CTR mode by8 optimization enabled Jan 15 00:45:03.572768 systemd-networkd[733]: eth0: Gained carrier Jan 15 00:45:03.572811 systemd-networkd[733]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 00:45:03.604752 systemd-networkd[733]: eth0: DHCPv4 address 10.0.0.109/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 15 00:45:03.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:03.639836 kernel: audit: type=1130 audit(1768437903.629:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:03.648228 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 00:45:03.648339 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 00:45:03.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:03.660194 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 00:45:03.670795 kernel: audit: type=1131 audit(1768437903.659:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:03.677579 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 00:45:03.697933 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 15 00:45:03.712281 kernel: audit: type=1130 audit(1768437903.701:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:03.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:03.716009 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 15 00:45:03.716142 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 00:45:03.722458 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 00:45:03.725031 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 15 00:45:03.764283 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 00:45:03.777098 kernel: audit: type=1130 audit(1768437903.763:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:03.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:03.781344 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 15 00:45:03.796985 kernel: audit: type=1130 audit(1768437903.780:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:03.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:04.631219 disk-uuid[781]: Warning: The kernel is still using the old partition table. Jan 15 00:45:04.631219 disk-uuid[781]: The new table will be used at the next reboot or after you Jan 15 00:45:04.631219 disk-uuid[781]: run partprobe(8) or kpartx(8) Jan 15 00:45:04.631219 disk-uuid[781]: The operation has completed successfully. Jan 15 00:45:04.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:04.642588 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 15 00:45:04.659296 kernel: audit: type=1130 audit(1768437904.645:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:04.659329 kernel: audit: type=1131 audit(1768437904.645:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:04.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:04.642857 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 15 00:45:04.648108 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 15 00:45:04.710559 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (878) Jan 15 00:45:04.710627 kernel: BTRFS info (device vda6): first mount of filesystem 372d586b-dfcb-4c9b-8d15-cc0618567790 Jan 15 00:45:04.710650 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 00:45:04.720934 kernel: BTRFS info (device vda6): turning on async discard Jan 15 00:45:04.720989 kernel: BTRFS info (device vda6): enabling free space tree Jan 15 00:45:04.733739 kernel: BTRFS info (device vda6): last unmount of filesystem 372d586b-dfcb-4c9b-8d15-cc0618567790 Jan 15 00:45:04.736069 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 15 00:45:04.754826 kernel: audit: type=1130 audit(1768437904.739:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:04.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:04.741840 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 15 00:45:04.874557 ignition[897]: Ignition 2.22.0 Jan 15 00:45:04.874602 ignition[897]: Stage: fetch-offline Jan 15 00:45:04.874729 ignition[897]: no configs at "/usr/lib/ignition/base.d" Jan 15 00:45:04.874749 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 00:45:04.874836 ignition[897]: parsed url from cmdline: "" Jan 15 00:45:04.874840 ignition[897]: no config URL provided Jan 15 00:45:04.874846 ignition[897]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 00:45:04.874857 ignition[897]: no config at "/usr/lib/ignition/user.ign" Jan 15 00:45:04.874937 ignition[897]: op(1): [started] loading QEMU firmware config module Jan 15 00:45:04.874942 ignition[897]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 15 00:45:04.899631 ignition[897]: op(1): [finished] loading QEMU firmware config module Jan 15 00:45:04.965192 ignition[897]: parsing config with SHA512: 9867099e14f2b8033ea04189f23a67644bf63d6998606f28ff59cebc0de7eb6bed89d0155cbb6bb8a4844422a588c3a3a65983934241ea6b49e3c9fe7df7971b Jan 15 00:45:04.973261 unknown[897]: fetched base config from "system" Jan 15 00:45:04.973288 unknown[897]: fetched user config from "qemu" Jan 15 00:45:04.978396 ignition[897]: fetch-offline: fetch-offline passed Jan 15 00:45:04.978484 ignition[897]: Ignition finished successfully Jan 15 00:45:04.986210 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 00:45:04.998927 kernel: audit: type=1130 audit(1768437904.985:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:04.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:04.986885 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 15 00:45:04.988057 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 15 00:45:05.041431 ignition[908]: Ignition 2.22.0 Jan 15 00:45:05.041477 ignition[908]: Stage: kargs Jan 15 00:45:05.041646 ignition[908]: no configs at "/usr/lib/ignition/base.d" Jan 15 00:45:05.041719 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 00:45:05.047595 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 15 00:45:05.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:05.042775 ignition[908]: kargs: kargs passed Jan 15 00:45:05.058101 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 15 00:45:05.042842 ignition[908]: Ignition finished successfully Jan 15 00:45:05.116174 ignition[916]: Ignition 2.22.0 Jan 15 00:45:05.116207 ignition[916]: Stage: disks Jan 15 00:45:05.116369 ignition[916]: no configs at "/usr/lib/ignition/base.d" Jan 15 00:45:05.116381 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 00:45:05.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:05.122721 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 15 00:45:05.117096 ignition[916]: disks: disks passed Jan 15 00:45:05.124368 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 15 00:45:05.117160 ignition[916]: Ignition finished successfully Jan 15 00:45:05.132395 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 15 00:45:05.138114 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 00:45:05.142338 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 00:45:05.142433 systemd[1]: Reached target basic.target - Basic System. Jan 15 00:45:05.149277 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 15 00:45:05.202123 systemd-fsck[926]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 15 00:45:05.208562 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 15 00:45:05.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:05.210184 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 15 00:45:05.361728 kernel: EXT4-fs (vda9): mounted filesystem 6f459a58-5046-4124-bfbc-09321f1e67d8 r/w with ordered data mode. Quota mode: none. Jan 15 00:45:05.362637 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 15 00:45:05.368511 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 15 00:45:05.377123 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 00:45:05.382825 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 15 00:45:05.387840 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 15 00:45:05.387936 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 15 00:45:05.387967 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 15 00:45:05.411228 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 15 00:45:05.438598 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (934) Jan 15 00:45:05.438626 kernel: BTRFS info (device vda6): first mount of filesystem 372d586b-dfcb-4c9b-8d15-cc0618567790 Jan 15 00:45:05.438637 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 00:45:05.438648 kernel: BTRFS info (device vda6): turning on async discard Jan 15 00:45:05.438702 kernel: BTRFS info (device vda6): enabling free space tree Jan 15 00:45:05.413317 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 15 00:45:05.444115 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 15 00:45:05.450956 systemd-networkd[733]: eth0: Gained IPv6LL Jan 15 00:45:05.488226 initrd-setup-root[958]: cut: /sysroot/etc/passwd: No such file or directory Jan 15 00:45:05.494100 initrd-setup-root[965]: cut: /sysroot/etc/group: No such file or directory Jan 15 00:45:05.500242 initrd-setup-root[972]: cut: /sysroot/etc/shadow: No such file or directory Jan 15 00:45:05.506145 initrd-setup-root[979]: cut: /sysroot/etc/gshadow: No such file or directory Jan 15 00:45:05.731815 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 15 00:45:05.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:05.737578 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 15 00:45:05.746563 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 15 00:45:05.823793 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 15 00:45:05.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:05.836101 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 15 00:45:05.842735 kernel: BTRFS info (device vda6): last unmount of filesystem 372d586b-dfcb-4c9b-8d15-cc0618567790 Jan 15 00:45:05.891222 ignition[1051]: INFO : Ignition 2.22.0 Jan 15 00:45:05.891222 ignition[1051]: INFO : Stage: mount Jan 15 00:45:05.895464 ignition[1051]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 00:45:05.895464 ignition[1051]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 00:45:05.895464 ignition[1051]: INFO : mount: mount passed Jan 15 00:45:05.895464 ignition[1051]: INFO : Ignition finished successfully Jan 15 00:45:05.907597 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 15 00:45:05.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:05.912189 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 15 00:45:05.939371 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 15 00:45:05.968794 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1060) Jan 15 00:45:05.977122 kernel: BTRFS info (device vda6): first mount of filesystem 372d586b-dfcb-4c9b-8d15-cc0618567790 Jan 15 00:45:05.977161 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 00:45:05.986085 kernel: BTRFS info (device vda6): turning on async discard Jan 15 00:45:05.986118 kernel: BTRFS info (device vda6): enabling free space tree Jan 15 00:45:05.989023 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 15 00:45:06.047950 ignition[1077]: INFO : Ignition 2.22.0 Jan 15 00:45:06.050833 ignition[1077]: INFO : Stage: files Jan 15 00:45:06.050833 ignition[1077]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 00:45:06.050833 ignition[1077]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 00:45:06.050833 ignition[1077]: DEBUG : files: compiled without relabeling support, skipping Jan 15 00:45:06.064892 ignition[1077]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 15 00:45:06.064892 ignition[1077]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 15 00:45:06.064892 ignition[1077]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 15 00:45:06.064892 ignition[1077]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 15 00:45:06.064892 ignition[1077]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 15 00:45:06.064892 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 15 00:45:06.064892 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 15 00:45:06.058018 unknown[1077]: wrote ssh authorized keys file for user: core Jan 15 00:45:06.151986 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 15 00:45:06.293369 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 15 00:45:06.293369 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 15 00:45:06.305382 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 15 00:45:06.305382 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 15 00:45:06.305382 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 15 00:45:06.305382 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 00:45:06.305382 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 00:45:06.305382 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 00:45:06.305382 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 00:45:06.305382 ignition[1077]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 00:45:06.305382 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 00:45:06.305382 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 15 00:45:06.365313 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 15 00:45:06.365313 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 15 00:45:06.365313 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 15 00:45:06.641282 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 15 00:45:07.519220 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 15 00:45:07.519220 ignition[1077]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 15 00:45:07.536753 ignition[1077]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 00:45:07.536753 ignition[1077]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 00:45:07.536753 ignition[1077]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 15 00:45:07.536753 ignition[1077]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 15 00:45:07.536753 ignition[1077]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 15 00:45:07.536753 ignition[1077]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 15 00:45:07.536753 ignition[1077]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 15 00:45:07.536753 ignition[1077]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 15 00:45:07.597943 ignition[1077]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 15 00:45:07.597943 ignition[1077]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 15 00:45:07.607577 ignition[1077]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 15 00:45:07.607577 ignition[1077]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 15 00:45:07.607577 ignition[1077]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 15 00:45:07.607577 ignition[1077]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 15 00:45:07.607577 ignition[1077]: INFO : files: createResultFile: createFiles: op(12): [finished] 
writing file "/sysroot/etc/.ignition-result.json" Jan 15 00:45:07.607577 ignition[1077]: INFO : files: files passed Jan 15 00:45:07.607577 ignition[1077]: INFO : Ignition finished successfully Jan 15 00:45:07.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:07.618077 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 15 00:45:07.631498 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 15 00:45:07.637850 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 15 00:45:07.681422 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 15 00:45:07.684412 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 15 00:45:07.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:07.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:07.691829 initrd-setup-root-after-ignition[1109]: grep: /sysroot/oem/oem-release: No such file or directory Jan 15 00:45:07.697499 initrd-setup-root-after-ignition[1111]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 00:45:07.697499 initrd-setup-root-after-ignition[1111]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 15 00:45:07.715089 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 00:45:07.723419 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 00:45:07.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:07.724030 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 15 00:45:07.740124 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 15 00:45:07.837816 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 15 00:45:07.838094 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 15 00:45:07.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:07.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:07.845975 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 15 00:45:07.863496 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 15 00:45:07.868072 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 15 00:45:07.869280 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Jan 15 00:45:07.913480 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 00:45:07.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:07.915390 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 15 00:45:07.954329 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 15 00:45:07.955318 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 15 00:45:07.963578 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 00:45:07.970607 systemd[1]: Stopped target timers.target - Timer Units. Jan 15 00:45:07.977120 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 15 00:45:07.977356 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 00:45:07.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:07.991952 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 15 00:45:07.995500 systemd[1]: Stopped target basic.target - Basic System. Jan 15 00:45:08.003158 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 15 00:45:08.006871 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 00:45:08.019418 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 15 00:45:08.023137 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 15 00:45:08.032321 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 15 00:45:08.035574 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 00:45:08.042155 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 15 00:45:08.045857 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 15 00:45:08.066203 systemd[1]: Stopped target swap.target - Swaps. Jan 15 00:45:08.068765 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 15 00:45:08.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.068935 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 15 00:45:08.077601 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 15 00:45:08.080609 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 00:45:08.086305 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 15 00:45:08.086593 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 00:45:08.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.095880 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 15 00:45:08.096078 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jan 15 00:45:08.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.106111 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 15 00:45:08.106310 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 00:45:08.109328 systemd[1]: Stopped target paths.target - Path Units. Jan 15 00:45:08.115379 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 15 00:45:08.115817 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 00:45:08.121105 systemd[1]: Stopped target slices.target - Slice Units. Jan 15 00:45:08.135270 systemd[1]: Stopped target sockets.target - Socket Units. Jan 15 00:45:08.137980 systemd[1]: iscsid.socket: Deactivated successfully. Jan 15 00:45:08.138101 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 00:45:08.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.154237 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 15 00:45:08.154387 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 00:45:08.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.163512 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 15 00:45:08.163643 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 15 00:45:08.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.180067 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 15 00:45:08.180233 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 00:45:08.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.183948 systemd[1]: ignition-files.service: Deactivated successfully. Jan 15 00:45:08.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.184109 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 15 00:45:08.198975 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 15 00:45:08.201314 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 15 00:45:08.201451 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 00:45:08.213861 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jan 15 00:45:08.214969 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 15 00:45:08.215081 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 00:45:08.273262 ignition[1135]: INFO : Ignition 2.22.0 Jan 15 00:45:08.273262 ignition[1135]: INFO : Stage: umount Jan 15 00:45:08.273262 ignition[1135]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 00:45:08.273262 ignition[1135]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 00:45:08.273262 ignition[1135]: INFO : umount: umount passed Jan 15 00:45:08.273262 ignition[1135]: INFO : Ignition finished successfully Jan 15 00:45:08.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.223830 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 15 00:45:08.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.224026 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 00:45:08.234389 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 15 00:45:08.234529 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 00:45:08.271242 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 15 00:45:08.271549 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 15 00:45:08.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.341000 audit: BPF prog-id=6 op=UNLOAD Jan 15 00:45:08.279611 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 15 00:45:08.279993 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 15 00:45:08.286532 systemd[1]: Stopped target network.target - Network. Jan 15 00:45:08.289230 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 15 00:45:08.289316 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Jan 15 00:45:08.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.297475 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 15 00:45:08.297554 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 15 00:45:08.302955 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 15 00:45:08.303032 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 15 00:45:08.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.375000 audit: BPF prog-id=9 op=UNLOAD Jan 15 00:45:08.306068 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 15 00:45:08.306136 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 15 00:45:08.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.312411 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 15 00:45:08.319510 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 15 00:45:08.324405 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 15 00:45:08.331021 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 15 00:45:08.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.331180 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 15 00:45:08.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.347138 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 15 00:45:08.347332 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 15 00:45:08.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.374081 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 15 00:45:08.374225 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 15 00:45:08.376508 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 15 00:45:08.383619 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 15 00:45:08.383765 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 15 00:45:08.387043 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 15 00:45:08.387110 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 15 00:45:08.399000 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 15 00:45:08.404322 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 15 00:45:08.404390 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 00:45:08.407102 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jan 15 00:45:08.407160 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 15 00:45:08.415732 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 15 00:45:08.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.415784 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 15 00:45:08.424981 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 00:45:08.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.470097 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 15 00:45:08.470330 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 00:45:08.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.478843 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 15 00:45:08.478978 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 15 00:45:08.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.486464 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 15 00:45:08.486523 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 00:45:08.487750 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 15 00:45:08.487823 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 15 00:45:08.546810 kernel: kauditd_printk_skb: 43 callbacks suppressed Jan 15 00:45:08.546834 kernel: audit: type=1131 audit(1768437908.529:78): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.499884 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 15 00:45:08.566838 kernel: audit: type=1131 audit(1768437908.549:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.499982 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 15 00:45:08.582546 kernel: audit: type=1131 audit(1768437908.570:80): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:45:08.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.510392 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 00:45:08.510465 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 00:45:08.601020 kernel: audit: type=1131 audit(1768437908.586:81): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.521157 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 15 00:45:08.623941 kernel: audit: type=1130 audit(1768437908.603:82): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.623975 kernel: audit: type=1131 audit(1768437908.603:83): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:08.529353 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 15 00:45:08.529468 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 00:45:08.530571 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 15 00:45:08.530622 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 00:45:08.550842 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 00:45:08.550963 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 00:45:08.571768 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 15 00:45:08.584600 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 15 00:45:08.596295 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 15 00:45:08.596446 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 15 00:45:08.604597 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 15 00:45:08.628021 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 15 00:45:08.674538 systemd[1]: Switching root. Jan 15 00:45:08.728505 systemd-journald[320]: Journal stopped Jan 15 00:45:10.723425 systemd-journald[320]: Received SIGTERM from PID 1 (systemd). 
Jan 15 00:45:10.723524 kernel: audit: type=1335 audit(1768437908.742:84): pid=320 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=kernel comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" nl-mcgrp=1 op=disconnect res=1 Jan 15 00:45:10.723567 kernel: SELinux: policy capability network_peer_controls=1 Jan 15 00:45:10.723593 kernel: SELinux: policy capability open_perms=1 Jan 15 00:45:10.723613 kernel: SELinux: policy capability extended_socket_class=1 Jan 15 00:45:10.723632 kernel: SELinux: policy capability always_check_network=0 Jan 15 00:45:10.723649 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 15 00:45:10.723736 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 15 00:45:10.723766 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 15 00:45:10.723790 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 15 00:45:10.723807 kernel: SELinux: policy capability userspace_initial_context=0 Jan 15 00:45:10.723825 kernel: audit: type=1403 audit(1768437908.964:85): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 15 00:45:10.723845 systemd[1]: Successfully loaded SELinux policy in 92.107ms. Jan 15 00:45:10.723872 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.420ms. Jan 15 00:45:10.723945 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 15 00:45:10.723969 systemd[1]: Detected virtualization kvm. Jan 15 00:45:10.723990 systemd[1]: Detected architecture x86-64. Jan 15 00:45:10.724007 systemd[1]: Detected first boot. Jan 15 00:45:10.724026 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 15 00:45:10.724046 kernel: audit: type=1334 audit(1768437909.071:86): prog-id=10 op=LOAD Jan 15 00:45:10.724064 kernel: audit: type=1334 audit(1768437909.071:87): prog-id=10 op=UNLOAD Jan 15 00:45:10.724087 zram_generator::config[1180]: No configuration found. Jan 15 00:45:10.724109 kernel: Guest personality initialized and is inactive Jan 15 00:45:10.724129 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 15 00:45:10.724146 kernel: Initialized host personality Jan 15 00:45:10.724165 kernel: NET: Registered PF_VSOCK protocol family Jan 15 00:45:10.724191 systemd[1]: Populated /etc with preset unit settings. Jan 15 00:45:10.724211 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 15 00:45:10.724235 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 15 00:45:10.724259 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 15 00:45:10.724286 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 15 00:45:10.724308 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 15 00:45:10.724328 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 15 00:45:10.724347 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 15 00:45:10.724369 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 15 00:45:10.724390 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Jan 15 00:45:10.724420 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 15 00:45:10.724444 systemd[1]: Created slice user.slice - User and Session Slice. Jan 15 00:45:10.724468 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 00:45:10.724488 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 00:45:10.724506 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 15 00:45:10.724526 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 15 00:45:10.724548 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 15 00:45:10.724567 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 00:45:10.724590 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 15 00:45:10.724610 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 00:45:10.724629 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 00:45:10.724647 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 15 00:45:10.724735 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 15 00:45:10.724757 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 15 00:45:10.724779 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 15 00:45:10.724804 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 00:45:10.724830 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 00:45:10.724851 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 15 00:45:10.724872 systemd[1]: Reached target slices.target - Slice Units. Jan 15 00:45:10.724890 systemd[1]: Reached target swap.target - Swaps. Jan 15 00:45:10.724949 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 15 00:45:10.724971 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 15 00:45:10.724996 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 15 00:45:10.725016 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 15 00:45:10.725034 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 15 00:45:10.725054 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 00:45:10.725074 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 15 00:45:10.725093 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 15 00:45:10.725112 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 00:45:10.725132 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 00:45:10.725157 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 15 00:45:10.725175 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 15 00:45:10.725194 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 15 00:45:10.725215 systemd[1]: Mounting media.mount - External Media Directory... 
Jan 15 00:45:10.725234 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 00:45:10.725255 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 15 00:45:10.725278 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 15 00:45:10.725299 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 15 00:45:10.725319 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 15 00:45:10.725337 systemd[1]: Reached target machines.target - Containers. Jan 15 00:45:10.725358 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 15 00:45:10.725378 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 00:45:10.725398 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 00:45:10.725421 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 15 00:45:10.725440 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 00:45:10.725460 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 00:45:10.725479 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 00:45:10.725497 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 15 00:45:10.725517 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 00:45:10.725537 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 15 00:45:10.725560 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 15 00:45:10.725579 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 15 00:45:10.725599 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 15 00:45:10.725619 systemd[1]: Stopped systemd-fsck-usr.service. Jan 15 00:45:10.725639 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 00:45:10.725722 kernel: fuse: init (API version 7.41) Jan 15 00:45:10.725747 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 00:45:10.725768 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 00:45:10.725786 kernel: ACPI: bus type drm_connector registered Jan 15 00:45:10.725804 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 15 00:45:10.725824 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 15 00:45:10.725849 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 15 00:45:10.725895 systemd-journald[1266]: Collecting audit messages is enabled. Jan 15 00:45:10.725972 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 00:45:10.725995 systemd-journald[1266]: Journal started Jan 15 00:45:10.726030 systemd-journald[1266]: Runtime Journal (/run/log/journal/535e1f1062e74f0e8432330a337be9f6) is 6M, max 48.1M, 42M free. 
Jan 15 00:45:10.290000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 15 00:45:10.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:10.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:10.634000 audit: BPF prog-id=14 op=UNLOAD Jan 15 00:45:10.634000 audit: BPF prog-id=13 op=UNLOAD Jan 15 00:45:10.636000 audit: BPF prog-id=15 op=LOAD Jan 15 00:45:10.638000 audit: BPF prog-id=16 op=LOAD Jan 15 00:45:10.638000 audit: BPF prog-id=17 op=LOAD Jan 15 00:45:10.719000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 15 00:45:10.719000 audit[1266]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd693a5e30 a2=4000 a3=0 items=0 ppid=1 pid=1266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:45:10.719000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 15 00:45:09.911256 systemd[1]: Queued start job for default target multi-user.target. Jan 15 00:45:09.942099 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 15 00:45:09.943350 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 15 00:45:10.741764 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 00:45:10.749756 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 00:45:10.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:10.760387 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 15 00:45:10.771389 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 15 00:45:10.776614 systemd[1]: Mounted media.mount - External Media Directory. Jan 15 00:45:10.780985 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 15 00:45:10.786184 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 15 00:45:10.790436 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 15 00:45:10.795185 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 15 00:45:10.800601 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 00:45:10.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:10.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:45:10.806752 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 15 00:45:10.807126 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 15 00:45:10.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:10.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:10.814211 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 00:45:10.814612 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 00:45:10.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:10.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:10.820046 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 00:45:10.820371 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 00:45:10.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:10.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:10.826614 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 00:45:10.827102 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 00:45:10.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:10.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:10.835245 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 15 00:45:10.837412 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 15 00:45:10.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:10.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:10.842767 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 00:45:10.843169 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 15 00:45:10.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:10.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:10.849211 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 00:45:10.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:10.871331 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 00:45:10.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:10.879078 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 15 00:45:10.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:10.885542 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 15 00:45:10.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:10.892523 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 00:45:10.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:10.917439 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 15 00:45:10.923811 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 15 00:45:10.931135 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 15 00:45:10.938058 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 15 00:45:10.943008 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 15 00:45:10.943101 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 00:45:10.948756 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 15 00:45:10.972390 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 00:45:10.972634 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 15 00:45:10.978595 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jan 15 00:45:10.988252 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 15 00:45:10.992896 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 00:45:10.994513 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 15 00:45:10.999834 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 00:45:11.004448 systemd-journald[1266]: Time spent on flushing to /var/log/journal/535e1f1062e74f0e8432330a337be9f6 is 27.318ms for 1195 entries. Jan 15 00:45:11.004448 systemd-journald[1266]: System Journal (/var/log/journal/535e1f1062e74f0e8432330a337be9f6) is 8M, max 163.5M, 155.5M free. Jan 15 00:45:11.060304 systemd-journald[1266]: Received client request to flush runtime journal. Jan 15 00:45:11.004538 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 00:45:11.026966 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 15 00:45:11.034013 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 15 00:45:11.041776 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 15 00:45:11.048193 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 15 00:45:11.075039 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 15 00:45:11.082475 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 15 00:45:11.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:11.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:11.089278 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 00:45:11.095736 kernel: loop1: detected capacity change from 0 to 111544 Jan 15 00:45:11.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:11.101240 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 15 00:45:11.109942 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 15 00:45:11.135420 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 15 00:45:11.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:11.143000 audit: BPF prog-id=18 op=LOAD Jan 15 00:45:11.143000 audit: BPF prog-id=19 op=LOAD Jan 15 00:45:11.143000 audit: BPF prog-id=20 op=LOAD Jan 15 00:45:11.145377 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... 
Jan 15 00:45:11.150738 kernel: loop2: detected capacity change from 0 to 224512 Jan 15 00:45:11.172000 audit: BPF prog-id=21 op=LOAD Jan 15 00:45:11.173987 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 00:45:11.184001 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 00:45:11.195000 audit: BPF prog-id=22 op=LOAD Jan 15 00:45:11.195000 audit: BPF prog-id=23 op=LOAD Jan 15 00:45:11.195000 audit: BPF prog-id=24 op=LOAD Jan 15 00:45:11.197608 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 15 00:45:11.204000 audit: BPF prog-id=25 op=LOAD Jan 15 00:45:11.205000 audit: BPF prog-id=26 op=LOAD Jan 15 00:45:11.205000 audit: BPF prog-id=27 op=LOAD Jan 15 00:45:11.206939 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 15 00:45:11.213107 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 15 00:45:11.215394 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 15 00:45:11.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:11.235752 kernel: loop3: detected capacity change from 0 to 119256 Jan 15 00:45:11.247770 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Jan 15 00:45:11.249121 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Jan 15 00:45:11.273346 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 00:45:11.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:11.285385 systemd-nsresourced[1320]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 15 00:45:11.289468 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 15 00:45:11.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:11.325861 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 15 00:45:11.331014 kernel: loop4: detected capacity change from 0 to 111544 Jan 15 00:45:11.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:11.371310 kernel: loop5: detected capacity change from 0 to 224512 Jan 15 00:45:11.400724 kernel: loop6: detected capacity change from 0 to 119256 Jan 15 00:45:11.420805 (sd-merge)[1336]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 15 00:45:11.421201 systemd-oomd[1316]: No swap; memory pressure usage will be degraded Jan 15 00:45:11.422990 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 15 00:45:11.426150 (sd-merge)[1336]: Merged extensions into '/usr'. 
Jan 15 00:45:11.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:11.434342 systemd[1]: Reload requested from client PID 1301 ('systemd-sysext') (unit systemd-sysext.service)... Jan 15 00:45:11.434401 systemd[1]: Reloading... Jan 15 00:45:11.487483 systemd-resolved[1318]: Positive Trust Anchors: Jan 15 00:45:11.487544 systemd-resolved[1318]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 00:45:11.487552 systemd-resolved[1318]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 15 00:45:11.487599 systemd-resolved[1318]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 00:45:11.499054 systemd-resolved[1318]: Defaulting to hostname 'linux'. Jan 15 00:45:11.534766 zram_generator::config[1367]: No configuration found. Jan 15 00:45:11.832960 systemd[1]: Reloading finished in 397 ms. Jan 15 00:45:11.886068 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 00:45:11.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:11.892154 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 15 00:45:11.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:11.898618 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 15 00:45:11.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:11.912574 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 00:45:11.940340 systemd[1]: Starting ensure-sysext.service... Jan 15 00:45:11.946371 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 00:45:11.951000 audit: BPF prog-id=8 op=UNLOAD Jan 15 00:45:11.952000 audit: BPF prog-id=7 op=UNLOAD Jan 15 00:45:11.953000 audit: BPF prog-id=28 op=LOAD Jan 15 00:45:11.953000 audit: BPF prog-id=29 op=LOAD Jan 15 00:45:11.956203 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 15 00:45:11.978000 audit: BPF prog-id=30 op=LOAD Jan 15 00:45:11.978000 audit: BPF prog-id=25 op=UNLOAD Jan 15 00:45:11.978000 audit: BPF prog-id=31 op=LOAD Jan 15 00:45:11.979000 audit: BPF prog-id=32 op=LOAD Jan 15 00:45:11.979000 audit: BPF prog-id=26 op=UNLOAD Jan 15 00:45:11.979000 audit: BPF prog-id=27 op=UNLOAD Jan 15 00:45:11.981000 audit: BPF prog-id=33 op=LOAD Jan 15 00:45:11.981000 audit: BPF prog-id=15 op=UNLOAD Jan 15 00:45:11.981000 audit: BPF prog-id=34 op=LOAD Jan 15 00:45:11.981000 audit: BPF prog-id=35 op=LOAD Jan 15 00:45:11.981000 audit: BPF prog-id=16 op=UNLOAD Jan 15 00:45:11.981000 audit: BPF prog-id=17 op=UNLOAD Jan 15 00:45:11.996000 audit: BPF prog-id=36 op=LOAD Jan 15 00:45:11.996000 audit: BPF prog-id=21 op=UNLOAD Jan 15 00:45:11.997000 audit: BPF prog-id=37 op=LOAD Jan 15 00:45:11.997000 audit: BPF prog-id=22 op=UNLOAD Jan 15 00:45:11.997000 audit: BPF prog-id=38 op=LOAD Jan 15 00:45:11.997000 audit: BPF prog-id=39 op=LOAD Jan 15 00:45:11.997000 audit: BPF prog-id=23 op=UNLOAD Jan 15 00:45:11.997509 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 15 00:45:11.997559 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 15 00:45:11.998141 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 15 00:45:11.997000 audit: BPF prog-id=24 op=UNLOAD Jan 15 00:45:11.998000 audit: BPF prog-id=40 op=LOAD Jan 15 00:45:11.998000 audit: BPF prog-id=18 op=UNLOAD Jan 15 00:45:11.998000 audit: BPF prog-id=41 op=LOAD Jan 15 00:45:11.998000 audit: BPF prog-id=42 op=LOAD Jan 15 00:45:11.998000 audit: BPF prog-id=19 op=UNLOAD Jan 15 00:45:11.998000 audit: BPF prog-id=20 op=UNLOAD Jan 15 00:45:12.000475 systemd-tmpfiles[1409]: ACLs are not supported, ignoring. Jan 15 00:45:12.000613 systemd-tmpfiles[1409]: ACLs are not supported, ignoring. Jan 15 00:45:12.006508 systemd[1]: Reload requested from client PID 1408 ('systemctl') (unit ensure-sysext.service)... Jan 15 00:45:12.006564 systemd[1]: Reloading... Jan 15 00:45:12.012799 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 00:45:12.012818 systemd-tmpfiles[1409]: Skipping /boot Jan 15 00:45:12.030510 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 00:45:12.030746 systemd-tmpfiles[1409]: Skipping /boot Jan 15 00:45:12.041445 systemd-udevd[1410]: Using default interface naming scheme 'v257'. Jan 15 00:45:12.130793 zram_generator::config[1457]: No configuration found. Jan 15 00:45:12.231739 kernel: mousedev: PS/2 mouse device common for all mice Jan 15 00:45:12.278935 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 15 00:45:12.286765 kernel: ACPI: button: Power Button [PWRF] Jan 15 00:45:12.330391 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 15 00:45:12.332058 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 15 00:45:12.375724 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 15 00:45:12.511602 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 15 00:45:12.517564 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 15 00:45:12.518041 systemd[1]: Reloading finished in 510 ms. 
Jan 15 00:45:12.589743 kernel: kvm_amd: TSC scaling supported Jan 15 00:45:12.589816 kernel: kvm_amd: Nested Virtualization enabled Jan 15 00:45:12.589859 kernel: kvm_amd: Nested Paging enabled Jan 15 00:45:12.591767 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 15 00:45:12.591762 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 00:45:12.594114 kernel: kvm_amd: PMU virtualization is disabled Jan 15 00:45:12.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:12.610000 audit: BPF prog-id=43 op=LOAD Jan 15 00:45:12.610000 audit: BPF prog-id=30 op=UNLOAD Jan 15 00:45:12.611000 audit: BPF prog-id=44 op=LOAD Jan 15 00:45:12.611000 audit: BPF prog-id=45 op=LOAD Jan 15 00:45:12.611000 audit: BPF prog-id=31 op=UNLOAD Jan 15 00:45:12.611000 audit: BPF prog-id=32 op=UNLOAD Jan 15 00:45:12.619000 audit: BPF prog-id=46 op=LOAD Jan 15 00:45:12.644000 audit: BPF prog-id=40 op=UNLOAD Jan 15 00:45:12.644000 audit: BPF prog-id=47 op=LOAD Jan 15 00:45:12.644000 audit: BPF prog-id=48 op=LOAD Jan 15 00:45:12.644000 audit: BPF prog-id=41 op=UNLOAD Jan 15 00:45:12.644000 audit: BPF prog-id=42 op=UNLOAD Jan 15 00:45:12.646000 audit: BPF prog-id=49 op=LOAD Jan 15 00:45:12.646000 audit: BPF prog-id=50 op=LOAD Jan 15 00:45:12.646000 audit: BPF prog-id=28 op=UNLOAD Jan 15 00:45:12.646000 audit: BPF prog-id=29 op=UNLOAD Jan 15 00:45:12.647000 audit: BPF prog-id=51 op=LOAD Jan 15 00:45:12.647000 audit: BPF prog-id=37 op=UNLOAD Jan 15 00:45:12.647000 audit: BPF prog-id=52 op=LOAD Jan 15 00:45:12.647000 audit: BPF prog-id=53 op=LOAD Jan 15 00:45:12.647000 audit: BPF prog-id=38 op=UNLOAD Jan 15 00:45:12.647000 audit: BPF prog-id=39 op=UNLOAD Jan 15 00:45:12.649000 audit: BPF prog-id=54 op=LOAD Jan 15 00:45:12.649000 audit: BPF prog-id=36 op=UNLOAD Jan 15 00:45:12.650000 audit: BPF prog-id=55 op=LOAD Jan 15 00:45:12.650000 audit: BPF prog-id=33 op=UNLOAD Jan 15 00:45:12.650000 audit: BPF prog-id=56 op=LOAD Jan 15 00:45:12.650000 audit: BPF prog-id=57 op=LOAD Jan 15 00:45:12.650000 audit: BPF prog-id=34 op=UNLOAD Jan 15 00:45:12.651000 audit: BPF prog-id=35 op=UNLOAD Jan 15 00:45:12.666290 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 00:45:12.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:12.673816 kernel: EDAC MC: Ver: 3.0.0 Jan 15 00:45:12.714899 systemd[1]: Finished ensure-sysext.service. Jan 15 00:45:12.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:12.754390 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 00:45:12.760317 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 15 00:45:12.772007 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 15 00:45:12.776976 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 15 00:45:12.784781 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 00:45:12.790174 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 00:45:12.796889 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 00:45:12.802033 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 00:45:12.805627 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 00:45:12.805845 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 15 00:45:12.809063 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 15 00:45:12.815844 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 15 00:45:12.819875 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 00:45:12.823264 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 15 00:45:12.829000 audit: BPF prog-id=58 op=LOAD Jan 15 00:45:12.830978 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 00:45:12.834000 audit: BPF prog-id=59 op=LOAD Jan 15 00:45:12.837446 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 15 00:45:12.839371 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 15 00:45:12.843598 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 00:45:12.851220 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 00:45:12.853153 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 00:45:12.853431 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 00:45:12.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:12.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:12.869356 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 00:45:12.869739 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 00:45:12.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:12.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:45:12.876233 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 15 00:45:12.880000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 15 00:45:12.880000 audit[1558]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcad09bcb0 a2=420 a3=0 items=0 ppid=1525 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:45:12.880000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 15 00:45:12.885016 augenrules[1558]: No rules Jan 15 00:45:12.887948 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 00:45:12.893378 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 00:45:12.893804 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 15 00:45:12.900184 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 00:45:12.900496 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 00:45:12.904756 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 15 00:45:12.919175 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 15 00:45:12.922040 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 00:45:12.922174 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 00:45:12.929252 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 15 00:45:12.946056 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 15 00:45:12.947067 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 15 00:45:13.009781 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 00:45:13.018359 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 15 00:45:13.021532 systemd-networkd[1549]: lo: Link UP Jan 15 00:45:13.021540 systemd-networkd[1549]: lo: Gained carrier Jan 15 00:45:13.026025 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 00:45:13.026441 systemd-networkd[1549]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 00:45:13.026449 systemd-networkd[1549]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 00:45:13.031305 systemd[1]: Reached target network.target - Network. Jan 15 00:45:13.032582 systemd-networkd[1549]: eth0: Link UP Jan 15 00:45:13.034764 systemd-networkd[1549]: eth0: Gained carrier Jan 15 00:45:13.034871 systemd-networkd[1549]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 00:45:13.035783 systemd[1]: Reached target time-set.target - System Time Set. Jan 15 00:45:13.042599 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Jan 15 00:45:13.050064 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 15 00:45:13.073830 systemd-networkd[1549]: eth0: DHCPv4 address 10.0.0.109/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 15 00:45:13.074794 systemd-timesyncd[1552]: Network configuration changed, trying to establish connection. Jan 15 00:45:14.015804 systemd-resolved[1318]: Clock change detected. Flushing caches. Jan 15 00:45:14.016465 systemd-timesyncd[1552]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 15 00:45:14.016603 systemd-timesyncd[1552]: Initial clock synchronization to Thu 2026-01-15 00:45:14.015612 UTC. Jan 15 00:45:14.037240 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 15 00:45:14.393276 ldconfig[1537]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 15 00:45:14.412647 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 15 00:45:14.419456 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 15 00:45:14.452024 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 15 00:45:14.456878 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 00:45:14.461657 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 15 00:45:14.466683 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 15 00:45:14.471670 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 15 00:45:14.476450 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 15 00:45:14.480317 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 15 00:45:14.485372 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 15 00:45:14.494167 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 15 00:45:14.507442 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 15 00:45:14.511215 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 15 00:45:14.511276 systemd[1]: Reached target paths.target - Path Units. Jan 15 00:45:14.514276 systemd[1]: Reached target timers.target - Timer Units. Jan 15 00:45:14.519904 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 15 00:45:14.526877 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 15 00:45:14.533930 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 15 00:45:14.539639 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 15 00:45:14.544642 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 15 00:45:14.553244 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 15 00:45:14.558001 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 15 00:45:14.564108 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 15 00:45:14.569179 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 00:45:14.573028 systemd[1]: Reached target basic.target - Basic System. 
Jan 15 00:45:14.576400 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 15 00:45:14.576461 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 15 00:45:14.577963 systemd[1]: Starting containerd.service - containerd container runtime... Jan 15 00:45:14.583024 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 15 00:45:14.591093 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 15 00:45:14.613232 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 15 00:45:14.621190 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 15 00:45:14.623315 jq[1596]: false Jan 15 00:45:14.626008 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 15 00:45:14.627925 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 15 00:45:14.642088 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 15 00:45:14.648290 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 15 00:45:14.655969 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 15 00:45:14.659008 extend-filesystems[1597]: Found /dev/vda6 Jan 15 00:45:14.667162 extend-filesystems[1597]: Found /dev/vda9 Jan 15 00:45:14.665908 oslogin_cache_refresh[1598]: Refreshing passwd entry cache Jan 15 00:45:14.663975 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 15 00:45:14.671472 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Refreshing passwd entry cache Jan 15 00:45:14.671955 extend-filesystems[1597]: Checking size of /dev/vda9 Jan 15 00:45:14.681602 extend-filesystems[1597]: Resized partition /dev/vda9 Jan 15 00:45:14.686133 extend-filesystems[1617]: resize2fs 1.47.3 (8-Jul-2025) Jan 15 00:45:14.708674 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 15 00:45:14.689360 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 15 00:45:14.708661 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 15 00:45:14.708904 oslogin_cache_refresh[1598]: Failure getting users, quitting Jan 15 00:45:14.709232 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Failure getting users, quitting Jan 15 00:45:14.709232 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 15 00:45:14.709232 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Refreshing group entry cache Jan 15 00:45:14.708931 oslogin_cache_refresh[1598]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 15 00:45:14.709004 oslogin_cache_refresh[1598]: Refreshing group entry cache Jan 15 00:45:14.709479 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 15 00:45:14.711243 systemd[1]: Starting update-engine.service - Update Engine... Jan 15 00:45:14.718854 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 15 00:45:14.725832 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Failure getting groups, quitting Jan 15 00:45:14.725894 oslogin_cache_refresh[1598]: Failure getting groups, quitting Jan 15 00:45:14.725970 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 15 00:45:14.726032 oslogin_cache_refresh[1598]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 15 00:45:14.730913 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 15 00:45:14.731566 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 15 00:45:14.731991 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 15 00:45:14.734940 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 15 00:45:14.735240 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 15 00:45:14.746315 systemd[1]: motdgen.service: Deactivated successfully. Jan 15 00:45:14.747987 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 15 00:45:14.755894 jq[1622]: true Jan 15 00:45:14.756959 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 15 00:45:14.757319 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 15 00:45:14.776640 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 15 00:45:14.778321 update_engine[1620]: I20260115 00:45:14.778203 1620 main.cc:92] Flatcar Update Engine starting Jan 15 00:45:14.806823 jq[1629]: true Jan 15 00:45:14.826052 extend-filesystems[1617]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 15 00:45:14.826052 extend-filesystems[1617]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 15 00:45:14.826052 extend-filesystems[1617]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 15 00:45:14.837824 extend-filesystems[1597]: Resized filesystem in /dev/vda9 Jan 15 00:45:14.827312 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 15 00:45:14.856598 tar[1627]: linux-amd64/LICENSE Jan 15 00:45:14.856598 tar[1627]: linux-amd64/helm Jan 15 00:45:14.828887 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 15 00:45:14.863881 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 15 00:45:14.869632 dbus-daemon[1594]: [system] SELinux support is enabled Jan 15 00:45:14.870326 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 15 00:45:14.878642 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 15 00:45:14.888219 update_engine[1620]: I20260115 00:45:14.884582 1620 update_check_scheduler.cc:74] Next update check in 2m44s Jan 15 00:45:14.888271 bash[1663]: Updated "/home/core/.ssh/authorized_keys" Jan 15 00:45:14.878685 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 15 00:45:14.883730 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 15 00:45:14.883787 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 15 00:45:14.888819 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 15 00:45:14.908471 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 15 00:45:14.911858 systemd[1]: Started update-engine.service - Update Engine. Jan 15 00:45:14.917974 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 15 00:45:14.938232 systemd-logind[1618]: Watching system buttons on /dev/input/event2 (Power Button) Jan 15 00:45:14.938293 systemd-logind[1618]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 15 00:45:14.938947 systemd-logind[1618]: New seat seat0. Jan 15 00:45:14.942405 systemd[1]: Started systemd-logind.service - User Login Management. Jan 15 00:45:15.023592 locksmithd[1666]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 15 00:45:15.142569 containerd[1634]: time="2026-01-15T00:45:15Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 15 00:45:15.144138 containerd[1634]: time="2026-01-15T00:45:15.144083771Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 15 00:45:15.159969 containerd[1634]: time="2026-01-15T00:45:15.159784802Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.261µs" Jan 15 00:45:15.159969 containerd[1634]: time="2026-01-15T00:45:15.159858921Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 15 00:45:15.159969 containerd[1634]: time="2026-01-15T00:45:15.159911429Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 15 00:45:15.159969 containerd[1634]: time="2026-01-15T00:45:15.159927409Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 15 00:45:15.160149 containerd[1634]: time="2026-01-15T00:45:15.160126260Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 15 00:45:15.160149 containerd[1634]: time="2026-01-15T00:45:15.160146699Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 15 00:45:15.160268 containerd[1634]: time="2026-01-15T00:45:15.160225696Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 15 00:45:15.160268 containerd[1634]: time="2026-01-15T00:45:15.160243348Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 15 00:45:15.160664 containerd[1634]: time="2026-01-15T00:45:15.160580238Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 15 00:45:15.160664 containerd[1634]: time="2026-01-15T00:45:15.160618800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 15 00:45:15.160664 containerd[1634]: time="2026-01-15T00:45:15.160630322Z" level=info msg="skip loading plugin" error="devmapper not 
configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 15 00:45:15.160664 containerd[1634]: time="2026-01-15T00:45:15.160637215Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 15 00:45:15.160979 containerd[1634]: time="2026-01-15T00:45:15.160882763Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 15 00:45:15.160979 containerd[1634]: time="2026-01-15T00:45:15.160941633Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 15 00:45:15.161146 containerd[1634]: time="2026-01-15T00:45:15.161078158Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 15 00:45:15.161495 containerd[1634]: time="2026-01-15T00:45:15.161378238Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 15 00:45:15.161495 containerd[1634]: time="2026-01-15T00:45:15.161456474Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 15 00:45:15.161495 containerd[1634]: time="2026-01-15T00:45:15.161470360Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 15 00:45:15.161647 containerd[1634]: time="2026-01-15T00:45:15.161594803Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 15 00:45:15.161883 containerd[1634]: time="2026-01-15T00:45:15.161831354Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 15 00:45:15.161989 containerd[1634]: time="2026-01-15T00:45:15.161937502Z" level=info msg="metadata content store policy set" policy=shared Jan 15 00:45:15.167804 containerd[1634]: time="2026-01-15T00:45:15.167656492Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 15 00:45:15.167804 containerd[1634]: time="2026-01-15T00:45:15.167790492Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 15 00:45:15.167927 containerd[1634]: time="2026-01-15T00:45:15.167890148Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 15 00:45:15.167927 containerd[1634]: time="2026-01-15T00:45:15.167913161Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 15 00:45:15.168117 containerd[1634]: time="2026-01-15T00:45:15.167930163Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 15 00:45:15.168117 containerd[1634]: time="2026-01-15T00:45:15.167946103Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 15 00:45:15.168117 containerd[1634]: time="2026-01-15T00:45:15.167959878Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 15 00:45:15.168117 containerd[1634]: time="2026-01-15T00:45:15.167971400Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 15 00:45:15.168117 containerd[1634]: time="2026-01-15T00:45:15.167994193Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 15 00:45:15.168117 containerd[1634]: time="2026-01-15T00:45:15.168013629Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 15 00:45:15.168117 containerd[1634]: time="2026-01-15T00:45:15.168030019Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 15 00:45:15.168117 containerd[1634]: time="2026-01-15T00:45:15.168046710Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 15 00:45:15.168117 containerd[1634]: time="2026-01-15T00:45:15.168060116Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 15 00:45:15.168117 containerd[1634]: time="2026-01-15T00:45:15.168074602Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 15 00:45:15.168316 containerd[1634]: time="2026-01-15T00:45:15.168199696Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 15 00:45:15.168316 containerd[1634]: time="2026-01-15T00:45:15.168221627Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 15 00:45:15.168316 containerd[1634]: time="2026-01-15T00:45:15.168237426Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 15 00:45:15.168316 containerd[1634]: time="2026-01-15T00:45:15.168256883Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 15 00:45:15.168316 containerd[1634]: time="2026-01-15T00:45:15.168269326Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 15 00:45:15.168316 containerd[1634]: time="2026-01-15T00:45:15.168285426Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 15 00:45:15.168316 containerd[1634]: time="2026-01-15T00:45:15.168300314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 15 00:45:15.168316 containerd[1634]: time="2026-01-15T00:45:15.168314290Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 15 00:45:15.168985 containerd[1634]: time="2026-01-15T00:45:15.168327655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 15 00:45:15.168985 containerd[1634]: time="2026-01-15T00:45:15.168341230Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 15 00:45:15.168985 containerd[1634]: time="2026-01-15T00:45:15.168366718Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 15 00:45:15.168985 containerd[1634]: time="2026-01-15T00:45:15.168391885Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 15 00:45:15.168985 containerd[1634]: time="2026-01-15T00:45:15.168452658Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 15 00:45:15.168985 containerd[1634]: time="2026-01-15T00:45:15.168467065Z" 
level=info msg="Start snapshots syncer" Jan 15 00:45:15.168985 containerd[1634]: time="2026-01-15T00:45:15.168584474Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 15 00:45:15.169302 containerd[1634]: time="2026-01-15T00:45:15.168898731Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 15 00:45:15.169302 containerd[1634]: time="2026-01-15T00:45:15.168950808Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 15 00:45:15.169578 containerd[1634]: time="2026-01-15T00:45:15.169052108Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 15 00:45:15.169578 containerd[1634]: time="2026-01-15T00:45:15.169175368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 15 00:45:15.169578 containerd[1634]: time="2026-01-15T00:45:15.169205965Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 15 00:45:15.169578 containerd[1634]: time="2026-01-15T00:45:15.169220442Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 15 00:45:15.169578 containerd[1634]: time="2026-01-15T00:45:15.169232715Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 15 00:45:15.169578 containerd[1634]: time="2026-01-15T00:45:15.169245839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 15 00:45:15.169578 containerd[1634]: time="2026-01-15T00:45:15.169260457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 15 00:45:15.169578 
containerd[1634]: time="2026-01-15T00:45:15.169273501Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 15 00:45:15.169578 containerd[1634]: time="2026-01-15T00:45:15.169302525Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 15 00:45:15.169578 containerd[1634]: time="2026-01-15T00:45:15.169320479Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 15 00:45:15.169578 containerd[1634]: time="2026-01-15T00:45:15.169349543Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 15 00:45:15.169578 containerd[1634]: time="2026-01-15T00:45:15.169363048Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 15 00:45:15.169578 containerd[1634]: time="2026-01-15T00:45:15.169373397Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 15 00:45:15.169900 containerd[1634]: time="2026-01-15T00:45:15.169391261Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 15 00:45:15.169900 containerd[1634]: time="2026-01-15T00:45:15.169401501Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 15 00:45:15.169900 containerd[1634]: time="2026-01-15T00:45:15.169419514Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 15 00:45:15.169900 containerd[1634]: time="2026-01-15T00:45:15.169432328Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 15 00:45:15.169900 containerd[1634]: time="2026-01-15T00:45:15.169452425Z" level=info msg="runtime interface created" Jan 15 00:45:15.169900 containerd[1634]: time="2026-01-15T00:45:15.169464628Z" level=info msg="created NRI interface" Jan 15 00:45:15.169900 containerd[1634]: time="2026-01-15T00:45:15.169474807Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 15 00:45:15.169900 containerd[1634]: time="2026-01-15T00:45:15.169487391Z" level=info msg="Connect containerd service" Jan 15 00:45:15.169900 containerd[1634]: time="2026-01-15T00:45:15.169599099Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 15 00:45:15.170542 containerd[1634]: time="2026-01-15T00:45:15.170442775Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 15 00:45:15.241583 tar[1627]: linux-amd64/README.md Jan 15 00:45:15.259239 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 15 00:45:15.289177 containerd[1634]: time="2026-01-15T00:45:15.289119519Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 15 00:45:15.289719 containerd[1634]: time="2026-01-15T00:45:15.289380276Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 15 00:45:15.289719 containerd[1634]: time="2026-01-15T00:45:15.289597982Z" level=info msg="Start subscribing containerd event" Jan 15 00:45:15.289719 containerd[1634]: time="2026-01-15T00:45:15.289654488Z" level=info msg="Start recovering state" Jan 15 00:45:15.289965 containerd[1634]: time="2026-01-15T00:45:15.289890068Z" level=info msg="Start event monitor" Jan 15 00:45:15.289965 containerd[1634]: time="2026-01-15T00:45:15.289913542Z" level=info msg="Start cni network conf syncer for default" Jan 15 00:45:15.289965 containerd[1634]: time="2026-01-15T00:45:15.289927127Z" level=info msg="Start streaming server" Jan 15 00:45:15.289965 containerd[1634]: time="2026-01-15T00:45:15.289949779Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 15 00:45:15.289965 containerd[1634]: time="2026-01-15T00:45:15.289963685Z" level=info msg="runtime interface starting up..." Jan 15 00:45:15.290132 containerd[1634]: time="2026-01-15T00:45:15.289976890Z" level=info msg="starting plugins..." Jan 15 00:45:15.290132 containerd[1634]: time="2026-01-15T00:45:15.290004061Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 15 00:45:15.290260 containerd[1634]: time="2026-01-15T00:45:15.290215956Z" level=info msg="containerd successfully booted in 0.148281s" Jan 15 00:45:15.290716 systemd[1]: Started containerd.service - containerd container runtime. Jan 15 00:45:15.559783 sshd_keygen[1624]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 15 00:45:15.586564 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 15 00:45:15.592312 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 15 00:45:15.596702 systemd[1]: Started sshd@0-10.0.0.109:22-10.0.0.1:44710.service - OpenSSH per-connection server daemon (10.0.0.1:44710). Jan 15 00:45:15.615427 systemd[1]: issuegen.service: Deactivated successfully. Jan 15 00:45:15.616061 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 15 00:45:15.624928 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 15 00:45:15.652404 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 15 00:45:15.659349 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 15 00:45:15.665480 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 15 00:45:15.670436 systemd[1]: Reached target getty.target - Login Prompts. Jan 15 00:45:15.701477 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 44710 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:45:15.703393 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:45:15.711381 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 15 00:45:15.715611 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 15 00:45:15.724164 systemd-logind[1618]: New session 1 of user core. Jan 15 00:45:15.739862 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 15 00:45:15.746440 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 15 00:45:15.762291 (systemd)[1718]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 15 00:45:15.765917 systemd-logind[1618]: New session c1 of user core. Jan 15 00:45:15.925474 systemd[1718]: Queued start job for default target default.target. 
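The containerd entries above report the daemon serving on /run/containerd/containerd.sock while also warning that no CNI network config was found in /etc/cni/net.d ("cni plugin not initialized"). A minimal sketch, not part of the log, that reproduces those two checks from userspace; both paths are taken from the messages above, everything else is illustrative.

#!/usr/bin/env python3
"""Check the two conditions containerd logged above: its gRPC socket exists,
and /etc/cni/net.d holds no network config yet."""
import os
import stat

CONTAINERD_SOCK = "/run/containerd/containerd.sock"   # serving address from the log
CNI_CONF_DIR = "/etc/cni/net.d"                        # confDir from the CRI config dump

def socket_exists(path: str) -> bool:
    try:
        return stat.S_ISSOCK(os.stat(path).st_mode)
    except FileNotFoundError:
        return False

def cni_configs(conf_dir: str) -> list[str]:
    if not os.path.isdir(conf_dir):
        return []
    return sorted(f for f in os.listdir(conf_dir)
                  if f.endswith((".conf", ".conflist", ".json")))

if __name__ == "__main__":
    print("containerd socket present:", socket_exists(CONTAINERD_SOCK))
    confs = cni_configs(CNI_CONF_DIR)
    if confs:
        print("CNI configs:", ", ".join(confs))
    else:
        print("no CNI network config in", CNI_CONF_DIR,
              "- matches the 'cni plugin not initialized' warning above")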
Jan 15 00:45:15.947396 systemd[1718]: Created slice app.slice - User Application Slice. Jan 15 00:45:15.947471 systemd[1718]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 15 00:45:15.947493 systemd[1718]: Reached target paths.target - Paths. Jan 15 00:45:15.947661 systemd[1718]: Reached target timers.target - Timers. Jan 15 00:45:15.949860 systemd[1718]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 15 00:45:15.951063 systemd[1718]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 15 00:45:15.963835 systemd[1718]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 15 00:45:15.964281 systemd[1718]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 15 00:45:15.964431 systemd[1718]: Reached target sockets.target - Sockets. Jan 15 00:45:15.964554 systemd[1718]: Reached target basic.target - Basic System. Jan 15 00:45:15.964608 systemd[1718]: Reached target default.target - Main User Target. Jan 15 00:45:15.964643 systemd[1718]: Startup finished in 191ms. Jan 15 00:45:15.964974 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 15 00:45:15.978809 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 15 00:45:15.987665 systemd-networkd[1549]: eth0: Gained IPv6LL Jan 15 00:45:16.004093 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 15 00:45:16.009332 systemd[1]: Reached target network-online.target - Network is Online. Jan 15 00:45:16.016073 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 15 00:45:16.022954 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 00:45:16.024731 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 15 00:45:16.041240 systemd[1]: Started sshd@1-10.0.0.109:22-10.0.0.1:44724.service - OpenSSH per-connection server daemon (10.0.0.1:44724). Jan 15 00:45:16.077204 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 15 00:45:16.093979 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 15 00:45:16.094363 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 15 00:45:16.098857 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 15 00:45:16.105190 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 44724 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:45:16.107346 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:45:16.115411 systemd-logind[1618]: New session 2 of user core. Jan 15 00:45:16.121945 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 15 00:45:16.146716 sshd[1752]: Connection closed by 10.0.0.1 port 44724 Jan 15 00:45:16.147333 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Jan 15 00:45:16.163556 systemd[1]: sshd@1-10.0.0.109:22-10.0.0.1:44724.service: Deactivated successfully. Jan 15 00:45:16.166052 systemd[1]: session-2.scope: Deactivated successfully. Jan 15 00:45:16.167145 systemd-logind[1618]: Session 2 logged out. Waiting for processes to exit. Jan 15 00:45:16.171054 systemd[1]: Started sshd@2-10.0.0.109:22-10.0.0.1:44728.service - OpenSSH per-connection server daemon (10.0.0.1:44728). Jan 15 00:45:16.176790 systemd-logind[1618]: Removed session 2. 
Jan 15 00:45:16.244997 sshd[1758]: Accepted publickey for core from 10.0.0.1 port 44728 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:45:16.246785 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:45:16.253979 systemd-logind[1618]: New session 3 of user core. Jan 15 00:45:16.270969 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 15 00:45:16.291710 sshd[1761]: Connection closed by 10.0.0.1 port 44728 Jan 15 00:45:16.292119 sshd-session[1758]: pam_unix(sshd:session): session closed for user core Jan 15 00:45:16.297055 systemd[1]: sshd@2-10.0.0.109:22-10.0.0.1:44728.service: Deactivated successfully. Jan 15 00:45:16.299219 systemd[1]: session-3.scope: Deactivated successfully. Jan 15 00:45:16.300458 systemd-logind[1618]: Session 3 logged out. Waiting for processes to exit. Jan 15 00:45:16.302303 systemd-logind[1618]: Removed session 3. Jan 15 00:45:17.746587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:45:17.752650 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 15 00:45:17.758054 systemd[1]: Startup finished in 3.619s (kernel) + 6.989s (initrd) + 7.944s (userspace) = 18.553s. Jan 15 00:45:17.762984 (kubelet)[1771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 00:45:18.640573 kubelet[1771]: E0115 00:45:18.640423 1771 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 00:45:18.644040 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 00:45:18.644320 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 00:45:18.645039 systemd[1]: kubelet.service: Consumed 2.136s CPU time, 264.6M memory peak. Jan 15 00:45:26.310960 systemd[1]: Started sshd@3-10.0.0.109:22-10.0.0.1:33310.service - OpenSSH per-connection server daemon (10.0.0.1:33310). Jan 15 00:45:26.397587 sshd[1784]: Accepted publickey for core from 10.0.0.1 port 33310 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:45:26.399867 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:45:26.406744 systemd-logind[1618]: New session 4 of user core. Jan 15 00:45:26.416831 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 15 00:45:26.434185 sshd[1787]: Connection closed by 10.0.0.1 port 33310 Jan 15 00:45:26.434662 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Jan 15 00:45:26.444081 systemd[1]: sshd@3-10.0.0.109:22-10.0.0.1:33310.service: Deactivated successfully. Jan 15 00:45:26.446351 systemd[1]: session-4.scope: Deactivated successfully. Jan 15 00:45:26.447638 systemd-logind[1618]: Session 4 logged out. Waiting for processes to exit. Jan 15 00:45:26.450882 systemd[1]: Started sshd@4-10.0.0.109:22-10.0.0.1:33316.service - OpenSSH per-connection server daemon (10.0.0.1:33316). Jan 15 00:45:26.452122 systemd-logind[1618]: Removed session 4. 
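The kubelet exit above is a plain missing-file error: /var/lib/kubelet/config.yaml does not exist yet on this freshly booted node. A small pre-flight sketch over the paths the kubelet names in this journal; the expectation that a later bootstrap step (e.g. kubeadm) writes these files is an assumption, not something the log states.

#!/usr/bin/env python3
"""Report whether the files the kubelet complains about (or logs later) exist."""
from pathlib import Path

CHECKS = {
    "kubelet config": Path("/var/lib/kubelet/config.yaml"),   # path from the error above
    "client CA":      Path("/etc/kubernetes/pki/ca.crt"),     # CA bundle logged by the kubelet later
    "static pod dir": Path("/etc/kubernetes/manifests"),      # static pod path logged by the kubelet later
}

def main() -> None:
    for name, path in CHECKS.items():
        state = "present" if path.exists() else "missing"
        print(f"{name:>15}: {path} ({state})")

if __name__ == "__main__":
    main()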
Jan 15 00:45:26.519651 sshd[1793]: Accepted publickey for core from 10.0.0.1 port 33316 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:45:26.521299 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:45:26.527599 systemd-logind[1618]: New session 5 of user core. Jan 15 00:45:26.541858 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 15 00:45:26.554004 sshd[1796]: Connection closed by 10.0.0.1 port 33316 Jan 15 00:45:26.554362 sshd-session[1793]: pam_unix(sshd:session): session closed for user core Jan 15 00:45:26.572808 systemd[1]: sshd@4-10.0.0.109:22-10.0.0.1:33316.service: Deactivated successfully. Jan 15 00:45:26.576012 systemd[1]: session-5.scope: Deactivated successfully. Jan 15 00:45:26.577476 systemd-logind[1618]: Session 5 logged out. Waiting for processes to exit. Jan 15 00:45:26.581852 systemd[1]: Started sshd@5-10.0.0.109:22-10.0.0.1:33332.service - OpenSSH per-connection server daemon (10.0.0.1:33332). Jan 15 00:45:26.582703 systemd-logind[1618]: Removed session 5. Jan 15 00:45:26.650580 sshd[1802]: Accepted publickey for core from 10.0.0.1 port 33332 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:45:26.652260 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:45:26.658922 systemd-logind[1618]: New session 6 of user core. Jan 15 00:45:26.669751 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 15 00:45:26.687801 sshd[1806]: Connection closed by 10.0.0.1 port 33332 Jan 15 00:45:26.688192 sshd-session[1802]: pam_unix(sshd:session): session closed for user core Jan 15 00:45:26.697136 systemd[1]: sshd@5-10.0.0.109:22-10.0.0.1:33332.service: Deactivated successfully. Jan 15 00:45:26.699176 systemd[1]: session-6.scope: Deactivated successfully. Jan 15 00:45:26.700308 systemd-logind[1618]: Session 6 logged out. Waiting for processes to exit. Jan 15 00:45:26.703055 systemd[1]: Started sshd@6-10.0.0.109:22-10.0.0.1:33340.service - OpenSSH per-connection server daemon (10.0.0.1:33340). Jan 15 00:45:26.703944 systemd-logind[1618]: Removed session 6. Jan 15 00:45:26.773373 sshd[1812]: Accepted publickey for core from 10.0.0.1 port 33340 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:45:26.774911 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:45:26.780885 systemd-logind[1618]: New session 7 of user core. Jan 15 00:45:26.794889 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 15 00:45:26.818069 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 15 00:45:26.818465 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 00:45:28.066267 systemd[1]: Starting docker.service - Docker Application Container Engine... 
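The sshd entries above show a single key (SHA256:Dl+b0QTZ...) opening a series of short-lived sessions for user core from 10.0.0.1. A throwaway parser over a saved copy of this journal that counts accepted logins per key fingerprint; the input filename boot.log is hypothetical.

#!/usr/bin/env python3
"""Summarize 'Accepted publickey' entries from a saved journal text file."""
import re
import sys
from collections import Counter

ACCEPT_RE = re.compile(
    r"Accepted publickey for (?P<user>\S+) from (?P<ip>\S+) port \d+ ssh2: "
    r"RSA (?P<fp>SHA256:\S+)"
)

def main(path: str) -> None:
    counts = Counter()
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            m = ACCEPT_RE.search(line)
            if m:
                counts[(m["user"], m["ip"], m["fp"])] += 1
    for (user, ip, fp), n in counts.most_common():
        print(f"{n:3d} logins  user={user}  from={ip}  key={fp}")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "boot.log")  # filename is hypothetical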
Jan 15 00:45:28.095424 (dockerd)[1836]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 15 00:45:28.435198 dockerd[1836]: time="2026-01-15T00:45:28.435120501Z" level=info msg="Starting up" Jan 15 00:45:28.435987 dockerd[1836]: time="2026-01-15T00:45:28.435945411Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 15 00:45:28.451548 dockerd[1836]: time="2026-01-15T00:45:28.451416822Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 15 00:45:28.530413 dockerd[1836]: time="2026-01-15T00:45:28.530326446Z" level=info msg="Loading containers: start." Jan 15 00:45:28.572597 kernel: Initializing XFRM netlink socket Jan 15 00:45:28.663254 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 15 00:45:28.665685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 00:45:28.933632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:45:28.943088 (kubelet)[1973]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 00:45:29.041291 kubelet[1973]: E0115 00:45:29.040653 1973 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 00:45:29.048171 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 00:45:29.048396 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 00:45:29.049059 systemd[1]: kubelet.service: Consumed 292ms CPU time, 111.3M memory peak. Jan 15 00:45:29.106792 systemd-networkd[1549]: docker0: Link UP Jan 15 00:45:29.112800 dockerd[1836]: time="2026-01-15T00:45:29.112695305Z" level=info msg="Loading containers: done." Jan 15 00:45:29.131439 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2436298825-merged.mount: Deactivated successfully. Jan 15 00:45:29.136886 dockerd[1836]: time="2026-01-15T00:45:29.136715289Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 15 00:45:29.136886 dockerd[1836]: time="2026-01-15T00:45:29.136861672Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 15 00:45:29.137148 dockerd[1836]: time="2026-01-15T00:45:29.137083366Z" level=info msg="Initializing buildkit" Jan 15 00:45:29.176486 dockerd[1836]: time="2026-01-15T00:45:29.176418955Z" level=info msg="Completed buildkit initialization" Jan 15 00:45:29.182619 dockerd[1836]: time="2026-01-15T00:45:29.182578547Z" level=info msg="Daemon has completed initialization" Jan 15 00:45:29.182733 dockerd[1836]: time="2026-01-15T00:45:29.182659358Z" level=info msg="API listen on /run/docker.sock" Jan 15 00:45:29.182991 systemd[1]: Started docker.service - Docker Application Container Engine. 
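dockerd above warns it is not using native diff for overlay2 because this kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR. A quick sketch that surfaces the same two facts on a running node; the sysfs location of the overlay redirect_dir parameter is an assumption about this kernel build, not something taken from the log.

#!/usr/bin/env python3
"""Print Docker's storage driver and the overlayfs redirect_dir module parameter."""
import subprocess
from pathlib import Path

def docker_storage_driver() -> str:
    # `docker info --format` with a Go template is standard Docker CLI usage.
    out = subprocess.run(
        ["docker", "info", "--format", "{{.Driver}}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def overlay_redirect_dir() -> str:
    # Assumed sysfs path for the overlay module parameter on this kernel.
    param = Path("/sys/module/overlay/parameters/redirect_dir")
    return param.read_text().strip() if param.exists() else "unknown"

if __name__ == "__main__":
    print("storage driver:", docker_storage_driver())
    print("redirect_dir  :", overlay_redirect_dir())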
Jan 15 00:45:29.959180 containerd[1634]: time="2026-01-15T00:45:29.958858063Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 15 00:45:30.519803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3701493379.mount: Deactivated successfully. Jan 15 00:45:31.503848 containerd[1634]: time="2026-01-15T00:45:31.503718657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:45:31.504732 containerd[1634]: time="2026-01-15T00:45:31.504653872Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 15 00:45:31.506259 containerd[1634]: time="2026-01-15T00:45:31.506150414Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:45:31.508975 containerd[1634]: time="2026-01-15T00:45:31.508910292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:45:31.510020 containerd[1634]: time="2026-01-15T00:45:31.509938738Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 1.551044468s" Jan 15 00:45:31.510020 containerd[1634]: time="2026-01-15T00:45:31.509989323Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 15 00:45:31.510832 containerd[1634]: time="2026-01-15T00:45:31.510677708Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 15 00:45:32.838891 containerd[1634]: time="2026-01-15T00:45:32.838823216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:45:32.840297 containerd[1634]: time="2026-01-15T00:45:32.840211547Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 15 00:45:32.841829 containerd[1634]: time="2026-01-15T00:45:32.841704804Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:45:32.845214 containerd[1634]: time="2026-01-15T00:45:32.845175696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:45:32.845912 containerd[1634]: time="2026-01-15T00:45:32.845857632Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 
1.335132667s" Jan 15 00:45:32.845912 containerd[1634]: time="2026-01-15T00:45:32.845903418Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 15 00:45:32.846654 containerd[1634]: time="2026-01-15T00:45:32.846623192Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 15 00:45:34.171805 containerd[1634]: time="2026-01-15T00:45:34.171609037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:45:34.172885 containerd[1634]: time="2026-01-15T00:45:34.172724522Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 15 00:45:34.174573 containerd[1634]: time="2026-01-15T00:45:34.174401864Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:45:34.178036 containerd[1634]: time="2026-01-15T00:45:34.177955428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:45:34.179103 containerd[1634]: time="2026-01-15T00:45:34.179022871Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.332372679s" Jan 15 00:45:34.179103 containerd[1634]: time="2026-01-15T00:45:34.179067965Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 15 00:45:34.180986 containerd[1634]: time="2026-01-15T00:45:34.180718508Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 15 00:45:35.095474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount207236271.mount: Deactivated successfully. 
Jan 15 00:45:35.619342 containerd[1634]: time="2026-01-15T00:45:35.619242605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:45:35.620434 containerd[1634]: time="2026-01-15T00:45:35.620381202Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=19572392" Jan 15 00:45:35.621571 containerd[1634]: time="2026-01-15T00:45:35.621440089Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:45:35.624741 containerd[1634]: time="2026-01-15T00:45:35.624588298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:45:35.625533 containerd[1634]: time="2026-01-15T00:45:35.625403245Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.444636147s" Jan 15 00:45:35.625533 containerd[1634]: time="2026-01-15T00:45:35.625467445Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 15 00:45:35.626316 containerd[1634]: time="2026-01-15T00:45:35.626223131Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 15 00:45:36.285066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1395425849.mount: Deactivated successfully. 
Jan 15 00:45:37.111428 containerd[1634]: time="2026-01-15T00:45:37.111209987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:45:37.112294 containerd[1634]: time="2026-01-15T00:45:37.112256861Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17569900" Jan 15 00:45:37.113910 containerd[1634]: time="2026-01-15T00:45:37.113836952Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:45:37.117625 containerd[1634]: time="2026-01-15T00:45:37.117572786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:45:37.118686 containerd[1634]: time="2026-01-15T00:45:37.118602282Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.49231409s" Jan 15 00:45:37.118686 containerd[1634]: time="2026-01-15T00:45:37.118675589Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 15 00:45:37.119361 containerd[1634]: time="2026-01-15T00:45:37.119304283Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 15 00:45:37.527363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount254514842.mount: Deactivated successfully. 
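Each containerd "Pulled image ... in <duration>" entry above carries the image size in bytes alongside the pull duration, so approximate pull throughput can be tabulated straight from the journal text. A throwaway sketch over a saved copy of this log; the input filename boot.log is hypothetical, and the reported rate is only approximate since the logged size is not exactly the bytes transferred.

#!/usr/bin/env python3
"""Tabulate image pull sizes, durations, and rough throughput from journal text."""
import re
import sys

# Matches containerd's pull summary, e.g.
#   Pulled image \"registry.k8s.io/pause:3.10\" ... size \"320368\" in 422.600885ms
PULL_RE = re.compile(
    r'Pulled image \\?"(?P<image>[^"\\]+)\\?".*?'
    r'size \\?"(?P<size>\d+)\\?" in (?P<dur>[\d.]+)(?P<unit>ms|s)'
)

def main(path: str) -> None:
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            m = PULL_RE.search(line)
            if not m:
                continue
            secs = float(m["dur"]) / (1000.0 if m["unit"] == "ms" else 1.0)
            mib = int(m["size"]) / (1024 * 1024)
            print(f"{m['image']:<55} {mib:7.1f} MiB in {secs:6.2f}s "
                  f"({mib / secs:5.1f} MiB/s)")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "boot.log")  # filename is hypothetical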
Jan 15 00:45:37.535471 containerd[1634]: time="2026-01-15T00:45:37.535423918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 00:45:37.536880 containerd[1634]: time="2026-01-15T00:45:37.536826524Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 15 00:45:37.538344 containerd[1634]: time="2026-01-15T00:45:37.538280408Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 00:45:37.541435 containerd[1634]: time="2026-01-15T00:45:37.541353827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 00:45:37.541968 containerd[1634]: time="2026-01-15T00:45:37.541934934Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 422.600885ms" Jan 15 00:45:37.542046 containerd[1634]: time="2026-01-15T00:45:37.541974207Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 15 00:45:37.542804 containerd[1634]: time="2026-01-15T00:45:37.542575290Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 15 00:45:38.035674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3812979555.mount: Deactivated successfully. Jan 15 00:45:39.163616 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 15 00:45:39.167143 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 00:45:39.370387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:45:39.375606 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 00:45:39.457671 kubelet[2263]: E0115 00:45:39.457165 2263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 00:45:39.461185 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 00:45:39.461464 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 00:45:39.462255 systemd[1]: kubelet.service: Consumed 259ms CPU time, 111.6M memory peak. 
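systemd keeps restarting the kubelet on a timer and the counter above is already at 2, with each attempt failing on the same missing config file. A quick sketch, run on the node itself rather than derived from the log, that asks systemd for the restart count and the most recent unit output; NRestarts is a standard service property on current systemd, but treating it as available here is an assumption.

#!/usr/bin/env python3
"""Show the kubelet.service restart counter and its last few journal lines."""
import subprocess

def run(*cmd: str) -> str:
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()

if __name__ == "__main__":
    restarts = run("systemctl", "show", "kubelet.service", "-p", "NRestarts", "--value")
    print("kubelet restarts so far:", restarts)
    # Last few journal lines for the unit; -u, -n and --no-pager are standard journalctl flags.
    print(run("journalctl", "-u", "kubelet.service", "-n", "5", "--no-pager"))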
Jan 15 00:45:40.288990 containerd[1634]: time="2026-01-15T00:45:40.288884219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:45:40.290042 containerd[1634]: time="2026-01-15T00:45:40.290011208Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=45502580" Jan 15 00:45:40.292272 containerd[1634]: time="2026-01-15T00:45:40.292122946Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:45:40.296238 containerd[1634]: time="2026-01-15T00:45:40.296163182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:45:40.297022 containerd[1634]: time="2026-01-15T00:45:40.296937858Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.754327561s" Jan 15 00:45:40.297022 containerd[1634]: time="2026-01-15T00:45:40.296990686Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 15 00:45:42.812162 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:45:42.812476 systemd[1]: kubelet.service: Consumed 259ms CPU time, 111.6M memory peak. Jan 15 00:45:42.815596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 00:45:42.850181 systemd[1]: Reload requested from client PID 2305 ('systemctl') (unit session-7.scope)... Jan 15 00:45:42.850217 systemd[1]: Reloading... Jan 15 00:45:42.946613 zram_generator::config[2350]: No configuration found. Jan 15 00:45:43.237050 systemd[1]: Reloading finished in 386 ms. Jan 15 00:45:43.320185 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 15 00:45:43.320310 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 15 00:45:43.320745 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:45:43.320889 systemd[1]: kubelet.service: Consumed 165ms CPU time, 98.4M memory peak. Jan 15 00:45:43.322762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 00:45:43.495435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:45:43.520901 (kubelet)[2399]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 00:45:43.567482 kubelet[2399]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 00:45:43.567482 kubelet[2399]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 15 00:45:43.567482 kubelet[2399]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 00:45:43.567924 kubelet[2399]: I0115 00:45:43.567554 2399 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 00:45:43.868907 kubelet[2399]: I0115 00:45:43.868710 2399 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 15 00:45:43.868907 kubelet[2399]: I0115 00:45:43.868809 2399 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 00:45:43.869221 kubelet[2399]: I0115 00:45:43.869159 2399 server.go:954] "Client rotation is on, will bootstrap in background" Jan 15 00:45:43.893597 kubelet[2399]: E0115 00:45:43.892181 2399 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:45:43.893854 kubelet[2399]: I0115 00:45:43.893604 2399 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 00:45:43.900891 kubelet[2399]: I0115 00:45:43.900764 2399 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 15 00:45:43.907367 kubelet[2399]: I0115 00:45:43.907316 2399 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 15 00:45:43.908230 kubelet[2399]: I0115 00:45:43.908133 2399 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 00:45:43.908368 kubelet[2399]: I0115 00:45:43.908184 2399 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 15 00:45:43.908368 kubelet[2399]: I0115 00:45:43.908362 
2399 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 00:45:43.908581 kubelet[2399]: I0115 00:45:43.908371 2399 container_manager_linux.go:304] "Creating device plugin manager" Jan 15 00:45:43.908581 kubelet[2399]: I0115 00:45:43.908485 2399 state_mem.go:36] "Initialized new in-memory state store" Jan 15 00:45:43.911328 kubelet[2399]: I0115 00:45:43.911232 2399 kubelet.go:446] "Attempting to sync node with API server" Jan 15 00:45:43.911328 kubelet[2399]: I0115 00:45:43.911278 2399 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 00:45:43.911328 kubelet[2399]: I0115 00:45:43.911302 2399 kubelet.go:352] "Adding apiserver pod source" Jan 15 00:45:43.911328 kubelet[2399]: I0115 00:45:43.911313 2399 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 00:45:43.915080 kubelet[2399]: I0115 00:45:43.914758 2399 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 15 00:45:43.915546 kubelet[2399]: I0115 00:45:43.915399 2399 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 00:45:43.915689 kubelet[2399]: W0115 00:45:43.915558 2399 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 15 00:45:43.917842 kubelet[2399]: W0115 00:45:43.916189 2399 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused Jan 15 00:45:43.917842 kubelet[2399]: E0115 00:45:43.916232 2399 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:45:43.917842 kubelet[2399]: W0115 00:45:43.917473 2399 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused Jan 15 00:45:43.917842 kubelet[2399]: E0115 00:45:43.917600 2399 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:45:43.918083 kubelet[2399]: I0115 00:45:43.918052 2399 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 15 00:45:43.918128 kubelet[2399]: I0115 00:45:43.918119 2399 server.go:1287] "Started kubelet" Jan 15 00:45:43.919371 kubelet[2399]: I0115 00:45:43.919325 2399 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 00:45:43.923291 kubelet[2399]: I0115 00:45:43.922296 2399 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 00:45:43.923291 kubelet[2399]: I0115 00:45:43.922266 2399 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 00:45:43.923291 kubelet[2399]: I0115 00:45:43.922616 2399 server.go:243] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 00:45:43.924575 kubelet[2399]: I0115 00:45:43.924131 2399 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 00:45:43.925803 kubelet[2399]: I0115 00:45:43.925726 2399 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 15 00:45:43.926159 kubelet[2399]: E0115 00:45:43.926142 2399 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 15 00:45:43.926276 kubelet[2399]: I0115 00:45:43.926264 2399 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 15 00:45:43.926356 kubelet[2399]: I0115 00:45:43.926347 2399 reconciler.go:26] "Reconciler: start to sync state" Jan 15 00:45:43.926479 kubelet[2399]: E0115 00:45:43.924285 2399 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.109:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.109:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ac0f0be8f2da7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-15 00:45:43.918079399 +0000 UTC m=+0.391875148,LastTimestamp:2026-01-15 00:45:43.918079399 +0000 UTC m=+0.391875148,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 15 00:45:43.927316 kubelet[2399]: W0115 00:45:43.927196 2399 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused Jan 15 00:45:43.927316 kubelet[2399]: E0115 00:45:43.927259 2399 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:45:43.927383 kubelet[2399]: E0115 00:45:43.927318 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="200ms" Jan 15 00:45:43.928744 kubelet[2399]: I0115 00:45:43.928685 2399 factory.go:221] Registration of the systemd container factory successfully Jan 15 00:45:43.928842 kubelet[2399]: I0115 00:45:43.928817 2399 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 00:45:43.929356 kubelet[2399]: I0115 00:45:43.929339 2399 server.go:479] "Adding debug handlers to kubelet server" Jan 15 00:45:43.930598 kubelet[2399]: E0115 00:45:43.930478 2399 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 00:45:43.930765 kubelet[2399]: I0115 00:45:43.930735 2399 factory.go:221] Registration of the containerd container factory successfully Jan 15 00:45:43.949372 kubelet[2399]: I0115 00:45:43.949135 2399 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 15 00:45:43.949372 kubelet[2399]: I0115 00:45:43.949149 2399 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 15 00:45:43.949372 kubelet[2399]: I0115 00:45:43.949164 2399 state_mem.go:36] "Initialized new in-memory state store" Jan 15 00:45:43.952424 kubelet[2399]: I0115 00:45:43.952369 2399 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 00:45:43.954837 kubelet[2399]: I0115 00:45:43.954770 2399 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 15 00:45:43.954837 kubelet[2399]: I0115 00:45:43.954837 2399 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 15 00:45:43.954908 kubelet[2399]: I0115 00:45:43.954854 2399 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 15 00:45:43.954908 kubelet[2399]: I0115 00:45:43.954886 2399 kubelet.go:2382] "Starting kubelet main sync loop" Jan 15 00:45:43.954951 kubelet[2399]: E0115 00:45:43.954936 2399 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 00:45:44.017466 kubelet[2399]: W0115 00:45:44.017370 2399 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused Jan 15 00:45:44.017632 kubelet[2399]: E0115 00:45:44.017553 2399 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:45:44.017632 kubelet[2399]: I0115 00:45:44.017417 2399 policy_none.go:49] "None policy: Start" Jan 15 00:45:44.017686 kubelet[2399]: I0115 00:45:44.017679 2399 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 15 00:45:44.017714 kubelet[2399]: I0115 00:45:44.017702 2399 state_mem.go:35] "Initializing new in-memory state store" Jan 15 00:45:44.026400 kubelet[2399]: E0115 00:45:44.026352 2399 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 15 00:45:44.026753 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 15 00:45:44.045675 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 15 00:45:44.051282 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
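Every client-go reflector above fails with "dial tcp 10.0.0.109:6443: connect: connection refused"; the kubelet is up before any API server is listening. A minimal probe of that same endpoint, with the address and port taken from the log and everything else illustrative.

#!/usr/bin/env python3
"""Probe the API server endpoint the kubelet is failing to reach."""
import socket

API_HOST, API_PORT = "10.0.0.109", 6443   # endpoint from the errors above

def probe(host: str, port: int, timeout: float = 2.0) -> str:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "connection refused"
    except OSError as exc:
        return f"error: {exc}"

if __name__ == "__main__":
    print(f"{API_HOST}:{API_PORT} -> {probe(API_HOST, API_PORT)}")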
Jan 15 00:45:44.055489 kubelet[2399]: E0115 00:45:44.055432 2399 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 15 00:45:44.066048 kubelet[2399]: I0115 00:45:44.065977 2399 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 00:45:44.066270 kubelet[2399]: I0115 00:45:44.066232 2399 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 00:45:44.066996 kubelet[2399]: I0115 00:45:44.066250 2399 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 00:45:44.066996 kubelet[2399]: I0115 00:45:44.066667 2399 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 00:45:44.068897 kubelet[2399]: E0115 00:45:44.068876 2399 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 15 00:45:44.068948 kubelet[2399]: E0115 00:45:44.068923 2399 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 15 00:45:44.129319 kubelet[2399]: E0115 00:45:44.129076 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="400ms" Jan 15 00:45:44.168974 kubelet[2399]: I0115 00:45:44.168817 2399 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 00:45:44.169334 kubelet[2399]: E0115 00:45:44.169233 2399 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" Jan 15 00:45:44.269321 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Jan 15 00:45:44.292979 kubelet[2399]: E0115 00:45:44.292893 2399 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:45:44.297417 systemd[1]: Created slice kubepods-burstable-pod00407fdca98902c035255c350cd77970.slice - libcontainer container kubepods-burstable-pod00407fdca98902c035255c350cd77970.slice. Jan 15 00:45:44.310440 kubelet[2399]: E0115 00:45:44.310373 2399 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:45:44.315199 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. 
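The pod UIDs in the cgroup slices above belong to the control-plane static pods the kubelet reads from its static pod path (/etc/kubernetes/manifests, logged earlier). A crude sketch that lists those manifests and pulls the pod name out of each one without a YAML library; it assumes kubeadm-style two-space indentation under metadata, which is not guaranteed by the log.

#!/usr/bin/env python3
"""List static pod manifests and extract metadata.name with a simple regex."""
import re
from pathlib import Path

MANIFEST_DIR = Path("/etc/kubernetes/manifests")   # "Adding static pod path" above
NAME_RE = re.compile(r"^\s{2}name:\s*(\S+)", re.MULTILINE)   # assumes 2-space indent

if __name__ == "__main__":
    for manifest in sorted(MANIFEST_DIR.glob("*.yaml")):
        m = NAME_RE.search(manifest.read_text())
        print(f"{manifest.name:<35} -> {m.group(1) if m else '<no name found>'}")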
Jan 15 00:45:44.318271 kubelet[2399]: E0115 00:45:44.318185 2399 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:45:44.327986 kubelet[2399]: I0115 00:45:44.327882 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 15 00:45:44.327986 kubelet[2399]: I0115 00:45:44.327924 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/00407fdca98902c035255c350cd77970-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"00407fdca98902c035255c350cd77970\") " pod="kube-system/kube-apiserver-localhost" Jan 15 00:45:44.327986 kubelet[2399]: I0115 00:45:44.327942 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00407fdca98902c035255c350cd77970-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"00407fdca98902c035255c350cd77970\") " pod="kube-system/kube-apiserver-localhost" Jan 15 00:45:44.327986 kubelet[2399]: I0115 00:45:44.327957 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:45:44.327986 kubelet[2399]: I0115 00:45:44.327970 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/00407fdca98902c035255c350cd77970-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"00407fdca98902c035255c350cd77970\") " pod="kube-system/kube-apiserver-localhost" Jan 15 00:45:44.328122 kubelet[2399]: I0115 00:45:44.327982 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:45:44.328122 kubelet[2399]: I0115 00:45:44.327996 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:45:44.328122 kubelet[2399]: I0115 00:45:44.328009 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:45:44.328122 kubelet[2399]: I0115 00:45:44.328024 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:45:44.371478 kubelet[2399]: I0115 00:45:44.371421 2399 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 00:45:44.372048 kubelet[2399]: E0115 00:45:44.371975 2399 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" Jan 15 00:45:44.530111 kubelet[2399]: E0115 00:45:44.530037 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="800ms" Jan 15 00:45:44.594365 kubelet[2399]: E0115 00:45:44.594232 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:44.597437 containerd[1634]: time="2026-01-15T00:45:44.596823174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 15 00:45:44.611228 kubelet[2399]: E0115 00:45:44.611163 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:44.611939 containerd[1634]: time="2026-01-15T00:45:44.611889345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:00407fdca98902c035255c350cd77970,Namespace:kube-system,Attempt:0,}" Jan 15 00:45:44.619738 kubelet[2399]: E0115 00:45:44.619618 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:44.620145 containerd[1634]: time="2026-01-15T00:45:44.620110496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 15 00:45:44.624373 containerd[1634]: time="2026-01-15T00:45:44.624243965Z" level=info msg="connecting to shim bfe4cc5087579d479017cd4ff75066293d9f515bf2abf48011173b0aa0b1db67" address="unix:///run/containerd/s/5e10f2351b07ef49e5beca623029c405fc6d9d0cbb6795bf3d8010372acbbfdd" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:45:44.649002 containerd[1634]: time="2026-01-15T00:45:44.648919614Z" level=info msg="connecting to shim ab91db89eb0eeab0ff4821732a2e1952d7a92fb549556b49056dba0d50f83771" address="unix:///run/containerd/s/455f6d68cd0c7b0f1e0d7ae2a705e9fa4bfa41272f85689404fd152b02cc1b39" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:45:44.663557 containerd[1634]: time="2026-01-15T00:45:44.663159813Z" level=info msg="connecting to shim c70ac2e36e56e46dedd5e9e325b231b7a050d1c9893b6042d5ef962648f4cd6b" address="unix:///run/containerd/s/72cb3d5bd6126a7f31fc5f2dc54c8ae4e7bd39a86ea767812b46c465edbeabc4" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:45:44.679871 systemd[1]: Started cri-containerd-bfe4cc5087579d479017cd4ff75066293d9f515bf2abf48011173b0aa0b1db67.scope - libcontainer container 
bfe4cc5087579d479017cd4ff75066293d9f515bf2abf48011173b0aa0b1db67. Jan 15 00:45:44.692859 systemd[1]: Started cri-containerd-ab91db89eb0eeab0ff4821732a2e1952d7a92fb549556b49056dba0d50f83771.scope - libcontainer container ab91db89eb0eeab0ff4821732a2e1952d7a92fb549556b49056dba0d50f83771. Jan 15 00:45:44.704828 systemd[1]: Started cri-containerd-c70ac2e36e56e46dedd5e9e325b231b7a050d1c9893b6042d5ef962648f4cd6b.scope - libcontainer container c70ac2e36e56e46dedd5e9e325b231b7a050d1c9893b6042d5ef962648f4cd6b. Jan 15 00:45:44.758617 containerd[1634]: time="2026-01-15T00:45:44.758308513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:00407fdca98902c035255c350cd77970,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab91db89eb0eeab0ff4821732a2e1952d7a92fb549556b49056dba0d50f83771\"" Jan 15 00:45:44.761319 kubelet[2399]: E0115 00:45:44.761299 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:44.770012 containerd[1634]: time="2026-01-15T00:45:44.769126545Z" level=info msg="CreateContainer within sandbox \"ab91db89eb0eeab0ff4821732a2e1952d7a92fb549556b49056dba0d50f83771\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 15 00:45:44.774700 kubelet[2399]: I0115 00:45:44.774609 2399 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 00:45:44.775437 kubelet[2399]: E0115 00:45:44.775338 2399 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" Jan 15 00:45:44.780077 containerd[1634]: time="2026-01-15T00:45:44.779941016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfe4cc5087579d479017cd4ff75066293d9f515bf2abf48011173b0aa0b1db67\"" Jan 15 00:45:44.781663 kubelet[2399]: E0115 00:45:44.781484 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:44.784466 containerd[1634]: time="2026-01-15T00:45:44.784369333Z" level=info msg="CreateContainer within sandbox \"bfe4cc5087579d479017cd4ff75066293d9f515bf2abf48011173b0aa0b1db67\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 15 00:45:44.784597 containerd[1634]: time="2026-01-15T00:45:44.784369886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"c70ac2e36e56e46dedd5e9e325b231b7a050d1c9893b6042d5ef962648f4cd6b\"" Jan 15 00:45:44.785249 kubelet[2399]: E0115 00:45:44.785182 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:44.788431 containerd[1634]: time="2026-01-15T00:45:44.788288942Z" level=info msg="CreateContainer within sandbox \"c70ac2e36e56e46dedd5e9e325b231b7a050d1c9893b6042d5ef962648f4cd6b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 15 00:45:44.788942 containerd[1634]: time="2026-01-15T00:45:44.788892230Z" level=info msg="Container 
c8b10ee29a065e2a1d0a0c864fbcc986119db4eb7ef45c5d7d726ed6eeca7225: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:45:44.814829 kubelet[2399]: W0115 00:45:44.814641 2399 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused Jan 15 00:45:44.814829 kubelet[2399]: E0115 00:45:44.814727 2399 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:45:44.848413 kubelet[2399]: W0115 00:45:44.848259 2399 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused Jan 15 00:45:44.848413 kubelet[2399]: E0115 00:45:44.848375 2399 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:45:44.856953 containerd[1634]: time="2026-01-15T00:45:44.856863625Z" level=info msg="Container 3c6e957c468185092d2f4b0927b7b302b6e11ab18d288b004dd0107b95f24717: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:45:44.860975 containerd[1634]: time="2026-01-15T00:45:44.860924284Z" level=info msg="Container 885804cb3eb420b653c1e861870f39e81f5bed4741ee16fc0627d5d59283a7e3: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:45:44.861484 containerd[1634]: time="2026-01-15T00:45:44.861429300Z" level=info msg="CreateContainer within sandbox \"ab91db89eb0eeab0ff4821732a2e1952d7a92fb549556b49056dba0d50f83771\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c8b10ee29a065e2a1d0a0c864fbcc986119db4eb7ef45c5d7d726ed6eeca7225\"" Jan 15 00:45:44.862357 containerd[1634]: time="2026-01-15T00:45:44.862308209Z" level=info msg="StartContainer for \"c8b10ee29a065e2a1d0a0c864fbcc986119db4eb7ef45c5d7d726ed6eeca7225\"" Jan 15 00:45:44.864983 containerd[1634]: time="2026-01-15T00:45:44.864900011Z" level=info msg="connecting to shim c8b10ee29a065e2a1d0a0c864fbcc986119db4eb7ef45c5d7d726ed6eeca7225" address="unix:///run/containerd/s/455f6d68cd0c7b0f1e0d7ae2a705e9fa4bfa41272f85689404fd152b02cc1b39" protocol=ttrpc version=3 Jan 15 00:45:44.865372 containerd[1634]: time="2026-01-15T00:45:44.864934704Z" level=info msg="CreateContainer within sandbox \"bfe4cc5087579d479017cd4ff75066293d9f515bf2abf48011173b0aa0b1db67\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3c6e957c468185092d2f4b0927b7b302b6e11ab18d288b004dd0107b95f24717\"" Jan 15 00:45:44.865942 containerd[1634]: time="2026-01-15T00:45:44.865883728Z" level=info msg="StartContainer for \"3c6e957c468185092d2f4b0927b7b302b6e11ab18d288b004dd0107b95f24717\"" Jan 15 00:45:44.867185 containerd[1634]: time="2026-01-15T00:45:44.867164961Z" level=info msg="connecting to shim 3c6e957c468185092d2f4b0927b7b302b6e11ab18d288b004dd0107b95f24717" 
address="unix:///run/containerd/s/5e10f2351b07ef49e5beca623029c405fc6d9d0cbb6795bf3d8010372acbbfdd" protocol=ttrpc version=3 Jan 15 00:45:44.870176 containerd[1634]: time="2026-01-15T00:45:44.870082218Z" level=info msg="CreateContainer within sandbox \"c70ac2e36e56e46dedd5e9e325b231b7a050d1c9893b6042d5ef962648f4cd6b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"885804cb3eb420b653c1e861870f39e81f5bed4741ee16fc0627d5d59283a7e3\"" Jan 15 00:45:44.871012 containerd[1634]: time="2026-01-15T00:45:44.870962051Z" level=info msg="StartContainer for \"885804cb3eb420b653c1e861870f39e81f5bed4741ee16fc0627d5d59283a7e3\"" Jan 15 00:45:44.873276 containerd[1634]: time="2026-01-15T00:45:44.873153684Z" level=info msg="connecting to shim 885804cb3eb420b653c1e861870f39e81f5bed4741ee16fc0627d5d59283a7e3" address="unix:///run/containerd/s/72cb3d5bd6126a7f31fc5f2dc54c8ae4e7bd39a86ea767812b46c465edbeabc4" protocol=ttrpc version=3 Jan 15 00:45:44.901723 systemd[1]: Started cri-containerd-3c6e957c468185092d2f4b0927b7b302b6e11ab18d288b004dd0107b95f24717.scope - libcontainer container 3c6e957c468185092d2f4b0927b7b302b6e11ab18d288b004dd0107b95f24717. Jan 15 00:45:44.903069 systemd[1]: Started cri-containerd-c8b10ee29a065e2a1d0a0c864fbcc986119db4eb7ef45c5d7d726ed6eeca7225.scope - libcontainer container c8b10ee29a065e2a1d0a0c864fbcc986119db4eb7ef45c5d7d726ed6eeca7225. Jan 15 00:45:44.908154 systemd[1]: Started cri-containerd-885804cb3eb420b653c1e861870f39e81f5bed4741ee16fc0627d5d59283a7e3.scope - libcontainer container 885804cb3eb420b653c1e861870f39e81f5bed4741ee16fc0627d5d59283a7e3. Jan 15 00:45:44.983216 containerd[1634]: time="2026-01-15T00:45:44.983126172Z" level=info msg="StartContainer for \"c8b10ee29a065e2a1d0a0c864fbcc986119db4eb7ef45c5d7d726ed6eeca7225\" returns successfully" Jan 15 00:45:44.989454 containerd[1634]: time="2026-01-15T00:45:44.988843528Z" level=info msg="StartContainer for \"3c6e957c468185092d2f4b0927b7b302b6e11ab18d288b004dd0107b95f24717\" returns successfully" Jan 15 00:45:44.999237 containerd[1634]: time="2026-01-15T00:45:44.999186403Z" level=info msg="StartContainer for \"885804cb3eb420b653c1e861870f39e81f5bed4741ee16fc0627d5d59283a7e3\" returns successfully" Jan 15 00:45:45.577577 kubelet[2399]: I0115 00:45:45.577482 2399 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 00:45:45.991409 kubelet[2399]: E0115 00:45:45.991041 2399 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 15 00:45:45.995177 kubelet[2399]: E0115 00:45:45.994973 2399 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:45:45.995177 kubelet[2399]: E0115 00:45:45.995092 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:46.004709 kubelet[2399]: E0115 00:45:46.004651 2399 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:45:46.004927 kubelet[2399]: E0115 00:45:46.004856 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:46.009481 kubelet[2399]: E0115 00:45:46.009462 2399 
kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:45:46.009975 kubelet[2399]: E0115 00:45:46.009923 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:46.079341 kubelet[2399]: I0115 00:45:46.079228 2399 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 15 00:45:46.079341 kubelet[2399]: E0115 00:45:46.079272 2399 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 15 00:45:46.128451 kubelet[2399]: I0115 00:45:46.128400 2399 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 15 00:45:46.139575 kubelet[2399]: E0115 00:45:46.139419 2399 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 15 00:45:46.139575 kubelet[2399]: I0115 00:45:46.139483 2399 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 15 00:45:46.142314 kubelet[2399]: E0115 00:45:46.142189 2399 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 15 00:45:46.142314 kubelet[2399]: I0115 00:45:46.142209 2399 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 15 00:45:46.144424 kubelet[2399]: E0115 00:45:46.144268 2399 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 15 00:45:46.913094 kubelet[2399]: I0115 00:45:46.913008 2399 apiserver.go:52] "Watching apiserver" Jan 15 00:45:46.926913 kubelet[2399]: I0115 00:45:46.926825 2399 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 15 00:45:47.009139 kubelet[2399]: I0115 00:45:47.008919 2399 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 15 00:45:47.009139 kubelet[2399]: I0115 00:45:47.008972 2399 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 15 00:45:47.009937 kubelet[2399]: I0115 00:45:47.009689 2399 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 15 00:45:47.016009 kubelet[2399]: E0115 00:45:47.015945 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:47.018945 kubelet[2399]: E0115 00:45:47.018848 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:47.020347 kubelet[2399]: E0115 00:45:47.020304 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:48.011013 
kubelet[2399]: E0115 00:45:48.010955 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:48.011438 kubelet[2399]: E0115 00:45:48.011066 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:48.011438 kubelet[2399]: I0115 00:45:48.011191 2399 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 15 00:45:48.020491 kubelet[2399]: E0115 00:45:48.020413 2399 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 15 00:45:48.020752 kubelet[2399]: E0115 00:45:48.020600 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:48.457426 systemd[1]: Reload requested from client PID 2672 ('systemctl') (unit session-7.scope)... Jan 15 00:45:48.457468 systemd[1]: Reloading... Jan 15 00:45:48.554610 zram_generator::config[2721]: No configuration found. Jan 15 00:45:48.790328 systemd[1]: Reloading finished in 332 ms. Jan 15 00:45:48.831083 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 00:45:48.852976 systemd[1]: kubelet.service: Deactivated successfully. Jan 15 00:45:48.853444 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:45:48.853615 systemd[1]: kubelet.service: Consumed 987ms CPU time, 129.5M memory peak. Jan 15 00:45:48.856072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 00:45:49.081695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:45:49.089043 (kubelet)[2763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 00:45:49.164107 kubelet[2763]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 00:45:49.164107 kubelet[2763]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 15 00:45:49.164107 kubelet[2763]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 15 00:45:49.164107 kubelet[2763]: I0115 00:45:49.163915 2763 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 00:45:49.174204 kubelet[2763]: I0115 00:45:49.174138 2763 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 15 00:45:49.174204 kubelet[2763]: I0115 00:45:49.174184 2763 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 00:45:49.174425 kubelet[2763]: I0115 00:45:49.174379 2763 server.go:954] "Client rotation is on, will bootstrap in background" Jan 15 00:45:49.175585 kubelet[2763]: I0115 00:45:49.175475 2763 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 15 00:45:49.177953 kubelet[2763]: I0115 00:45:49.177587 2763 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 00:45:49.182132 kubelet[2763]: I0115 00:45:49.182099 2763 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 15 00:45:49.190043 kubelet[2763]: I0115 00:45:49.189977 2763 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 15 00:45:49.190374 kubelet[2763]: I0115 00:45:49.190300 2763 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 00:45:49.190556 kubelet[2763]: I0115 00:45:49.190344 2763 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 15 00:45:49.190665 kubelet[2763]: I0115 00:45:49.190566 2763 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 00:45:49.190665 kubelet[2763]: I0115 00:45:49.190584 2763 container_manager_linux.go:304] "Creating device plugin manager" Jan 15 00:45:49.190665 kubelet[2763]: I0115 00:45:49.190638 2763 state_mem.go:36] "Initialized new in-memory state store" Jan 15 00:45:49.190949 kubelet[2763]: I0115 
00:45:49.190873 2763 kubelet.go:446] "Attempting to sync node with API server" Jan 15 00:45:49.190949 kubelet[2763]: I0115 00:45:49.190937 2763 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 00:45:49.191112 kubelet[2763]: I0115 00:45:49.191011 2763 kubelet.go:352] "Adding apiserver pod source" Jan 15 00:45:49.191112 kubelet[2763]: I0115 00:45:49.191030 2763 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 00:45:49.191986 kubelet[2763]: I0115 00:45:49.191917 2763 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 15 00:45:49.192519 kubelet[2763]: I0115 00:45:49.192451 2763 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 00:45:49.193293 kubelet[2763]: I0115 00:45:49.193224 2763 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 15 00:45:49.193293 kubelet[2763]: I0115 00:45:49.193286 2763 server.go:1287] "Started kubelet" Jan 15 00:45:49.195467 kubelet[2763]: I0115 00:45:49.195247 2763 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 00:45:49.195810 kubelet[2763]: I0115 00:45:49.195561 2763 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 00:45:49.195963 kubelet[2763]: I0115 00:45:49.195935 2763 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 00:45:49.197888 kubelet[2763]: I0115 00:45:49.197850 2763 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 00:45:49.199832 kubelet[2763]: I0115 00:45:49.198710 2763 server.go:479] "Adding debug handlers to kubelet server" Jan 15 00:45:49.203924 kubelet[2763]: I0115 00:45:49.203834 2763 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 00:45:49.207204 kubelet[2763]: E0115 00:45:49.207058 2763 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 00:45:49.209277 kubelet[2763]: I0115 00:45:49.209146 2763 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 15 00:45:49.209277 kubelet[2763]: I0115 00:45:49.209276 2763 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 15 00:45:49.209432 kubelet[2763]: I0115 00:45:49.209389 2763 reconciler.go:26] "Reconciler: start to sync state" Jan 15 00:45:49.210246 kubelet[2763]: I0115 00:45:49.210189 2763 factory.go:221] Registration of the systemd container factory successfully Jan 15 00:45:49.210363 kubelet[2763]: I0115 00:45:49.210304 2763 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 00:45:49.213897 kubelet[2763]: I0115 00:45:49.213836 2763 factory.go:221] Registration of the containerd container factory successfully Jan 15 00:45:49.230094 kubelet[2763]: I0115 00:45:49.229940 2763 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 00:45:49.234556 kubelet[2763]: I0115 00:45:49.234448 2763 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 15 00:45:49.234647 kubelet[2763]: I0115 00:45:49.234604 2763 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 15 00:45:49.234647 kubelet[2763]: I0115 00:45:49.234628 2763 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 15 00:45:49.234647 kubelet[2763]: I0115 00:45:49.234635 2763 kubelet.go:2382] "Starting kubelet main sync loop" Jan 15 00:45:49.234790 kubelet[2763]: E0115 00:45:49.234680 2763 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 00:45:49.270008 kubelet[2763]: I0115 00:45:49.269940 2763 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 15 00:45:49.270008 kubelet[2763]: I0115 00:45:49.269986 2763 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 15 00:45:49.270008 kubelet[2763]: I0115 00:45:49.270009 2763 state_mem.go:36] "Initialized new in-memory state store" Jan 15 00:45:49.270200 kubelet[2763]: I0115 00:45:49.270162 2763 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 15 00:45:49.270200 kubelet[2763]: I0115 00:45:49.270172 2763 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 15 00:45:49.270200 kubelet[2763]: I0115 00:45:49.270188 2763 policy_none.go:49] "None policy: Start" Jan 15 00:45:49.270200 kubelet[2763]: I0115 00:45:49.270197 2763 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 15 00:45:49.270324 kubelet[2763]: I0115 00:45:49.270206 2763 state_mem.go:35] "Initializing new in-memory state store" Jan 15 00:45:49.270324 kubelet[2763]: I0115 00:45:49.270288 2763 state_mem.go:75] "Updated machine memory state" Jan 15 00:45:49.276349 kubelet[2763]: I0115 00:45:49.276299 2763 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 00:45:49.276606 kubelet[2763]: I0115 00:45:49.276464 2763 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 00:45:49.276674 kubelet[2763]: I0115 00:45:49.276588 2763 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 00:45:49.277164 kubelet[2763]: I0115 00:45:49.277088 2763 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 00:45:49.280100 kubelet[2763]: E0115 00:45:49.280037 2763 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 15 00:45:49.336113 kubelet[2763]: I0115 00:45:49.335862 2763 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 15 00:45:49.336113 kubelet[2763]: I0115 00:45:49.336006 2763 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 15 00:45:49.336360 kubelet[2763]: I0115 00:45:49.335862 2763 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 15 00:45:49.348205 kubelet[2763]: E0115 00:45:49.348085 2763 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 15 00:45:49.349422 kubelet[2763]: E0115 00:45:49.349335 2763 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 15 00:45:49.349555 kubelet[2763]: E0115 00:45:49.349433 2763 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 15 00:45:49.387314 kubelet[2763]: I0115 00:45:49.387177 2763 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 00:45:49.401933 kubelet[2763]: I0115 00:45:49.401799 2763 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 15 00:45:49.401933 kubelet[2763]: I0115 00:45:49.401939 2763 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 15 00:45:49.510611 kubelet[2763]: I0115 00:45:49.510496 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:45:49.510611 kubelet[2763]: I0115 00:45:49.510588 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/00407fdca98902c035255c350cd77970-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"00407fdca98902c035255c350cd77970\") " pod="kube-system/kube-apiserver-localhost" Jan 15 00:45:49.510611 kubelet[2763]: I0115 00:45:49.510606 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:45:49.510823 kubelet[2763]: I0115 00:45:49.510623 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:45:49.510823 kubelet[2763]: I0115 00:45:49.510637 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 15 00:45:49.510823 kubelet[2763]: I0115 00:45:49.510651 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:45:49.511201 kubelet[2763]: I0115 00:45:49.510991 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 15 00:45:49.511292 kubelet[2763]: I0115 00:45:49.511236 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/00407fdca98902c035255c350cd77970-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"00407fdca98902c035255c350cd77970\") " pod="kube-system/kube-apiserver-localhost" Jan 15 00:45:49.511292 kubelet[2763]: I0115 00:45:49.511271 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00407fdca98902c035255c350cd77970-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"00407fdca98902c035255c350cd77970\") " pod="kube-system/kube-apiserver-localhost" Jan 15 00:45:49.650171 kubelet[2763]: E0115 00:45:49.649892 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:49.650171 kubelet[2763]: E0115 00:45:49.649888 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:49.650171 kubelet[2763]: E0115 00:45:49.650091 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:50.205830 kubelet[2763]: I0115 00:45:50.203090 2763 apiserver.go:52] "Watching apiserver" Jan 15 00:45:50.255940 kubelet[2763]: I0115 00:45:50.255869 2763 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 15 00:45:50.256405 kubelet[2763]: I0115 00:45:50.256194 2763 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 15 00:45:50.260671 kubelet[2763]: E0115 00:45:50.259634 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:50.277821 kubelet[2763]: E0115 00:45:50.277699 2763 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 15 00:45:50.278180 kubelet[2763]: E0115 00:45:50.278035 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:50.279272 kubelet[2763]: E0115 00:45:50.279250 2763 
kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 15 00:45:50.280109 kubelet[2763]: E0115 00:45:50.280036 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:50.315473 kubelet[2763]: I0115 00:45:50.315417 2763 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 15 00:45:50.322403 kubelet[2763]: I0115 00:45:50.322144 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.322119621 podStartE2EDuration="3.322119621s" podCreationTimestamp="2026-01-15 00:45:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:45:50.321955602 +0000 UTC m=+1.218746462" watchObservedRunningTime="2026-01-15 00:45:50.322119621 +0000 UTC m=+1.218910462" Jan 15 00:45:50.350656 kubelet[2763]: I0115 00:45:50.350457 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.350438671 podStartE2EDuration="3.350438671s" podCreationTimestamp="2026-01-15 00:45:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:45:50.349657673 +0000 UTC m=+1.246448513" watchObservedRunningTime="2026-01-15 00:45:50.350438671 +0000 UTC m=+1.247229532" Jan 15 00:45:51.171824 sudo[1816]: pam_unix(sudo:session): session closed for user root Jan 15 00:45:51.178679 sshd[1815]: Connection closed by 10.0.0.1 port 33340 Jan 15 00:45:51.178388 sshd-session[1812]: pam_unix(sshd:session): session closed for user core Jan 15 00:45:51.186668 systemd[1]: sshd@6-10.0.0.109:22-10.0.0.1:33340.service: Deactivated successfully. Jan 15 00:45:51.192031 systemd[1]: session-7.scope: Deactivated successfully. Jan 15 00:45:51.192700 systemd[1]: session-7.scope: Consumed 5.274s CPU time, 218.2M memory peak. Jan 15 00:45:51.201867 systemd-logind[1618]: Session 7 logged out. Waiting for processes to exit. Jan 15 00:45:51.204407 systemd-logind[1618]: Removed session 7. 
Jan 15 00:45:51.257202 kubelet[2763]: E0115 00:45:51.257069 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:51.257202 kubelet[2763]: E0115 00:45:51.257103 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:51.257202 kubelet[2763]: E0115 00:45:51.257150 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:52.785034 kubelet[2763]: E0115 00:45:52.784840 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:54.086271 kubelet[2763]: I0115 00:45:54.086153 2763 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 15 00:45:54.087374 containerd[1634]: time="2026-01-15T00:45:54.087331005Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 15 00:45:54.087998 kubelet[2763]: I0115 00:45:54.087968 2763 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 15 00:45:55.031871 kubelet[2763]: I0115 00:45:55.030235 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=8.028901162 podStartE2EDuration="8.028901162s" podCreationTimestamp="2026-01-15 00:45:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:45:50.407719681 +0000 UTC m=+1.304510552" watchObservedRunningTime="2026-01-15 00:45:55.028901162 +0000 UTC m=+5.925692012" Jan 15 00:45:55.059681 systemd[1]: Created slice kubepods-besteffort-podf1d6144d_686a_41c9_94ae_ce84e265163a.slice - libcontainer container kubepods-besteffort-podf1d6144d_686a_41c9_94ae_ce84e265163a.slice. Jan 15 00:45:55.120394 kubelet[2763]: I0115 00:45:55.120303 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1d6144d-686a-41c9-94ae-ce84e265163a-xtables-lock\") pod \"kube-proxy-srddd\" (UID: \"f1d6144d-686a-41c9-94ae-ce84e265163a\") " pod="kube-system/kube-proxy-srddd" Jan 15 00:45:55.121287 systemd[1]: Created slice kubepods-burstable-poddce3a957_4451_4f16_a45a_ff75f3606c05.slice - libcontainer container kubepods-burstable-poddce3a957_4451_4f16_a45a_ff75f3606c05.slice. 
Jan 15 00:45:55.122458 kubelet[2763]: I0115 00:45:55.122282 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1d6144d-686a-41c9-94ae-ce84e265163a-lib-modules\") pod \"kube-proxy-srddd\" (UID: \"f1d6144d-686a-41c9-94ae-ce84e265163a\") " pod="kube-system/kube-proxy-srddd" Jan 15 00:45:55.122458 kubelet[2763]: I0115 00:45:55.122341 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/dce3a957-4451-4f16-a45a-ff75f3606c05-cni-plugin\") pod \"kube-flannel-ds-4n286\" (UID: \"dce3a957-4451-4f16-a45a-ff75f3606c05\") " pod="kube-flannel/kube-flannel-ds-4n286" Jan 15 00:45:55.122458 kubelet[2763]: I0115 00:45:55.122356 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dce3a957-4451-4f16-a45a-ff75f3606c05-xtables-lock\") pod \"kube-flannel-ds-4n286\" (UID: \"dce3a957-4451-4f16-a45a-ff75f3606c05\") " pod="kube-flannel/kube-flannel-ds-4n286" Jan 15 00:45:55.122458 kubelet[2763]: I0115 00:45:55.122369 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f1d6144d-686a-41c9-94ae-ce84e265163a-kube-proxy\") pod \"kube-proxy-srddd\" (UID: \"f1d6144d-686a-41c9-94ae-ce84e265163a\") " pod="kube-system/kube-proxy-srddd" Jan 15 00:45:55.122458 kubelet[2763]: I0115 00:45:55.122385 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/dce3a957-4451-4f16-a45a-ff75f3606c05-run\") pod \"kube-flannel-ds-4n286\" (UID: \"dce3a957-4451-4f16-a45a-ff75f3606c05\") " pod="kube-flannel/kube-flannel-ds-4n286" Jan 15 00:45:55.122673 kubelet[2763]: I0115 00:45:55.122398 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/dce3a957-4451-4f16-a45a-ff75f3606c05-cni\") pod \"kube-flannel-ds-4n286\" (UID: \"dce3a957-4451-4f16-a45a-ff75f3606c05\") " pod="kube-flannel/kube-flannel-ds-4n286" Jan 15 00:45:55.122673 kubelet[2763]: I0115 00:45:55.122425 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-658h4\" (UniqueName: \"kubernetes.io/projected/f1d6144d-686a-41c9-94ae-ce84e265163a-kube-api-access-658h4\") pod \"kube-proxy-srddd\" (UID: \"f1d6144d-686a-41c9-94ae-ce84e265163a\") " pod="kube-system/kube-proxy-srddd" Jan 15 00:45:55.122673 kubelet[2763]: I0115 00:45:55.122453 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/dce3a957-4451-4f16-a45a-ff75f3606c05-flannel-cfg\") pod \"kube-flannel-ds-4n286\" (UID: \"dce3a957-4451-4f16-a45a-ff75f3606c05\") " pod="kube-flannel/kube-flannel-ds-4n286" Jan 15 00:45:55.122673 kubelet[2763]: I0115 00:45:55.122468 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjbqz\" (UniqueName: \"kubernetes.io/projected/dce3a957-4451-4f16-a45a-ff75f3606c05-kube-api-access-wjbqz\") pod \"kube-flannel-ds-4n286\" (UID: \"dce3a957-4451-4f16-a45a-ff75f3606c05\") " pod="kube-flannel/kube-flannel-ds-4n286" Jan 15 00:45:55.386002 kubelet[2763]: E0115 00:45:55.385571 2763 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:55.404406 containerd[1634]: time="2026-01-15T00:45:55.404271039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-srddd,Uid:f1d6144d-686a-41c9-94ae-ce84e265163a,Namespace:kube-system,Attempt:0,}" Jan 15 00:45:55.431328 kubelet[2763]: E0115 00:45:55.431163 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:55.433933 containerd[1634]: time="2026-01-15T00:45:55.432221357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-4n286,Uid:dce3a957-4451-4f16-a45a-ff75f3606c05,Namespace:kube-flannel,Attempt:0,}" Jan 15 00:45:55.511481 containerd[1634]: time="2026-01-15T00:45:55.511340657Z" level=info msg="connecting to shim 5ea939eb990ae57239a00715c425d0b7ee86aa018fc0bbdf74d144ccddd103aa" address="unix:///run/containerd/s/d257dced98576c4fb36cf9b9daef396f1f15f86b8a6e25bbe0d3031481500b06" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:45:55.513324 containerd[1634]: time="2026-01-15T00:45:55.513210811Z" level=info msg="connecting to shim 20b749bd0fa254309e9dad611fbe75763d099d2e5cb950ec94545ce64a20a7a7" address="unix:///run/containerd/s/805637f61dcd046d80296aab2005fbef20b25b5ff80fe630f2fffaf0662b9479" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:45:55.627738 systemd[1]: Started cri-containerd-20b749bd0fa254309e9dad611fbe75763d099d2e5cb950ec94545ce64a20a7a7.scope - libcontainer container 20b749bd0fa254309e9dad611fbe75763d099d2e5cb950ec94545ce64a20a7a7. Jan 15 00:45:55.647243 systemd[1]: Started cri-containerd-5ea939eb990ae57239a00715c425d0b7ee86aa018fc0bbdf74d144ccddd103aa.scope - libcontainer container 5ea939eb990ae57239a00715c425d0b7ee86aa018fc0bbdf74d144ccddd103aa. 
Jan 15 00:45:55.741569 containerd[1634]: time="2026-01-15T00:45:55.741394187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-srddd,Uid:f1d6144d-686a-41c9-94ae-ce84e265163a,Namespace:kube-system,Attempt:0,} returns sandbox id \"20b749bd0fa254309e9dad611fbe75763d099d2e5cb950ec94545ce64a20a7a7\"" Jan 15 00:45:55.743893 kubelet[2763]: E0115 00:45:55.743865 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:55.768119 containerd[1634]: time="2026-01-15T00:45:55.767996704Z" level=info msg="CreateContainer within sandbox \"20b749bd0fa254309e9dad611fbe75763d099d2e5cb950ec94545ce64a20a7a7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 15 00:45:55.812646 containerd[1634]: time="2026-01-15T00:45:55.812461400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-4n286,Uid:dce3a957-4451-4f16-a45a-ff75f3606c05,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"5ea939eb990ae57239a00715c425d0b7ee86aa018fc0bbdf74d144ccddd103aa\"" Jan 15 00:45:55.820084 kubelet[2763]: E0115 00:45:55.819937 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:55.822086 containerd[1634]: time="2026-01-15T00:45:55.821878590Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 15 00:45:55.831048 containerd[1634]: time="2026-01-15T00:45:55.830861765Z" level=info msg="Container a8a683fef66f7ed4c8628b6ff75f5256cb03f18a2664a3fd8f8253653fffe8e0: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:45:55.853862 containerd[1634]: time="2026-01-15T00:45:55.852254123Z" level=info msg="CreateContainer within sandbox \"20b749bd0fa254309e9dad611fbe75763d099d2e5cb950ec94545ce64a20a7a7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a8a683fef66f7ed4c8628b6ff75f5256cb03f18a2664a3fd8f8253653fffe8e0\"" Jan 15 00:45:55.855612 containerd[1634]: time="2026-01-15T00:45:55.855427983Z" level=info msg="StartContainer for \"a8a683fef66f7ed4c8628b6ff75f5256cb03f18a2664a3fd8f8253653fffe8e0\"" Jan 15 00:45:55.860828 containerd[1634]: time="2026-01-15T00:45:55.860606363Z" level=info msg="connecting to shim a8a683fef66f7ed4c8628b6ff75f5256cb03f18a2664a3fd8f8253653fffe8e0" address="unix:///run/containerd/s/805637f61dcd046d80296aab2005fbef20b25b5ff80fe630f2fffaf0662b9479" protocol=ttrpc version=3 Jan 15 00:45:55.929455 systemd[1]: Started cri-containerd-a8a683fef66f7ed4c8628b6ff75f5256cb03f18a2664a3fd8f8253653fffe8e0.scope - libcontainer container a8a683fef66f7ed4c8628b6ff75f5256cb03f18a2664a3fd8f8253653fffe8e0. 
Jan 15 00:45:56.074573 containerd[1634]: time="2026-01-15T00:45:56.073899559Z" level=info msg="StartContainer for \"a8a683fef66f7ed4c8628b6ff75f5256cb03f18a2664a3fd8f8253653fffe8e0\" returns successfully" Jan 15 00:45:56.306447 kubelet[2763]: E0115 00:45:56.306014 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:56.344479 kubelet[2763]: I0115 00:45:56.344278 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-srddd" podStartSLOduration=2.344225163 podStartE2EDuration="2.344225163s" podCreationTimestamp="2026-01-15 00:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:45:56.344196581 +0000 UTC m=+7.240987421" watchObservedRunningTime="2026-01-15 00:45:56.344225163 +0000 UTC m=+7.241016013" Jan 15 00:45:56.818742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount178467274.mount: Deactivated successfully. Jan 15 00:45:56.913772 containerd[1634]: time="2026-01-15T00:45:56.913569208Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:45:56.916440 containerd[1634]: time="2026-01-15T00:45:56.916378461Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=2827571" Jan 15 00:45:56.919761 containerd[1634]: time="2026-01-15T00:45:56.919469426Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:45:56.927166 containerd[1634]: time="2026-01-15T00:45:56.926903334Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:45:56.931212 containerd[1634]: time="2026-01-15T00:45:56.931080582Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.109167258s" Jan 15 00:45:56.931212 containerd[1634]: time="2026-01-15T00:45:56.931152045Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 15 00:45:56.939783 containerd[1634]: time="2026-01-15T00:45:56.939622020Z" level=info msg="CreateContainer within sandbox \"5ea939eb990ae57239a00715c425d0b7ee86aa018fc0bbdf74d144ccddd103aa\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 15 00:45:56.965609 containerd[1634]: time="2026-01-15T00:45:56.963641135Z" level=info msg="Container c6a9caffa1c186278d5fb2ef2c77217b8a46ada19d40aa26b14e170803b1bc62: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:45:56.979587 containerd[1634]: time="2026-01-15T00:45:56.979446738Z" level=info msg="CreateContainer within sandbox \"5ea939eb990ae57239a00715c425d0b7ee86aa018fc0bbdf74d144ccddd103aa\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns 
container id \"c6a9caffa1c186278d5fb2ef2c77217b8a46ada19d40aa26b14e170803b1bc62\"" Jan 15 00:45:56.984638 containerd[1634]: time="2026-01-15T00:45:56.982921362Z" level=info msg="StartContainer for \"c6a9caffa1c186278d5fb2ef2c77217b8a46ada19d40aa26b14e170803b1bc62\"" Jan 15 00:45:56.985227 containerd[1634]: time="2026-01-15T00:45:56.985147378Z" level=info msg="connecting to shim c6a9caffa1c186278d5fb2ef2c77217b8a46ada19d40aa26b14e170803b1bc62" address="unix:///run/containerd/s/d257dced98576c4fb36cf9b9daef396f1f15f86b8a6e25bbe0d3031481500b06" protocol=ttrpc version=3 Jan 15 00:45:57.032825 systemd[1]: Started cri-containerd-c6a9caffa1c186278d5fb2ef2c77217b8a46ada19d40aa26b14e170803b1bc62.scope - libcontainer container c6a9caffa1c186278d5fb2ef2c77217b8a46ada19d40aa26b14e170803b1bc62. Jan 15 00:45:57.115212 systemd[1]: cri-containerd-c6a9caffa1c186278d5fb2ef2c77217b8a46ada19d40aa26b14e170803b1bc62.scope: Deactivated successfully. Jan 15 00:45:57.120274 containerd[1634]: time="2026-01-15T00:45:57.120190150Z" level=info msg="StartContainer for \"c6a9caffa1c186278d5fb2ef2c77217b8a46ada19d40aa26b14e170803b1bc62\" returns successfully" Jan 15 00:45:57.126857 containerd[1634]: time="2026-01-15T00:45:57.126791899Z" level=info msg="received container exit event container_id:\"c6a9caffa1c186278d5fb2ef2c77217b8a46ada19d40aa26b14e170803b1bc62\" id:\"c6a9caffa1c186278d5fb2ef2c77217b8a46ada19d40aa26b14e170803b1bc62\" pid:3117 exited_at:{seconds:1768437957 nanos:124496731}" Jan 15 00:45:57.333151 kubelet[2763]: E0115 00:45:57.332984 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:57.335136 kubelet[2763]: E0115 00:45:57.332995 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:45:57.336334 containerd[1634]: time="2026-01-15T00:45:57.336210703Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 15 00:45:58.429063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3486391577.mount: Deactivated successfully. Jan 15 00:46:00.076402 kubelet[2763]: E0115 00:46:00.076258 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:46:00.207480 update_engine[1620]: I20260115 00:46:00.207237 1620 update_attempter.cc:509] Updating boot flags... 
Jan 15 00:46:00.365134 kubelet[2763]: E0115 00:46:00.364827 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:46:00.385454 kubelet[2763]: E0115 00:46:00.384751 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:46:01.738989 containerd[1634]: time="2026-01-15T00:46:01.738928071Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:46:01.740411 containerd[1634]: time="2026-01-15T00:46:01.740269291Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=18277016" Jan 15 00:46:01.742093 containerd[1634]: time="2026-01-15T00:46:01.741962557Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:46:01.744856 containerd[1634]: time="2026-01-15T00:46:01.744767483Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:46:01.745930 containerd[1634]: time="2026-01-15T00:46:01.745797428Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 4.4095189s" Jan 15 00:46:01.745930 containerd[1634]: time="2026-01-15T00:46:01.745827645Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 15 00:46:01.748978 containerd[1634]: time="2026-01-15T00:46:01.748926451Z" level=info msg="CreateContainer within sandbox \"5ea939eb990ae57239a00715c425d0b7ee86aa018fc0bbdf74d144ccddd103aa\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 15 00:46:01.758921 containerd[1634]: time="2026-01-15T00:46:01.758803578Z" level=info msg="Container 99d538fe3fa1bda5687b3d77e15d1868117897ce17d8d90d91c14fe5bfa9a714: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:46:01.767059 containerd[1634]: time="2026-01-15T00:46:01.766937136Z" level=info msg="CreateContainer within sandbox \"5ea939eb990ae57239a00715c425d0b7ee86aa018fc0bbdf74d144ccddd103aa\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"99d538fe3fa1bda5687b3d77e15d1868117897ce17d8d90d91c14fe5bfa9a714\"" Jan 15 00:46:01.769434 containerd[1634]: time="2026-01-15T00:46:01.767606651Z" level=info msg="StartContainer for \"99d538fe3fa1bda5687b3d77e15d1868117897ce17d8d90d91c14fe5bfa9a714\"" Jan 15 00:46:01.769434 containerd[1634]: time="2026-01-15T00:46:01.769045909Z" level=info msg="connecting to shim 99d538fe3fa1bda5687b3d77e15d1868117897ce17d8d90d91c14fe5bfa9a714" address="unix:///run/containerd/s/d257dced98576c4fb36cf9b9daef396f1f15f86b8a6e25bbe0d3031481500b06" protocol=ttrpc version=3 Jan 15 00:46:01.802771 systemd[1]: Started cri-containerd-99d538fe3fa1bda5687b3d77e15d1868117897ce17d8d90d91c14fe5bfa9a714.scope - libcontainer container 
99d538fe3fa1bda5687b3d77e15d1868117897ce17d8d90d91c14fe5bfa9a714. Jan 15 00:46:01.860147 systemd[1]: cri-containerd-99d538fe3fa1bda5687b3d77e15d1868117897ce17d8d90d91c14fe5bfa9a714.scope: Deactivated successfully. Jan 15 00:46:01.862914 containerd[1634]: time="2026-01-15T00:46:01.862815437Z" level=info msg="received container exit event container_id:\"99d538fe3fa1bda5687b3d77e15d1868117897ce17d8d90d91c14fe5bfa9a714\" id:\"99d538fe3fa1bda5687b3d77e15d1868117897ce17d8d90d91c14fe5bfa9a714\" pid:3203 exited_at:{seconds:1768437961 nanos:860789497}" Jan 15 00:46:01.869297 kubelet[2763]: I0115 00:46:01.869225 2763 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 15 00:46:01.875117 containerd[1634]: time="2026-01-15T00:46:01.875028223Z" level=info msg="StartContainer for \"99d538fe3fa1bda5687b3d77e15d1868117897ce17d8d90d91c14fe5bfa9a714\" returns successfully" Jan 15 00:46:01.906737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99d538fe3fa1bda5687b3d77e15d1868117897ce17d8d90d91c14fe5bfa9a714-rootfs.mount: Deactivated successfully. Jan 15 00:46:01.922951 systemd[1]: Created slice kubepods-burstable-pod0ffea7e8_ef78_457a_aa06_9435b83de225.slice - libcontainer container kubepods-burstable-pod0ffea7e8_ef78_457a_aa06_9435b83de225.slice. Jan 15 00:46:01.932178 systemd[1]: Created slice kubepods-burstable-podc87b7d23_9427_4cf2_92f5_d8a7e7b47ff4.slice - libcontainer container kubepods-burstable-podc87b7d23_9427_4cf2_92f5_d8a7e7b47ff4.slice. Jan 15 00:46:02.036050 kubelet[2763]: I0115 00:46:02.035824 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkqk4\" (UniqueName: \"kubernetes.io/projected/0ffea7e8-ef78-457a-aa06-9435b83de225-kube-api-access-hkqk4\") pod \"coredns-668d6bf9bc-4gdg8\" (UID: \"0ffea7e8-ef78-457a-aa06-9435b83de225\") " pod="kube-system/coredns-668d6bf9bc-4gdg8" Jan 15 00:46:02.036050 kubelet[2763]: I0115 00:46:02.035958 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ffea7e8-ef78-457a-aa06-9435b83de225-config-volume\") pod \"coredns-668d6bf9bc-4gdg8\" (UID: \"0ffea7e8-ef78-457a-aa06-9435b83de225\") " pod="kube-system/coredns-668d6bf9bc-4gdg8" Jan 15 00:46:02.036050 kubelet[2763]: I0115 00:46:02.035989 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c87b7d23-9427-4cf2-92f5-d8a7e7b47ff4-config-volume\") pod \"coredns-668d6bf9bc-7sqbr\" (UID: \"c87b7d23-9427-4cf2-92f5-d8a7e7b47ff4\") " pod="kube-system/coredns-668d6bf9bc-7sqbr" Jan 15 00:46:02.036050 kubelet[2763]: I0115 00:46:02.036014 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8547q\" (UniqueName: \"kubernetes.io/projected/c87b7d23-9427-4cf2-92f5-d8a7e7b47ff4-kube-api-access-8547q\") pod \"coredns-668d6bf9bc-7sqbr\" (UID: \"c87b7d23-9427-4cf2-92f5-d8a7e7b47ff4\") " pod="kube-system/coredns-668d6bf9bc-7sqbr" Jan 15 00:46:02.238361 kubelet[2763]: E0115 00:46:02.238184 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:46:02.238361 kubelet[2763]: E0115 00:46:02.238286 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:46:02.239170 containerd[1634]: time="2026-01-15T00:46:02.239028853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7sqbr,Uid:c87b7d23-9427-4cf2-92f5-d8a7e7b47ff4,Namespace:kube-system,Attempt:0,}" Jan 15 00:46:02.239704 containerd[1634]: time="2026-01-15T00:46:02.239588714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4gdg8,Uid:0ffea7e8-ef78-457a-aa06-9435b83de225,Namespace:kube-system,Attempt:0,}" Jan 15 00:46:02.288596 containerd[1634]: time="2026-01-15T00:46:02.288215931Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4gdg8,Uid:0ffea7e8-ef78-457a-aa06-9435b83de225,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f8207794f53be24aa41a458e10b9f5569c378f18dad1de70ae66eceaebef90d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 15 00:46:02.288975 kubelet[2763]: E0115 00:46:02.288815 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f8207794f53be24aa41a458e10b9f5569c378f18dad1de70ae66eceaebef90d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 15 00:46:02.289048 kubelet[2763]: E0115 00:46:02.288975 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f8207794f53be24aa41a458e10b9f5569c378f18dad1de70ae66eceaebef90d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-4gdg8" Jan 15 00:46:02.289091 kubelet[2763]: E0115 00:46:02.289045 2763 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f8207794f53be24aa41a458e10b9f5569c378f18dad1de70ae66eceaebef90d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-4gdg8" Jan 15 00:46:02.289207 kubelet[2763]: E0115 00:46:02.289117 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4gdg8_kube-system(0ffea7e8-ef78-457a-aa06-9435b83de225)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4gdg8_kube-system(0ffea7e8-ef78-457a-aa06-9435b83de225)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f8207794f53be24aa41a458e10b9f5569c378f18dad1de70ae66eceaebef90d\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-4gdg8" podUID="0ffea7e8-ef78-457a-aa06-9435b83de225" Jan 15 00:46:02.291391 containerd[1634]: time="2026-01-15T00:46:02.291347470Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7sqbr,Uid:c87b7d23-9427-4cf2-92f5-d8a7e7b47ff4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b130090c726744110059cca0783f340cf8b18799344c8f537f52ce396b2a34ed\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: 
no such file or directory" Jan 15 00:46:02.291806 kubelet[2763]: E0115 00:46:02.291747 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b130090c726744110059cca0783f340cf8b18799344c8f537f52ce396b2a34ed\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 15 00:46:02.291867 kubelet[2763]: E0115 00:46:02.291824 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b130090c726744110059cca0783f340cf8b18799344c8f537f52ce396b2a34ed\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-7sqbr" Jan 15 00:46:02.291867 kubelet[2763]: E0115 00:46:02.291847 2763 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b130090c726744110059cca0783f340cf8b18799344c8f537f52ce396b2a34ed\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-7sqbr" Jan 15 00:46:02.292012 kubelet[2763]: E0115 00:46:02.291889 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7sqbr_kube-system(c87b7d23-9427-4cf2-92f5-d8a7e7b47ff4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7sqbr_kube-system(c87b7d23-9427-4cf2-92f5-d8a7e7b47ff4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b130090c726744110059cca0783f340cf8b18799344c8f537f52ce396b2a34ed\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-7sqbr" podUID="c87b7d23-9427-4cf2-92f5-d8a7e7b47ff4" Jan 15 00:46:02.373587 kubelet[2763]: E0115 00:46:02.373441 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:46:02.378633 containerd[1634]: time="2026-01-15T00:46:02.378480704Z" level=info msg="CreateContainer within sandbox \"5ea939eb990ae57239a00715c425d0b7ee86aa018fc0bbdf74d144ccddd103aa\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 15 00:46:02.393366 containerd[1634]: time="2026-01-15T00:46:02.393292600Z" level=info msg="Container 21e4ac202bf222a452ab6adfea808644e46d5ddc56a04ea23f7094e4ccb6287f: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:46:02.401057 containerd[1634]: time="2026-01-15T00:46:02.400979480Z" level=info msg="CreateContainer within sandbox \"5ea939eb990ae57239a00715c425d0b7ee86aa018fc0bbdf74d144ccddd103aa\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"21e4ac202bf222a452ab6adfea808644e46d5ddc56a04ea23f7094e4ccb6287f\"" Jan 15 00:46:02.402216 containerd[1634]: time="2026-01-15T00:46:02.402133359Z" level=info msg="StartContainer for \"21e4ac202bf222a452ab6adfea808644e46d5ddc56a04ea23f7094e4ccb6287f\"" Jan 15 00:46:02.403470 containerd[1634]: time="2026-01-15T00:46:02.403364230Z" level=info msg="connecting to shim 21e4ac202bf222a452ab6adfea808644e46d5ddc56a04ea23f7094e4ccb6287f" address="unix:///run/containerd/s/d257dced98576c4fb36cf9b9daef396f1f15f86b8a6e25bbe0d3031481500b06" 
protocol=ttrpc version=3 Jan 15 00:46:02.433923 systemd[1]: Started cri-containerd-21e4ac202bf222a452ab6adfea808644e46d5ddc56a04ea23f7094e4ccb6287f.scope - libcontainer container 21e4ac202bf222a452ab6adfea808644e46d5ddc56a04ea23f7094e4ccb6287f. Jan 15 00:46:02.479420 containerd[1634]: time="2026-01-15T00:46:02.479264110Z" level=info msg="StartContainer for \"21e4ac202bf222a452ab6adfea808644e46d5ddc56a04ea23f7094e4ccb6287f\" returns successfully" Jan 15 00:46:02.792795 kubelet[2763]: E0115 00:46:02.792585 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:46:03.379233 kubelet[2763]: E0115 00:46:03.379092 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:46:03.397366 kubelet[2763]: I0115 00:46:03.397118 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-4n286" podStartSLOduration=2.471367147 podStartE2EDuration="8.397104268s" podCreationTimestamp="2026-01-15 00:45:55 +0000 UTC" firstStartedPulling="2026-01-15 00:45:55.821137664 +0000 UTC m=+6.717928503" lastFinishedPulling="2026-01-15 00:46:01.746874783 +0000 UTC m=+12.643665624" observedRunningTime="2026-01-15 00:46:03.396127371 +0000 UTC m=+14.292918211" watchObservedRunningTime="2026-01-15 00:46:03.397104268 +0000 UTC m=+14.293895108" Jan 15 00:46:03.552960 systemd-networkd[1549]: flannel.1: Link UP Jan 15 00:46:03.552971 systemd-networkd[1549]: flannel.1: Gained carrier Jan 15 00:46:04.382189 kubelet[2763]: E0115 00:46:04.382099 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:46:05.011879 systemd-networkd[1549]: flannel.1: Gained IPv6LL Jan 15 00:46:14.348030 kubelet[2763]: E0115 00:46:14.347776 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:46:14.376063 containerd[1634]: time="2026-01-15T00:46:14.349299875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4gdg8,Uid:0ffea7e8-ef78-457a-aa06-9435b83de225,Namespace:kube-system,Attempt:0,}" Jan 15 00:46:14.420344 systemd-networkd[1549]: cni0: Link UP Jan 15 00:46:14.420357 systemd-networkd[1549]: cni0: Gained carrier Jan 15 00:46:14.441095 systemd-networkd[1549]: cni0: Lost carrier Jan 15 00:46:14.443490 systemd-networkd[1549]: veth6bd41e46: Link UP Jan 15 00:46:14.456318 kernel: cni0: port 1(veth6bd41e46) entered blocking state Jan 15 00:46:14.456686 kernel: cni0: port 1(veth6bd41e46) entered disabled state Jan 15 00:46:14.456739 kernel: veth6bd41e46: entered allmulticast mode Jan 15 00:46:14.460692 kernel: veth6bd41e46: entered promiscuous mode Jan 15 00:46:14.482806 kernel: cni0: port 1(veth6bd41e46) entered blocking state Jan 15 00:46:14.483046 kernel: cni0: port 1(veth6bd41e46) entered forwarding state Jan 15 00:46:14.483143 systemd-networkd[1549]: veth6bd41e46: Gained carrier Jan 15 00:46:14.483902 systemd-networkd[1549]: cni0: Gained carrier Jan 15 00:46:14.501975 containerd[1634]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface 
{}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000020938), "name":"cbr0", "type":"bridge"} Jan 15 00:46:14.501975 containerd[1634]: delegateAdd: netconf sent to delegate plugin: Jan 15 00:46:14.554940 containerd[1634]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-15T00:46:14.554895139Z" level=info msg="connecting to shim 3afb283ca59bba65abf6f7a2eeba2cdae62ae03746ff907273b66b6ead7d580b" address="unix:///run/containerd/s/4c2cab336dcbd6023119900abe7172fa642c11434b79fada0612949d3e01b7f0" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:46:14.614873 systemd[1]: Started cri-containerd-3afb283ca59bba65abf6f7a2eeba2cdae62ae03746ff907273b66b6ead7d580b.scope - libcontainer container 3afb283ca59bba65abf6f7a2eeba2cdae62ae03746ff907273b66b6ead7d580b. Jan 15 00:46:14.640297 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 00:46:14.699346 containerd[1634]: time="2026-01-15T00:46:14.699248182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4gdg8,Uid:0ffea7e8-ef78-457a-aa06-9435b83de225,Namespace:kube-system,Attempt:0,} returns sandbox id \"3afb283ca59bba65abf6f7a2eeba2cdae62ae03746ff907273b66b6ead7d580b\"" Jan 15 00:46:14.700821 kubelet[2763]: E0115 00:46:14.700706 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:46:14.704655 containerd[1634]: time="2026-01-15T00:46:14.704429034Z" level=info msg="CreateContainer within sandbox \"3afb283ca59bba65abf6f7a2eeba2cdae62ae03746ff907273b66b6ead7d580b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 00:46:14.720021 containerd[1634]: time="2026-01-15T00:46:14.719897057Z" level=info msg="Container 6e07068ec627ef8efe99418eb882c6aa66041b93cef71f689098f57dbedb30d7: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:46:14.722158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount419726775.mount: Deactivated successfully. Jan 15 00:46:14.736353 containerd[1634]: time="2026-01-15T00:46:14.736191118Z" level=info msg="CreateContainer within sandbox \"3afb283ca59bba65abf6f7a2eeba2cdae62ae03746ff907273b66b6ead7d580b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6e07068ec627ef8efe99418eb882c6aa66041b93cef71f689098f57dbedb30d7\"" Jan 15 00:46:14.737556 containerd[1634]: time="2026-01-15T00:46:14.737072392Z" level=info msg="StartContainer for \"6e07068ec627ef8efe99418eb882c6aa66041b93cef71f689098f57dbedb30d7\"" Jan 15 00:46:14.738786 containerd[1634]: time="2026-01-15T00:46:14.738717318Z" level=info msg="connecting to shim 6e07068ec627ef8efe99418eb882c6aa66041b93cef71f689098f57dbedb30d7" address="unix:///run/containerd/s/4c2cab336dcbd6023119900abe7172fa642c11434b79fada0612949d3e01b7f0" protocol=ttrpc version=3 Jan 15 00:46:14.793185 systemd[1]: Started cri-containerd-6e07068ec627ef8efe99418eb882c6aa66041b93cef71f689098f57dbedb30d7.scope - libcontainer container 6e07068ec627ef8efe99418eb882c6aa66041b93cef71f689098f57dbedb30d7. 
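The earlier RunPodSandbox failures ("loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory") clear up in this stretch of the log: once the kube-flannel container has run, the flannel CNI plugin can read its subnet file and emit the bridge delegate netconf dumped above. A minimal Go sketch of that lookup under the commonly documented KEY=value subnet.env layout; it is not the flannel plugin's source, and the values in the comments are inferred from the delegate config in this log (192.168.0.0/17 network, 192.168.0.0/24 node subnet, MTU 1450), not read from the node.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// loadFlannelSubnetEnv is a stand-in for the lookup named in the sandbox
// errors above: it reads /run/flannel/subnet.env and fails with the same
// "no such file or directory" wrapping while the file does not exist yet.
// Expected contents (inferred from the delegate netconf in this log, not
// read from the node) would look like:
//   FLANNEL_NETWORK=192.168.0.0/17
//   FLANNEL_SUBNET=192.168.0.1/24
//   FLANNEL_MTU=1450
//   FLANNEL_IPMASQ=false
func loadFlannelSubnetEnv(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, fmt.Errorf("loadFlannelSubnetEnv failed: %w", err)
	}
	defer f.Close()

	env := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if key, value, ok := strings.Cut(sc.Text(), "="); ok {
			env[key] = value
		}
	}
	return env, sc.Err()
}

func main() {
	env, err := loadFlannelSubnetEnv("/run/flannel/subnet.env")
	if err != nil {
		fmt.Fprintln(os.Stderr, err) // matches the error string inside the RunPodSandbox failures
		os.Exit(1)
	}
	fmt.Println(env)
}
```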
Jan 15 00:46:14.877746 containerd[1634]: time="2026-01-15T00:46:14.876743239Z" level=info msg="StartContainer for \"6e07068ec627ef8efe99418eb882c6aa66041b93cef71f689098f57dbedb30d7\" returns successfully" Jan 15 00:46:15.237939 kubelet[2763]: E0115 00:46:15.237837 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:46:15.238977 containerd[1634]: time="2026-01-15T00:46:15.238811777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7sqbr,Uid:c87b7d23-9427-4cf2-92f5-d8a7e7b47ff4,Namespace:kube-system,Attempt:0,}" Jan 15 00:46:15.261460 systemd-networkd[1549]: vetha38b6299: Link UP Jan 15 00:46:15.267597 kernel: cni0: port 2(vetha38b6299) entered blocking state Jan 15 00:46:15.267747 kernel: cni0: port 2(vetha38b6299) entered disabled state Jan 15 00:46:15.270145 kernel: vetha38b6299: entered allmulticast mode Jan 15 00:46:15.272759 kernel: vetha38b6299: entered promiscuous mode Jan 15 00:46:15.284605 kernel: cni0: port 2(vetha38b6299) entered blocking state Jan 15 00:46:15.284744 kernel: cni0: port 2(vetha38b6299) entered forwarding state Jan 15 00:46:15.284767 systemd-networkd[1549]: vetha38b6299: Gained carrier Jan 15 00:46:15.288930 containerd[1634]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00011a8e8), "name":"cbr0", "type":"bridge"} Jan 15 00:46:15.288930 containerd[1634]: delegateAdd: netconf sent to delegate plugin: Jan 15 00:46:15.336598 containerd[1634]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-15T00:46:15.336343755Z" level=info msg="connecting to shim af50bcbdd804745bb8cab7a16bbf75b70831677b1e994b3f06d0288db1995697" address="unix:///run/containerd/s/eef265ded5e20e2794d0e3aef91b1ca0f1469348a04bef9055268d29f9c23409" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:46:15.372764 systemd[1]: Started cri-containerd-af50bcbdd804745bb8cab7a16bbf75b70831677b1e994b3f06d0288db1995697.scope - libcontainer container af50bcbdd804745bb8cab7a16bbf75b70831677b1e994b3f06d0288db1995697. 
Jan 15 00:46:15.403582 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 00:46:15.451769 containerd[1634]: time="2026-01-15T00:46:15.451373178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7sqbr,Uid:c87b7d23-9427-4cf2-92f5-d8a7e7b47ff4,Namespace:kube-system,Attempt:0,} returns sandbox id \"af50bcbdd804745bb8cab7a16bbf75b70831677b1e994b3f06d0288db1995697\"" Jan 15 00:46:15.452907 kubelet[2763]: E0115 00:46:15.452743 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:46:15.458133 containerd[1634]: time="2026-01-15T00:46:15.457923846Z" level=info msg="CreateContainer within sandbox \"af50bcbdd804745bb8cab7a16bbf75b70831677b1e994b3f06d0288db1995697\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 00:46:15.462211 kubelet[2763]: E0115 00:46:15.462071 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:46:15.478247 containerd[1634]: time="2026-01-15T00:46:15.477334892Z" level=info msg="Container f2f22585c7863f1334cab1ba2b1baf1a363bb493f184359d125e3b35857cf8bb: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:46:15.499943 containerd[1634]: time="2026-01-15T00:46:15.498895664Z" level=info msg="CreateContainer within sandbox \"af50bcbdd804745bb8cab7a16bbf75b70831677b1e994b3f06d0288db1995697\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f2f22585c7863f1334cab1ba2b1baf1a363bb493f184359d125e3b35857cf8bb\"" Jan 15 00:46:15.501006 containerd[1634]: time="2026-01-15T00:46:15.500911392Z" level=info msg="StartContainer for \"f2f22585c7863f1334cab1ba2b1baf1a363bb493f184359d125e3b35857cf8bb\"" Jan 15 00:46:15.502471 containerd[1634]: time="2026-01-15T00:46:15.502259827Z" level=info msg="connecting to shim f2f22585c7863f1334cab1ba2b1baf1a363bb493f184359d125e3b35857cf8bb" address="unix:///run/containerd/s/eef265ded5e20e2794d0e3aef91b1ca0f1469348a04bef9055268d29f9c23409" protocol=ttrpc version=3 Jan 15 00:46:15.545573 kubelet[2763]: I0115 00:46:15.545199 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4gdg8" podStartSLOduration=20.545179219 podStartE2EDuration="20.545179219s" podCreationTimestamp="2026-01-15 00:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:46:15.485417577 +0000 UTC m=+26.382208417" watchObservedRunningTime="2026-01-15 00:46:15.545179219 +0000 UTC m=+26.441970060" Jan 15 00:46:15.545851 systemd[1]: Started cri-containerd-f2f22585c7863f1334cab1ba2b1baf1a363bb493f184359d125e3b35857cf8bb.scope - libcontainer container f2f22585c7863f1334cab1ba2b1baf1a363bb493f184359d125e3b35857cf8bb. 
Jan 15 00:46:15.616890 containerd[1634]: time="2026-01-15T00:46:15.616790541Z" level=info msg="StartContainer for \"f2f22585c7863f1334cab1ba2b1baf1a363bb493f184359d125e3b35857cf8bb\" returns successfully" Jan 15 00:46:15.636877 systemd-networkd[1549]: veth6bd41e46: Gained IPv6LL Jan 15 00:46:16.276022 systemd-networkd[1549]: cni0: Gained IPv6LL Jan 15 00:46:16.403901 systemd-networkd[1549]: vetha38b6299: Gained IPv6LL Jan 15 00:46:16.468256 kubelet[2763]: E0115 00:46:16.468223 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:46:16.468256 kubelet[2763]: E0115 00:46:16.468274 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:46:17.470907 kubelet[2763]: E0115 00:46:17.470097 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:46:17.470907 kubelet[2763]: E0115 00:46:17.470767 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:46:17.488147 kubelet[2763]: I0115 00:46:17.488055 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7sqbr" podStartSLOduration=22.488040163 podStartE2EDuration="22.488040163s" podCreationTimestamp="2026-01-15 00:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:46:16.485717526 +0000 UTC m=+27.382508366" watchObservedRunningTime="2026-01-15 00:46:17.488040163 +0000 UTC m=+28.384831013" Jan 15 00:46:18.473288 kubelet[2763]: E0115 00:46:18.473133 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:46:19.476101 kubelet[2763]: E0115 00:46:19.476009 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:47:08.236597 kubelet[2763]: E0115 00:47:08.236383 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:47:10.236326 kubelet[2763]: E0115 00:47:10.236233 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:47:16.236411 kubelet[2763]: E0115 00:47:16.236300 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:47:16.879177 systemd[1]: Started sshd@7-10.0.0.109:22-10.0.0.1:42262.service - OpenSSH per-connection server daemon (10.0.0.1:42262). 
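The pod_startup_latency_tracker entries above are internally consistent: podStartE2EDuration is the gap between podCreationTimestamp and the observed running time, and podStartSLOduration additionally subtracts the image-pull window, which is why the coredns pods (zero pull timestamps) report an SLO duration equal to E2E while kube-flannel-ds-4n286 reports 2.47s against an 8.40s E2E. A small Go check of that arithmetic using the kube-flannel timestamps copied from this log; the last couple of nanoseconds differ from the logged 2.471367147s only by rounding, and the treatment of watchObservedRunningTime as the end point is inferred from the numbers, not from kubelet's source.

```go
package main

import (
	"fmt"
	"time"
)

// Layout matching the "+0000 UTC" timestamp form used in the log lines.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(ts string) time.Time {
	t, err := time.Parse(layout, ts)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the kube-flannel-ds-4n286 pod_startup_latency_tracker entry.
	created := mustParse("2026-01-15 00:45:55 +0000 UTC")
	firstPull := mustParse("2026-01-15 00:45:55.821137664 +0000 UTC")
	lastPull := mustParse("2026-01-15 00:46:01.746874783 +0000 UTC")
	running := mustParse("2026-01-15 00:46:03.397104268 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)          // podStartE2EDuration ≈ 8.397104268s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration ≈ 2.471367149s (log rounds to ...147)
	fmt.Println("e2e:", e2e, "slo:", slo)
}
```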
Jan 15 00:47:16.951875 sshd[3963]: Accepted publickey for core from 10.0.0.1 port 42262 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:47:16.953687 sshd-session[3963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:47:16.961321 systemd-logind[1618]: New session 8 of user core. Jan 15 00:47:16.967806 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 15 00:47:17.074704 sshd[3966]: Connection closed by 10.0.0.1 port 42262 Jan 15 00:47:17.075078 sshd-session[3963]: pam_unix(sshd:session): session closed for user core Jan 15 00:47:17.081582 systemd[1]: sshd@7-10.0.0.109:22-10.0.0.1:42262.service: Deactivated successfully. Jan 15 00:47:17.084404 systemd[1]: session-8.scope: Deactivated successfully. Jan 15 00:47:17.085981 systemd-logind[1618]: Session 8 logged out. Waiting for processes to exit. Jan 15 00:47:17.087941 systemd-logind[1618]: Removed session 8. Jan 15 00:47:20.236799 kubelet[2763]: E0115 00:47:20.236637 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:47:22.091395 systemd[1]: Started sshd@8-10.0.0.109:22-10.0.0.1:42268.service - OpenSSH per-connection server daemon (10.0.0.1:42268). Jan 15 00:47:22.157373 sshd[4002]: Accepted publickey for core from 10.0.0.1 port 42268 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:47:22.159335 sshd-session[4002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:47:22.166153 systemd-logind[1618]: New session 9 of user core. Jan 15 00:47:22.175837 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 15 00:47:22.235576 kubelet[2763]: E0115 00:47:22.235377 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:47:22.267859 sshd[4005]: Connection closed by 10.0.0.1 port 42268 Jan 15 00:47:22.268261 sshd-session[4002]: pam_unix(sshd:session): session closed for user core Jan 15 00:47:22.274744 systemd[1]: sshd@8-10.0.0.109:22-10.0.0.1:42268.service: Deactivated successfully. Jan 15 00:47:22.277470 systemd[1]: session-9.scope: Deactivated successfully. Jan 15 00:47:22.279280 systemd-logind[1618]: Session 9 logged out. Waiting for processes to exit. Jan 15 00:47:22.281017 systemd-logind[1618]: Removed session 9. Jan 15 00:47:27.281890 systemd[1]: Started sshd@9-10.0.0.109:22-10.0.0.1:45116.service - OpenSSH per-connection server daemon (10.0.0.1:45116). Jan 15 00:47:27.352377 sshd[4042]: Accepted publickey for core from 10.0.0.1 port 45116 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:47:27.354330 sshd-session[4042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:47:27.361047 systemd-logind[1618]: New session 10 of user core. Jan 15 00:47:27.368837 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 15 00:47:27.465923 sshd[4045]: Connection closed by 10.0.0.1 port 45116 Jan 15 00:47:27.466303 sshd-session[4042]: pam_unix(sshd:session): session closed for user core Jan 15 00:47:27.472727 systemd[1]: sshd@9-10.0.0.109:22-10.0.0.1:45116.service: Deactivated successfully. Jan 15 00:47:27.475216 systemd[1]: session-10.scope: Deactivated successfully. Jan 15 00:47:27.476928 systemd-logind[1618]: Session 10 logged out. Waiting for processes to exit. 
Jan 15 00:47:27.479082 systemd-logind[1618]: Removed session 10. Jan 15 00:47:31.235612 kubelet[2763]: E0115 00:47:31.235576 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:47:32.484926 systemd[1]: Started sshd@10-10.0.0.109:22-10.0.0.1:34392.service - OpenSSH per-connection server daemon (10.0.0.1:34392). Jan 15 00:47:32.555338 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 34392 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:47:32.557118 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:47:32.563854 systemd-logind[1618]: New session 11 of user core. Jan 15 00:47:32.577873 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 15 00:47:32.681820 sshd[4083]: Connection closed by 10.0.0.1 port 34392 Jan 15 00:47:32.683842 sshd-session[4080]: pam_unix(sshd:session): session closed for user core Jan 15 00:47:32.691722 systemd[1]: sshd@10-10.0.0.109:22-10.0.0.1:34392.service: Deactivated successfully. Jan 15 00:47:32.694248 systemd[1]: session-11.scope: Deactivated successfully. Jan 15 00:47:32.696303 systemd-logind[1618]: Session 11 logged out. Waiting for processes to exit. Jan 15 00:47:32.701134 systemd[1]: Started sshd@11-10.0.0.109:22-10.0.0.1:34394.service - OpenSSH per-connection server daemon (10.0.0.1:34394). Jan 15 00:47:32.702349 systemd-logind[1618]: Removed session 11. Jan 15 00:47:32.772709 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 34394 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:47:32.774642 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:47:32.782115 systemd-logind[1618]: New session 12 of user core. Jan 15 00:47:32.791943 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 15 00:47:32.934615 sshd[4100]: Connection closed by 10.0.0.1 port 34394 Jan 15 00:47:32.935077 sshd-session[4097]: pam_unix(sshd:session): session closed for user core Jan 15 00:47:32.950732 systemd[1]: sshd@11-10.0.0.109:22-10.0.0.1:34394.service: Deactivated successfully. Jan 15 00:47:32.954019 systemd[1]: session-12.scope: Deactivated successfully. Jan 15 00:47:32.955152 systemd-logind[1618]: Session 12 logged out. Waiting for processes to exit. Jan 15 00:47:32.964135 systemd[1]: Started sshd@12-10.0.0.109:22-10.0.0.1:34404.service - OpenSSH per-connection server daemon (10.0.0.1:34404). Jan 15 00:47:32.967901 systemd-logind[1618]: Removed session 12. Jan 15 00:47:33.052290 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 34404 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:47:33.053339 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:47:33.059981 systemd-logind[1618]: New session 13 of user core. Jan 15 00:47:33.078191 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 15 00:47:33.174123 sshd[4115]: Connection closed by 10.0.0.1 port 34404 Jan 15 00:47:33.174621 sshd-session[4112]: pam_unix(sshd:session): session closed for user core Jan 15 00:47:33.179961 systemd[1]: sshd@12-10.0.0.109:22-10.0.0.1:34404.service: Deactivated successfully. Jan 15 00:47:33.182812 systemd[1]: session-13.scope: Deactivated successfully. Jan 15 00:47:33.184270 systemd-logind[1618]: Session 13 logged out. Waiting for processes to exit. 
Jan 15 00:47:33.186005 systemd-logind[1618]: Removed session 13. Jan 15 00:47:38.198992 systemd[1]: Started sshd@13-10.0.0.109:22-10.0.0.1:34412.service - OpenSSH per-connection server daemon (10.0.0.1:34412). Jan 15 00:47:38.268372 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 34412 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:47:38.270625 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:47:38.278342 systemd-logind[1618]: New session 14 of user core. Jan 15 00:47:38.284811 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 15 00:47:38.394837 sshd[4152]: Connection closed by 10.0.0.1 port 34412 Jan 15 00:47:38.395383 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Jan 15 00:47:38.400111 systemd[1]: sshd@13-10.0.0.109:22-10.0.0.1:34412.service: Deactivated successfully. Jan 15 00:47:38.403044 systemd[1]: session-14.scope: Deactivated successfully. Jan 15 00:47:38.404638 systemd-logind[1618]: Session 14 logged out. Waiting for processes to exit. Jan 15 00:47:38.407637 systemd-logind[1618]: Removed session 14. Jan 15 00:47:40.236167 kubelet[2763]: E0115 00:47:40.236069 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:47:43.408466 systemd[1]: Started sshd@14-10.0.0.109:22-10.0.0.1:52116.service - OpenSSH per-connection server daemon (10.0.0.1:52116). Jan 15 00:47:43.488709 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 52116 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:47:43.490412 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:47:43.498606 systemd-logind[1618]: New session 15 of user core. Jan 15 00:47:43.517753 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 15 00:47:43.623189 sshd[4190]: Connection closed by 10.0.0.1 port 52116 Jan 15 00:47:43.623724 sshd-session[4187]: pam_unix(sshd:session): session closed for user core Jan 15 00:47:43.630069 systemd[1]: sshd@14-10.0.0.109:22-10.0.0.1:52116.service: Deactivated successfully. Jan 15 00:47:43.633172 systemd[1]: session-15.scope: Deactivated successfully. Jan 15 00:47:43.634858 systemd-logind[1618]: Session 15 logged out. Waiting for processes to exit. Jan 15 00:47:43.636845 systemd-logind[1618]: Removed session 15. Jan 15 00:47:48.636999 systemd[1]: Started sshd@15-10.0.0.109:22-10.0.0.1:52118.service - OpenSSH per-connection server daemon (10.0.0.1:52118). Jan 15 00:47:48.703943 sshd[4225]: Accepted publickey for core from 10.0.0.1 port 52118 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:47:48.705636 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:47:48.712204 systemd-logind[1618]: New session 16 of user core. Jan 15 00:47:48.719948 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 15 00:47:48.808069 sshd[4228]: Connection closed by 10.0.0.1 port 52118 Jan 15 00:47:48.808494 sshd-session[4225]: pam_unix(sshd:session): session closed for user core Jan 15 00:47:48.813643 systemd[1]: sshd@15-10.0.0.109:22-10.0.0.1:52118.service: Deactivated successfully. Jan 15 00:47:48.816058 systemd[1]: session-16.scope: Deactivated successfully. Jan 15 00:47:48.817894 systemd-logind[1618]: Session 16 logged out. Waiting for processes to exit. 
Jan 15 00:47:48.819644 systemd-logind[1618]: Removed session 16. Jan 15 00:47:53.830136 systemd[1]: Started sshd@16-10.0.0.109:22-10.0.0.1:36614.service - OpenSSH per-connection server daemon (10.0.0.1:36614). Jan 15 00:47:53.897643 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 36614 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:47:53.899382 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:47:53.906226 systemd-logind[1618]: New session 17 of user core. Jan 15 00:47:53.915764 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 15 00:47:54.005355 sshd[4268]: Connection closed by 10.0.0.1 port 36614 Jan 15 00:47:54.006283 sshd-session[4265]: pam_unix(sshd:session): session closed for user core Jan 15 00:47:54.012107 systemd[1]: sshd@16-10.0.0.109:22-10.0.0.1:36614.service: Deactivated successfully. Jan 15 00:47:54.014603 systemd[1]: session-17.scope: Deactivated successfully. Jan 15 00:47:54.016385 systemd-logind[1618]: Session 17 logged out. Waiting for processes to exit. Jan 15 00:47:54.017631 systemd-logind[1618]: Removed session 17. Jan 15 00:47:59.024060 systemd[1]: Started sshd@17-10.0.0.109:22-10.0.0.1:36618.service - OpenSSH per-connection server daemon (10.0.0.1:36618). Jan 15 00:47:59.081112 sshd[4310]: Accepted publickey for core from 10.0.0.1 port 36618 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:47:59.082920 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:47:59.089302 systemd-logind[1618]: New session 18 of user core. Jan 15 00:47:59.103862 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 15 00:47:59.193223 sshd[4313]: Connection closed by 10.0.0.1 port 36618 Jan 15 00:47:59.193854 sshd-session[4310]: pam_unix(sshd:session): session closed for user core Jan 15 00:47:59.199840 systemd[1]: sshd@17-10.0.0.109:22-10.0.0.1:36618.service: Deactivated successfully. Jan 15 00:47:59.202007 systemd[1]: session-18.scope: Deactivated successfully. Jan 15 00:47:59.203402 systemd-logind[1618]: Session 18 logged out. Waiting for processes to exit. Jan 15 00:47:59.204777 update_engine[1620]: I20260115 00:47:59.204713 1620 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 15 00:47:59.204777 update_engine[1620]: I20260115 00:47:59.204767 1620 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 15 00:47:59.205136 update_engine[1620]: I20260115 00:47:59.204974 1620 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 15 00:47:59.204979 systemd-logind[1618]: Removed session 18. Jan 15 00:47:59.205488 update_engine[1620]: I20260115 00:47:59.205418 1620 omaha_request_params.cc:62] Current group set to beta Jan 15 00:47:59.205724 update_engine[1620]: I20260115 00:47:59.205587 1620 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 15 00:47:59.205724 update_engine[1620]: I20260115 00:47:59.205600 1620 update_attempter.cc:643] Scheduling an action processor start. 
Jan 15 00:47:59.205724 update_engine[1620]: I20260115 00:47:59.205613 1620 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 15 00:47:59.206031 locksmithd[1666]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 15 00:47:59.209176 update_engine[1620]: I20260115 00:47:59.209105 1620 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 15 00:47:59.209279 update_engine[1620]: I20260115 00:47:59.209228 1620 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 15 00:47:59.209327 update_engine[1620]: I20260115 00:47:59.209273 1620 omaha_request_action.cc:272] Request: Jan 15 00:47:59.209327 update_engine[1620]: Jan 15 00:47:59.209327 update_engine[1620]: Jan 15 00:47:59.209327 update_engine[1620]: Jan 15 00:47:59.209327 update_engine[1620]: Jan 15 00:47:59.209327 update_engine[1620]: Jan 15 00:47:59.209327 update_engine[1620]: Jan 15 00:47:59.209327 update_engine[1620]: Jan 15 00:47:59.209327 update_engine[1620]: Jan 15 00:47:59.209327 update_engine[1620]: I20260115 00:47:59.209285 1620 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 00:47:59.210566 update_engine[1620]: I20260115 00:47:59.210479 1620 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 00:47:59.211481 update_engine[1620]: I20260115 00:47:59.211397 1620 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 15 00:47:59.227208 update_engine[1620]: E20260115 00:47:59.227124 1620 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 15 00:47:59.227267 update_engine[1620]: I20260115 00:47:59.227216 1620 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 15 00:48:04.211096 systemd[1]: Started sshd@18-10.0.0.109:22-10.0.0.1:35718.service - OpenSSH per-connection server daemon (10.0.0.1:35718). Jan 15 00:48:04.280155 sshd[4348]: Accepted publickey for core from 10.0.0.1 port 35718 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:48:04.282081 sshd-session[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:48:04.289375 systemd-logind[1618]: New session 19 of user core. Jan 15 00:48:04.299830 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 15 00:48:04.389987 sshd[4351]: Connection closed by 10.0.0.1 port 35718 Jan 15 00:48:04.390344 sshd-session[4348]: pam_unix(sshd:session): session closed for user core Jan 15 00:48:04.403656 systemd[1]: sshd@18-10.0.0.109:22-10.0.0.1:35718.service: Deactivated successfully. Jan 15 00:48:04.405971 systemd[1]: session-19.scope: Deactivated successfully. Jan 15 00:48:04.407473 systemd-logind[1618]: Session 19 logged out. Waiting for processes to exit. Jan 15 00:48:04.410408 systemd[1]: Started sshd@19-10.0.0.109:22-10.0.0.1:35720.service - OpenSSH per-connection server daemon (10.0.0.1:35720). Jan 15 00:48:04.411634 systemd-logind[1618]: Removed session 19. Jan 15 00:48:04.479282 sshd[4364]: Accepted publickey for core from 10.0.0.1 port 35720 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:48:04.481416 sshd-session[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:48:04.487227 systemd-logind[1618]: New session 20 of user core. Jan 15 00:48:04.500991 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 15 00:48:04.723932 sshd[4382]: Connection closed by 10.0.0.1 port 35720 Jan 15 00:48:04.724434 sshd-session[4364]: pam_unix(sshd:session): session closed for user core Jan 15 00:48:04.735286 systemd[1]: sshd@19-10.0.0.109:22-10.0.0.1:35720.service: Deactivated successfully. Jan 15 00:48:04.737960 systemd[1]: session-20.scope: Deactivated successfully. Jan 15 00:48:04.739217 systemd-logind[1618]: Session 20 logged out. Waiting for processes to exit. Jan 15 00:48:04.743370 systemd[1]: Started sshd@20-10.0.0.109:22-10.0.0.1:35732.service - OpenSSH per-connection server daemon (10.0.0.1:35732). Jan 15 00:48:04.744638 systemd-logind[1618]: Removed session 20. Jan 15 00:48:04.808060 sshd[4393]: Accepted publickey for core from 10.0.0.1 port 35732 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:48:04.809754 sshd-session[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:48:04.816667 systemd-logind[1618]: New session 21 of user core. Jan 15 00:48:04.826771 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 15 00:48:05.383593 sshd[4396]: Connection closed by 10.0.0.1 port 35732 Jan 15 00:48:05.384011 sshd-session[4393]: pam_unix(sshd:session): session closed for user core Jan 15 00:48:05.394651 systemd[1]: sshd@20-10.0.0.109:22-10.0.0.1:35732.service: Deactivated successfully. Jan 15 00:48:05.397110 systemd[1]: session-21.scope: Deactivated successfully. Jan 15 00:48:05.400589 systemd-logind[1618]: Session 21 logged out. Waiting for processes to exit. Jan 15 00:48:05.404982 systemd[1]: Started sshd@21-10.0.0.109:22-10.0.0.1:35742.service - OpenSSH per-connection server daemon (10.0.0.1:35742). Jan 15 00:48:05.405887 systemd-logind[1618]: Removed session 21. Jan 15 00:48:05.475309 sshd[4416]: Accepted publickey for core from 10.0.0.1 port 35742 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:48:05.477296 sshd-session[4416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:48:05.484032 systemd-logind[1618]: New session 22 of user core. Jan 15 00:48:05.498549 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 15 00:48:05.704232 sshd[4419]: Connection closed by 10.0.0.1 port 35742 Jan 15 00:48:05.704804 sshd-session[4416]: pam_unix(sshd:session): session closed for user core Jan 15 00:48:05.718048 systemd[1]: sshd@21-10.0.0.109:22-10.0.0.1:35742.service: Deactivated successfully. Jan 15 00:48:05.720632 systemd[1]: session-22.scope: Deactivated successfully. Jan 15 00:48:05.722923 systemd-logind[1618]: Session 22 logged out. Waiting for processes to exit. Jan 15 00:48:05.726325 systemd[1]: Started sshd@22-10.0.0.109:22-10.0.0.1:35744.service - OpenSSH per-connection server daemon (10.0.0.1:35744). Jan 15 00:48:05.727642 systemd-logind[1618]: Removed session 22. Jan 15 00:48:05.789203 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 35744 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:48:05.791355 sshd-session[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:48:05.798478 systemd-logind[1618]: New session 23 of user core. Jan 15 00:48:05.806772 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 15 00:48:05.894397 sshd[4434]: Connection closed by 10.0.0.1 port 35744 Jan 15 00:48:05.894939 sshd-session[4431]: pam_unix(sshd:session): session closed for user core Jan 15 00:48:05.899483 systemd[1]: sshd@22-10.0.0.109:22-10.0.0.1:35744.service: Deactivated successfully. Jan 15 00:48:05.902051 systemd[1]: session-23.scope: Deactivated successfully. Jan 15 00:48:05.904133 systemd-logind[1618]: Session 23 logged out. Waiting for processes to exit. Jan 15 00:48:05.906226 systemd-logind[1618]: Removed session 23. Jan 15 00:48:09.208094 update_engine[1620]: I20260115 00:48:09.207940 1620 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 00:48:09.208094 update_engine[1620]: I20260115 00:48:09.208066 1620 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 00:48:09.208813 update_engine[1620]: I20260115 00:48:09.208730 1620 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 15 00:48:09.227472 update_engine[1620]: E20260115 00:48:09.227383 1620 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 15 00:48:09.227676 update_engine[1620]: I20260115 00:48:09.227599 1620 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 15 00:48:10.918756 systemd[1]: Started sshd@23-10.0.0.109:22-10.0.0.1:35760.service - OpenSSH per-connection server daemon (10.0.0.1:35760). Jan 15 00:48:10.985214 sshd[4468]: Accepted publickey for core from 10.0.0.1 port 35760 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:48:10.987777 sshd-session[4468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:48:11.000465 systemd-logind[1618]: New session 24 of user core. Jan 15 00:48:11.012013 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 15 00:48:11.109643 sshd[4471]: Connection closed by 10.0.0.1 port 35760 Jan 15 00:48:11.110141 sshd-session[4468]: pam_unix(sshd:session): session closed for user core Jan 15 00:48:11.116862 systemd[1]: sshd@23-10.0.0.109:22-10.0.0.1:35760.service: Deactivated successfully. Jan 15 00:48:11.119980 systemd[1]: session-24.scope: Deactivated successfully. Jan 15 00:48:11.121555 systemd-logind[1618]: Session 24 logged out. Waiting for processes to exit. Jan 15 00:48:11.123355 systemd-logind[1618]: Removed session 24. Jan 15 00:48:16.127609 systemd[1]: Started sshd@24-10.0.0.109:22-10.0.0.1:50300.service - OpenSSH per-connection server daemon (10.0.0.1:50300). Jan 15 00:48:16.194423 sshd[4505]: Accepted publickey for core from 10.0.0.1 port 50300 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:48:16.196735 sshd-session[4505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:48:16.204212 systemd-logind[1618]: New session 25 of user core. Jan 15 00:48:16.221891 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 15 00:48:16.303336 sshd[4508]: Connection closed by 10.0.0.1 port 50300 Jan 15 00:48:16.303794 sshd-session[4505]: pam_unix(sshd:session): session closed for user core Jan 15 00:48:16.309407 systemd[1]: sshd@24-10.0.0.109:22-10.0.0.1:50300.service: Deactivated successfully. Jan 15 00:48:16.312018 systemd[1]: session-25.scope: Deactivated successfully. Jan 15 00:48:16.313474 systemd-logind[1618]: Session 25 logged out. Waiting for processes to exit. Jan 15 00:48:16.315092 systemd-logind[1618]: Removed session 25. 
Jan 15 00:48:18.236118 kubelet[2763]: E0115 00:48:18.236009 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:48:19.207839 update_engine[1620]: I20260115 00:48:19.207603 1620 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 00:48:19.207839 update_engine[1620]: I20260115 00:48:19.207799 1620 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 00:48:19.208345 update_engine[1620]: I20260115 00:48:19.208292 1620 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 15 00:48:19.227852 update_engine[1620]: E20260115 00:48:19.227666 1620 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 15 00:48:19.227984 update_engine[1620]: I20260115 00:48:19.227939 1620 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 15 00:48:19.236256 kubelet[2763]: E0115 00:48:19.236165 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:48:21.322123 systemd[1]: Started sshd@25-10.0.0.109:22-10.0.0.1:50308.service - OpenSSH per-connection server daemon (10.0.0.1:50308). Jan 15 00:48:21.388332 sshd[4545]: Accepted publickey for core from 10.0.0.1 port 50308 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:48:21.390125 sshd-session[4545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:48:21.396431 systemd-logind[1618]: New session 26 of user core. Jan 15 00:48:21.413808 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 15 00:48:21.500138 sshd[4548]: Connection closed by 10.0.0.1 port 50308 Jan 15 00:48:21.500575 sshd-session[4545]: pam_unix(sshd:session): session closed for user core Jan 15 00:48:21.507677 systemd[1]: sshd@25-10.0.0.109:22-10.0.0.1:50308.service: Deactivated successfully. Jan 15 00:48:21.510688 systemd[1]: session-26.scope: Deactivated successfully. Jan 15 00:48:21.512319 systemd-logind[1618]: Session 26 logged out. Waiting for processes to exit. Jan 15 00:48:21.514783 systemd-logind[1618]: Removed session 26. Jan 15 00:48:26.515043 systemd[1]: Started sshd@26-10.0.0.109:22-10.0.0.1:55380.service - OpenSSH per-connection server daemon (10.0.0.1:55380). Jan 15 00:48:26.577157 sshd[4585]: Accepted publickey for core from 10.0.0.1 port 55380 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:48:26.579138 sshd-session[4585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:48:26.585571 systemd-logind[1618]: New session 27 of user core. Jan 15 00:48:26.601791 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 15 00:48:26.680234 sshd[4588]: Connection closed by 10.0.0.1 port 55380 Jan 15 00:48:26.680747 sshd-session[4585]: pam_unix(sshd:session): session closed for user core Jan 15 00:48:26.686115 systemd[1]: sshd@26-10.0.0.109:22-10.0.0.1:55380.service: Deactivated successfully. Jan 15 00:48:26.688906 systemd[1]: session-27.scope: Deactivated successfully. Jan 15 00:48:26.690204 systemd-logind[1618]: Session 27 logged out. Waiting for processes to exit. Jan 15 00:48:26.692263 systemd-logind[1618]: Removed session 27. 
Jan 15 00:48:27.236564 kubelet[2763]: E0115 00:48:27.236399 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:48:29.211461 update_engine[1620]: I20260115 00:48:29.211294 1620 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 00:48:29.211461 update_engine[1620]: I20260115 00:48:29.211447 1620 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 00:48:29.212161 update_engine[1620]: I20260115 00:48:29.212080 1620 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 15 00:48:29.229827 update_engine[1620]: E20260115 00:48:29.229481 1620 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 15 00:48:29.229827 update_engine[1620]: I20260115 00:48:29.229809 1620 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 15 00:48:29.229827 update_engine[1620]: I20260115 00:48:29.229825 1620 omaha_request_action.cc:617] Omaha request response: Jan 15 00:48:29.230020 update_engine[1620]: E20260115 00:48:29.229936 1620 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 15 00:48:29.230020 update_engine[1620]: I20260115 00:48:29.229966 1620 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 15 00:48:29.230020 update_engine[1620]: I20260115 00:48:29.229979 1620 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 15 00:48:29.230020 update_engine[1620]: I20260115 00:48:29.229990 1620 update_attempter.cc:306] Processing Done. Jan 15 00:48:29.230020 update_engine[1620]: E20260115 00:48:29.230007 1620 update_attempter.cc:619] Update failed. Jan 15 00:48:29.230020 update_engine[1620]: I20260115 00:48:29.230017 1620 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 15 00:48:29.230184 update_engine[1620]: I20260115 00:48:29.230027 1620 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 15 00:48:29.230184 update_engine[1620]: I20260115 00:48:29.230036 1620 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 15 00:48:29.230250 update_engine[1620]: I20260115 00:48:29.230173 1620 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 15 00:48:29.230250 update_engine[1620]: I20260115 00:48:29.230222 1620 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 15 00:48:29.230250 update_engine[1620]: I20260115 00:48:29.230235 1620 omaha_request_action.cc:272] Request: Jan 15 00:48:29.230250 update_engine[1620]: Jan 15 00:48:29.230250 update_engine[1620]: Jan 15 00:48:29.230250 update_engine[1620]: Jan 15 00:48:29.230250 update_engine[1620]: Jan 15 00:48:29.230250 update_engine[1620]: Jan 15 00:48:29.230250 update_engine[1620]: Jan 15 00:48:29.230250 update_engine[1620]: I20260115 00:48:29.230247 1620 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 00:48:29.230779 update_engine[1620]: I20260115 00:48:29.230278 1620 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 00:48:29.230900 update_engine[1620]: I20260115 00:48:29.230851 1620 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 15 00:48:29.230936 locksmithd[1666]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jan 15 00:48:29.249088 update_engine[1620]: E20260115 00:48:29.248929 1620 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Jan 15 00:48:29.249088 update_engine[1620]: I20260115 00:48:29.249045 1620 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 15 00:48:29.249088 update_engine[1620]: I20260115 00:48:29.249058 1620 omaha_request_action.cc:617] Omaha request response:
Jan 15 00:48:29.249088 update_engine[1620]: I20260115 00:48:29.249068 1620 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 15 00:48:29.249088 update_engine[1620]: I20260115 00:48:29.249073 1620 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 15 00:48:29.249088 update_engine[1620]: I20260115 00:48:29.249079 1620 update_attempter.cc:306] Processing Done.
Jan 15 00:48:29.249088 update_engine[1620]: I20260115 00:48:29.249087 1620 update_attempter.cc:310] Error event sent.
Jan 15 00:48:29.249088 update_engine[1620]: I20260115 00:48:29.249097 1620 update_check_scheduler.cc:74] Next update check in 48m59s
Jan 15 00:48:29.249998 locksmithd[1666]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jan 15 00:48:30.236272 kubelet[2763]: E0115 00:48:30.236142 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 15 00:48:30.512634 kernel: hrtimer: interrupt took 1842379 ns
Jan 15 00:48:31.702450 systemd[1]: Started sshd@27-10.0.0.109:22-10.0.0.1:55390.service - OpenSSH per-connection server daemon (10.0.0.1:55390).
Jan 15 00:48:31.784644 sshd[4622]: Accepted publickey for core from 10.0.0.1 port 55390 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E
Jan 15 00:48:31.786480 sshd-session[4622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 00:48:31.795203 systemd-logind[1618]: New session 28 of user core.
Jan 15 00:48:31.808117 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 15 00:48:31.921669 sshd[4625]: Connection closed by 10.0.0.1 port 55390
Jan 15 00:48:31.922140 sshd-session[4622]: pam_unix(sshd:session): session closed for user core
Jan 15 00:48:31.928896 systemd[1]: sshd@27-10.0.0.109:22-10.0.0.1:55390.service: Deactivated successfully.
Jan 15 00:48:31.932705 systemd[1]: session-28.scope: Deactivated successfully.
Jan 15 00:48:31.934675 systemd-logind[1618]: Session 28 logged out. Waiting for processes to exit.
Jan 15 00:48:31.936795 systemd-logind[1618]: Removed session 28.
Jan 15 00:48:36.938104 systemd[1]: Started sshd@28-10.0.0.109:22-10.0.0.1:48714.service - OpenSSH per-connection server daemon (10.0.0.1:48714).
Jan 15 00:48:37.030329 sshd[4659]: Accepted publickey for core from 10.0.0.1 port 48714 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E
Jan 15 00:48:37.032276 sshd-session[4659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 00:48:37.040809 systemd-logind[1618]: New session 29 of user core.
Jan 15 00:48:37.054868 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 15 00:48:37.165861 sshd[4663]: Connection closed by 10.0.0.1 port 48714
Jan 15 00:48:37.166093 sshd-session[4659]: pam_unix(sshd:session): session closed for user core
Jan 15 00:48:37.172711 systemd[1]: sshd@28-10.0.0.109:22-10.0.0.1:48714.service: Deactivated successfully.
Jan 15 00:48:37.175898 systemd[1]: session-29.scope: Deactivated successfully.
Jan 15 00:48:37.177556 systemd-logind[1618]: Session 29 logged out. Waiting for processes to exit.
Jan 15 00:48:37.179304 systemd-logind[1618]: Removed session 29.