Jan 20 14:57:49.022480 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 20 12:22:36 -00 2026
Jan 20 14:57:49.022517 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=12b88438810927d105cc313bb8ab13d0435c94d44cc3ab3377801865133595f9
Jan 20 14:57:49.022531 kernel: BIOS-provided physical RAM map:
Jan 20 14:57:49.022549 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 20 14:57:49.022559 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 20 14:57:49.022568 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 20 14:57:49.022580 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 20 14:57:49.022591 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 20 14:57:49.022659 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 20 14:57:49.022671 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 20 14:57:49.022681 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 20 14:57:49.022697 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 20 14:57:49.022707 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 20 14:57:49.022717 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 20 14:57:49.022730 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 20 14:57:49.022740 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 20 14:57:49.022852 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 20 14:57:49.022866 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 20 14:57:49.022877 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 20 14:57:49.022888 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 20 14:57:49.022899 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 20 14:57:49.022912 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 20 14:57:49.022922 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 20 14:57:49.022934 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 14:57:49.022946 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 20 14:57:49.022956 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 14:57:49.022973 kernel: NX (Execute Disable) protection: active
Jan 20 14:57:49.022983 kernel: APIC: Static calls initialized
Jan 20 14:57:49.022995 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jan 20 14:57:49.023007 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jan 20 14:57:49.023020 kernel: extended physical RAM map:
Jan 20 14:57:49.023031 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 20 14:57:49.023043 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 20 14:57:49.023055 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 20 14:57:49.023066 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 20 14:57:49.023077 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 20 14:57:49.023087 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 20 14:57:49.023103 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 20 14:57:49.023114 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jan 20 14:57:49.023124 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jan 20 14:57:49.023141 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jan 20 14:57:49.023156 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jan 20 14:57:49.023167 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jan 20 14:57:49.023263 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 20 14:57:49.023271 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 20 14:57:49.023278 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 20 14:57:49.023285 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 20 14:57:49.023292 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 20 14:57:49.023299 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 20 14:57:49.023306 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 20 14:57:49.023318 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 20 14:57:49.023325 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 20 14:57:49.023332 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 20 14:57:49.023339 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 20 14:57:49.023346 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 20 14:57:49.023353 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 14:57:49.023360 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 20 14:57:49.023367 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 14:57:49.023410 kernel: efi: EFI v2.7 by EDK II
Jan 20 14:57:49.023417 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jan 20 14:57:49.023454 kernel: random: crng init done
Jan 20 14:57:49.023465 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 20 14:57:49.023500 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 20 14:57:49.023508 kernel: secureboot: Secure boot disabled
Jan 20 14:57:49.023515 kernel: SMBIOS 2.8 present.
Jan 20 14:57:49.023522 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 20 14:57:49.023529 kernel: DMI: Memory slots populated: 1/1
Jan 20 14:57:49.023536 kernel: Hypervisor detected: KVM
Jan 20 14:57:49.023543 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 20 14:57:49.023550 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 14:57:49.023557 kernel: kvm-clock: using sched offset of 11243479322 cycles
Jan 20 14:57:49.023566 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 14:57:49.023576 kernel: tsc: Detected 2445.424 MHz processor
Jan 20 14:57:49.023584 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 14:57:49.023591 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 14:57:49.023599 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 20 14:57:49.023606 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 20 14:57:49.023614 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 14:57:49.023621 kernel: Using GB pages for direct mapping
Jan 20 14:57:49.023634 kernel: ACPI: Early table checksum verification disabled
Jan 20 14:57:49.023648 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 20 14:57:49.023660 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 20 14:57:49.023673 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 14:57:49.023686 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 14:57:49.023698 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 20 14:57:49.023710 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 14:57:49.023727 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 14:57:49.023739 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 14:57:49.023752 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 14:57:49.023764 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 20 14:57:49.023776 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 20 14:57:49.023848 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 20 14:57:49.023862 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 20 14:57:49.023880 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 20 14:57:49.023892 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 20 14:57:49.023904 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 20 14:57:49.023915 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 20 14:57:49.023927 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 20 14:57:49.023938 kernel: No NUMA configuration found
Jan 20 14:57:49.023950 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 20 14:57:49.023962 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jan 20 14:57:49.023980 kernel: Zone ranges:
Jan 20 14:57:49.023992 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 14:57:49.024006 kernel:   DMA32    [mem 0x0000000001000000-0x000000009cedbfff]
Jan 20 14:57:49.024018 kernel:   Normal   empty
Jan 20 14:57:49.024031 kernel:   Device   empty
Jan 20 14:57:49.024044 kernel: Movable zone start for each node
Jan 20 14:57:49.024058 kernel: Early memory node ranges
Jan 20 14:57:49.024070 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 20 14:57:49.024143 kernel:   node   0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 20 14:57:49.024158 kernel:   node   0: [mem 0x0000000000808000-0x000000000080afff]
Jan 20 14:57:49.024262 kernel:   node   0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 20 14:57:49.024276 kernel:   node   0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 20 14:57:49.024288 kernel:   node   0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 20 14:57:49.024300 kernel:   node   0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jan 20 14:57:49.024311 kernel:   node   0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jan 20 14:57:49.024363 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 20 14:57:49.024381 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 14:57:49.024405 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 20 14:57:49.024421 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 20 14:57:49.024433 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 14:57:49.024445 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 20 14:57:49.024457 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 20 14:57:49.024469 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 20 14:57:49.024481 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 20 14:57:49.024493 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 20 14:57:49.024510 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 14:57:49.024523 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 14:57:49.024535 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 14:57:49.024547 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 14:57:49.024563 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 14:57:49.024576 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 14:57:49.024590 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 14:57:49.024602 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 14:57:49.024617 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 14:57:49.024630 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 20 14:57:49.024644 kernel: TSC deadline timer available
Jan 20 14:57:49.024661 kernel: CPU topo: Max. logical packages: 1
Jan 20 14:57:49.024674 kernel: CPU topo: Max. logical dies: 1
Jan 20 14:57:49.024686 kernel: CPU topo: Max. dies per package: 1
Jan 20 14:57:49.024697 kernel: CPU topo: Max. threads per core: 1
Jan 20 14:57:49.024710 kernel: CPU topo: Num. cores per package: 4
Jan 20 14:57:49.024721 kernel: CPU topo: Num. threads per package: 4
Jan 20 14:57:49.024734 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 20 14:57:49.024744 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 14:57:49.024755 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 20 14:57:49.024763 kernel: kvm-guest: setup PV sched yield
Jan 20 14:57:49.024770 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 20 14:57:49.024778 kernel: Booting paravirtualized kernel on KVM
Jan 20 14:57:49.024840 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 14:57:49.024850 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 20 14:57:49.024858 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 20 14:57:49.024870 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 20 14:57:49.024878 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 20 14:57:49.024885 kernel: kvm-guest: PV spinlocks enabled
Jan 20 14:57:49.024893 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 14:57:49.024938 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=12b88438810927d105cc313bb8ab13d0435c94d44cc3ab3377801865133595f9
Jan 20 14:57:49.024947 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 14:57:49.024958 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 14:57:49.024966 kernel: Fallback order for Node 0: 0
Jan 20 14:57:49.024974 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jan 20 14:57:49.024982 kernel: Policy zone: DMA32
Jan 20 14:57:49.024989 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 14:57:49.025003 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 20 14:57:49.025018 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 20 14:57:49.025037 kernel: ftrace: allocated 157 pages with 5 groups
Jan 20 14:57:49.025052 kernel: Dynamic Preempt: voluntary
Jan 20 14:57:49.025065 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 14:57:49.025079 kernel: rcu: RCU event tracing is enabled.
Jan 20 14:57:49.025094 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 20 14:57:49.025106 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 14:57:49.025119 kernel: Rude variant of Tasks RCU enabled.
Jan 20 14:57:49.025131 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 14:57:49.025149 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 14:57:49.025157 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 20 14:57:49.025250 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 14:57:49.025260 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 14:57:49.025268 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 14:57:49.025308 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 20 14:57:49.025315 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 14:57:49.025328 kernel: Console: colour dummy device 80x25
Jan 20 14:57:49.025336 kernel: printk: legacy console [ttyS0] enabled
Jan 20 14:57:49.025344 kernel: ACPI: Core revision 20240827
Jan 20 14:57:49.025351 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 20 14:57:49.025359 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 14:57:49.025367 kernel: x2apic enabled
Jan 20 14:57:49.025374 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 14:57:49.025385 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 20 14:57:49.025393 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 20 14:57:49.025400 kernel: kvm-guest: setup PV IPIs
Jan 20 14:57:49.025408 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 20 14:57:49.025416 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Jan 20 14:57:49.025424 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Jan 20 14:57:49.025432 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 14:57:49.025443 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 14:57:49.025451 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 14:57:49.025458 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 14:57:49.025466 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 14:57:49.025474 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 14:57:49.025481 kernel: Speculative Store Bypass: Vulnerable
Jan 20 14:57:49.025489 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 14:57:49.025501 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 14:57:49.025542 kernel: active return thunk: srso_alias_return_thunk
Jan 20 14:57:49.025550 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 14:57:49.025558 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 20 14:57:49.025566 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 20 14:57:49.025574 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 14:57:49.025581 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 14:57:49.025593 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 14:57:49.025600 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 14:57:49.025608 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 14:57:49.025616 kernel: Freeing SMP alternatives memory: 32K
Jan 20 14:57:49.025624 kernel: pid_max: default: 32768 minimum: 301
Jan 20 14:57:49.025632 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 20 14:57:49.025639 kernel: landlock: Up and running.
Jan 20 14:57:49.025650 kernel: SELinux: Initializing.
Jan 20 14:57:49.025657 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 14:57:49.025665 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 14:57:49.025673 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 20 14:57:49.025681 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 20 14:57:49.025689 kernel: signal: max sigframe size: 1776
Jan 20 14:57:49.025696 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 14:57:49.025707 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 14:57:49.025715 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 20 14:57:49.025723 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 14:57:49.025730 kernel: smp: Bringing up secondary CPUs ...
Jan 20 14:57:49.025738 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 14:57:49.025746 kernel: .... node #0, CPUs: #1 #2 #3
Jan 20 14:57:49.025753 kernel: smp: Brought up 1 node, 4 CPUs
Jan 20 14:57:49.025764 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Jan 20 14:57:49.025855 kernel: Memory: 2439048K/2565800K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15540K init, 2496K bss, 120816K reserved, 0K cma-reserved)
Jan 20 14:57:49.025872 kernel: devtmpfs: initialized
Jan 20 14:57:49.025887 kernel: x86/mm: Memory block size: 128MB
Jan 20 14:57:49.025897 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 20 14:57:49.025905 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 20 14:57:49.025912 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 20 14:57:49.025925 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 20 14:57:49.025933 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jan 20 14:57:49.025941 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 20 14:57:49.025949 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 14:57:49.025957 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 20 14:57:49.025964 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 14:57:49.025972 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 14:57:49.025982 kernel: audit: initializing netlink subsys (disabled)
Jan 20 14:57:49.025991 kernel: audit: type=2000 audit(1768921061.233:1): state=initialized audit_enabled=0 res=1
Jan 20 14:57:49.026006 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 14:57:49.026020 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 14:57:49.026032 kernel: cpuidle: using governor menu
Jan 20 14:57:49.026047 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 14:57:49.026062 kernel: dca service started, version 1.12.1
Jan 20 14:57:49.026083 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 20 14:57:49.026098 kernel: PCI: Using configuration type 1 for base access
Jan 20 14:57:49.026111 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 14:57:49.026125 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 14:57:49.026138 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 14:57:49.026151 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 14:57:49.026164 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 14:57:49.026285 kernel: ACPI: Added _OSI(Module Device)
Jan 20 14:57:49.026299 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 14:57:49.026312 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 14:57:49.026325 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 14:57:49.026338 kernel: ACPI: Interpreter enabled
Jan 20 14:57:49.026351 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 20 14:57:49.026364 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 14:57:49.026377 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 14:57:49.026395 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 14:57:49.026408 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 14:57:49.026421 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 14:57:49.026935 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 14:57:49.027311 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 20 14:57:49.027667 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 20 14:57:49.027685 kernel: PCI host bridge to bus 0000:00
Jan 20 14:57:49.028021 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 14:57:49.028550 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 14:57:49.028781 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 14:57:49.029125 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 20 14:57:49.029727 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 20 14:57:49.030048 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 20 14:57:49.030445 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 14:57:49.030763 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 20 14:57:49.031138 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 20 14:57:49.031572 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jan 20 14:57:49.031923 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jan 20 14:57:49.032297 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 20 14:57:49.032550 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 14:57:49.033068 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 14:57:49.033440 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jan 20 14:57:49.033923 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jan 20 14:57:49.034395 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 20 14:57:49.034768 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 14:57:49.035155 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jan 20 14:57:49.035503 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jan 20 14:57:49.037505 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 20 14:57:49.038515 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 14:57:49.038734 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jan 20 14:57:49.039043 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jan 20 14:57:49.039367 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 20 14:57:49.039582 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jan 20 14:57:49.039863 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 20 14:57:49.040085 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 14:57:49.040438 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 20 14:57:49.040651 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jan 20 14:57:49.040917 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jan 20 14:57:49.041139 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 20 14:57:49.041446 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jan 20 14:57:49.041459 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 14:57:49.041468 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 14:57:49.041476 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 14:57:49.041484 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 14:57:49.041492 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 14:57:49.041500 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 14:57:49.041512 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 14:57:49.041520 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 14:57:49.041528 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 14:57:49.041536 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 14:57:49.041544 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 14:57:49.041552 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 14:57:49.041560 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 14:57:49.041570 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 14:57:49.041578 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 14:57:49.041586 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 14:57:49.041594 kernel: iommu: Default domain type: Translated
Jan 20 14:57:49.041602 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 14:57:49.041610 kernel: efivars: Registered efivars operations
Jan 20 14:57:49.041617 kernel: PCI: Using ACPI for IRQ routing
Jan 20 14:57:49.041628 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 14:57:49.041636 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 20 14:57:49.041643 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 20 14:57:49.041651 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jan 20 14:57:49.041659 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jan 20 14:57:49.041666 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 20 14:57:49.041674 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 20 14:57:49.041685 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jan 20 14:57:49.041692 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 20 14:57:49.041961 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 14:57:49.042242 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 14:57:49.042458 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 14:57:49.042469 kernel: vgaarb: loaded
Jan 20 14:57:49.042482 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 20 14:57:49.042490 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 20 14:57:49.042498 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 14:57:49.042506 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 14:57:49.042514 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 14:57:49.042522 kernel: pnp: PnP ACPI init
Jan 20 14:57:49.042746 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 20 14:57:49.042761 kernel: pnp: PnP ACPI: found 6 devices
Jan 20 14:57:49.042770 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 14:57:49.042777 kernel: NET: Registered PF_INET protocol family
Jan 20 14:57:49.042845 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 14:57:49.042854 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 14:57:49.042862 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 14:57:49.042870 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 14:57:49.042899 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 14:57:49.042910 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 14:57:49.042918 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 14:57:49.042926 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 14:57:49.042934 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 14:57:49.042943 kernel: NET: Registered PF_XDP protocol family
Jan 20 14:57:49.043158 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 20 14:57:49.043459 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jan 20 14:57:49.043656 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 14:57:49.043906 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 14:57:49.044102 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 14:57:49.044376 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 20 14:57:49.044571 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 20 14:57:49.044770 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 20 14:57:49.044784 kernel: PCI: CLS 0 bytes, default 64
Jan 20 14:57:49.044849 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Jan 20 14:57:49.044858 kernel: Initialise system trusted keyrings
Jan 20 14:57:49.044866 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 14:57:49.044875 kernel: Key type asymmetric registered
Jan 20 14:57:49.044883 kernel: Asymmetric key parser 'x509' registered
Jan 20 14:57:49.044895 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 20 14:57:49.044903 kernel: io scheduler mq-deadline registered
Jan 20 14:57:49.044911 kernel: io scheduler kyber registered
Jan 20 14:57:49.044919 kernel: io scheduler bfq registered
Jan 20 14:57:49.044927 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 14:57:49.044936 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 14:57:49.044944 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 14:57:49.044955 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 14:57:49.044966 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 14:57:49.044977 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 14:57:49.044986 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 14:57:49.044994 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 14:57:49.045005 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 14:57:49.045372 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 14:57:49.045387 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 20 14:57:49.045592 kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 14:57:49.045859 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T14:57:44 UTC (1768921064)
Jan 20 14:57:49.046089 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 20 14:57:49.046107 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 14:57:49.046116 kernel: efifb: probing for efifb
Jan 20 14:57:49.046124 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 20 14:57:49.046132 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 20 14:57:49.046140 kernel: efifb: scrolling: redraw
Jan 20 14:57:49.046149 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 20 14:57:49.046157 kernel: Console: switching to colour frame buffer device 160x50
Jan 20 14:57:49.046255 kernel: fb0: EFI VGA frame buffer device
Jan 20 14:57:49.046265 kernel: pstore: Using crash dump compression: deflate
Jan 20 14:57:49.046274 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 20 14:57:49.046282 kernel: NET: Registered PF_INET6 protocol family
Jan 20 14:57:49.046290 kernel: Segment Routing with IPv6
Jan 20 14:57:49.046298 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 14:57:49.046307 kernel: NET: Registered PF_PACKET protocol family
Jan 20 14:57:49.046319 kernel: Key type dns_resolver registered
Jan 20 14:57:49.046327 kernel: IPI shorthand broadcast: enabled
Jan 20 14:57:49.046335 kernel: sched_clock: Marking stable (4853046830, 644550601)->(5689020264,
-191422833) Jan 20 14:57:49.046343 kernel: registered taskstats version 1 Jan 20 14:57:49.046352 kernel: Loading compiled-in X.509 certificates Jan 20 14:57:49.046360 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 34a030021dd6c1575d5ad60346eaf4cdadaee6ef' Jan 20 14:57:49.046368 kernel: Demotion targets for Node 0: null Jan 20 14:57:49.046379 kernel: Key type .fscrypt registered Jan 20 14:57:49.046387 kernel: Key type fscrypt-provisioning registered Jan 20 14:57:49.046395 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 20 14:57:49.046403 kernel: ima: Allocated hash algorithm: sha1 Jan 20 14:57:49.046411 kernel: ima: No architecture policies found Jan 20 14:57:49.046419 kernel: clk: Disabling unused clocks Jan 20 14:57:49.046427 kernel: Freeing unused kernel image (initmem) memory: 15540K Jan 20 14:57:49.046439 kernel: Write protecting the kernel read-only data: 47104k Jan 20 14:57:49.046447 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 20 14:57:49.046455 kernel: Run /init as init process Jan 20 14:57:49.046467 kernel: with arguments: Jan 20 14:57:49.046482 kernel: /init Jan 20 14:57:49.046497 kernel: with environment: Jan 20 14:57:49.046511 kernel: HOME=/ Jan 20 14:57:49.046520 kernel: TERM=linux Jan 20 14:57:49.046532 kernel: SCSI subsystem initialized Jan 20 14:57:49.046540 kernel: libata version 3.00 loaded. 
Jan 20 14:57:49.046889 kernel: ahci 0000:00:1f.2: version 3.0 Jan 20 14:57:49.046903 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 20 14:57:49.047111 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 20 14:57:49.047438 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 20 14:57:49.047648 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 20 14:57:49.048119 kernel: scsi host0: ahci Jan 20 14:57:49.048430 kernel: scsi host1: ahci Jan 20 14:57:49.048702 kernel: scsi host2: ahci Jan 20 14:57:49.048982 kernel: scsi host3: ahci Jan 20 14:57:49.049323 kernel: scsi host4: ahci Jan 20 14:57:49.049638 kernel: scsi host5: ahci Jan 20 14:57:49.049652 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Jan 20 14:57:49.049661 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Jan 20 14:57:49.049670 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Jan 20 14:57:49.049678 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Jan 20 14:57:49.049686 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Jan 20 14:57:49.049699 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Jan 20 14:57:49.049707 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 20 14:57:49.049715 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 20 14:57:49.049723 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 20 14:57:49.049731 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 20 14:57:49.049740 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 20 14:57:49.049748 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 20 14:57:49.049759 kernel: ata3.00: LPM support broken, forcing max_power Jan 20 14:57:49.049767 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 20 14:57:49.049775 
kernel: ata3.00: applying bridge limits Jan 20 14:57:49.049783 kernel: ata3.00: LPM support broken, forcing max_power Jan 20 14:57:49.049844 kernel: ata3.00: configured for UDMA/100 Jan 20 14:57:49.050141 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 20 14:57:49.050542 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 20 14:57:49.050767 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 20 14:57:49.050779 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 14:57:49.050852 kernel: GPT:16515071 != 27000831 Jan 20 14:57:49.050862 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 14:57:49.051103 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 20 14:57:49.051116 kernel: GPT:16515071 != 27000831 Jan 20 14:57:49.051129 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 14:57:49.051137 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 14:57:49.051145 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 14:57:49.051922 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 20 14:57:49.051939 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 14:57:49.051947 kernel: device-mapper: uevent: version 1.0.3 Jan 20 14:57:49.051956 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 14:57:49.051970 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 20 14:57:49.051978 kernel: raid6: avx2x4 gen() 31144 MB/s Jan 20 14:57:49.051986 kernel: raid6: avx2x2 gen() 30429 MB/s Jan 20 14:57:49.051994 kernel: raid6: avx2x1 gen() 21700 MB/s Jan 20 14:57:49.052002 kernel: raid6: using algorithm avx2x4 gen() 31144 MB/s Jan 20 14:57:49.052011 kernel: raid6: .... 
xor() 5260 MB/s, rmw enabled Jan 20 14:57:49.052019 kernel: raid6: using avx2x2 recovery algorithm Jan 20 14:57:49.052030 kernel: xor: automatically using best checksumming function avx Jan 20 14:57:49.052039 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 14:57:49.052047 kernel: BTRFS: device fsid 17137bed-8163-406c-98f9-6d4bb6770bf0 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (182) Jan 20 14:57:49.052055 kernel: BTRFS info (device dm-0): first mount of filesystem 17137bed-8163-406c-98f9-6d4bb6770bf0 Jan 20 14:57:49.052063 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 14:57:49.052071 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 14:57:49.052079 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 14:57:49.052090 kernel: loop: module loaded Jan 20 14:57:49.052098 kernel: loop0: detected capacity change from 0 to 100552 Jan 20 14:57:49.052107 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 14:57:49.052116 systemd[1]: Successfully made /usr/ read-only. Jan 20 14:57:49.052127 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 14:57:49.052137 systemd[1]: Detected virtualization kvm. Jan 20 14:57:49.052148 systemd[1]: Detected architecture x86-64. Jan 20 14:57:49.052159 systemd[1]: Running in initrd. Jan 20 14:57:49.052242 systemd[1]: No hostname configured, using default hostname. Jan 20 14:57:49.052253 systemd[1]: Hostname set to . Jan 20 14:57:49.052262 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 14:57:49.052271 systemd[1]: Queued start job for default target initrd.target. 
Jan 20 14:57:49.052283 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 14:57:49.052292 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 14:57:49.052300 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 14:57:49.052310 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 14:57:49.052319 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 14:57:49.052328 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 14:57:49.052339 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 14:57:49.052348 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 14:57:49.052357 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 14:57:49.052365 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 14:57:49.052374 systemd[1]: Reached target paths.target - Path Units. Jan 20 14:57:49.052382 systemd[1]: Reached target slices.target - Slice Units. Jan 20 14:57:49.052391 systemd[1]: Reached target swap.target - Swaps. Jan 20 14:57:49.052408 systemd[1]: Reached target timers.target - Timer Units. Jan 20 14:57:49.052423 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 14:57:49.052439 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 14:57:49.052450 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 14:57:49.052459 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 14:57:49.052468 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Jan 20 14:57:49.052476 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 14:57:49.052489 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 14:57:49.052497 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 14:57:49.052506 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 14:57:49.052514 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 14:57:49.052523 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 14:57:49.052531 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 14:57:49.052543 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 14:57:49.052552 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 14:57:49.052561 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 14:57:49.052570 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 14:57:49.052579 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 14:57:49.052591 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 14:57:49.052599 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 14:57:49.052608 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 14:57:49.052616 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 14:57:49.052657 systemd-journald[318]: Collecting audit messages is enabled. Jan 20 14:57:49.052682 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Jan 20 14:57:49.052691 systemd-journald[318]: Journal started Jan 20 14:57:49.052712 systemd-journald[318]: Runtime Journal (/run/log/journal/4e74a44ad4524f1fa08c7ed2bddc6427) is 6M, max 48M, 42M free. Jan 20 14:57:49.057506 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 14:57:49.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:49.061590 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 14:57:49.203497 kernel: audit: type=1130 audit(1768921069.057:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:49.318440 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 14:57:49.319833 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 14:57:49.335364 kernel: Bridge firewalling registered Jan 20 14:57:49.331937 systemd-tmpfiles[332]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 20 14:57:49.359476 kernel: audit: type=1130 audit(1768921069.335:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:49.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:49.335339 systemd-modules-load[321]: Inserted module 'br_netfilter' Jan 20 14:57:49.366834 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 20 14:57:49.392694 kernel: audit: type=1130 audit(1768921069.366:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:49.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:49.393098 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 14:57:49.420330 kernel: audit: type=1130 audit(1768921069.393:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:49.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:49.420452 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 14:57:49.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:49.444563 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 14:57:49.471709 kernel: audit: type=1130 audit(1768921069.433:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:49.473361 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 20 14:57:49.519563 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 14:57:49.584729 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 14:57:49.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:49.616463 kernel: audit: type=1130 audit(1768921069.590:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:49.611000 audit: BPF prog-id=6 op=LOAD Jan 20 14:57:49.620674 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 14:57:49.622919 kernel: audit: type=1334 audit(1768921069.611:8): prog-id=6 op=LOAD Jan 20 14:57:49.635696 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 14:57:49.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:49.671579 kernel: audit: type=1130 audit(1768921069.635:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:49.754748 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 14:57:49.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:57:49.837084 kernel: audit: type=1130 audit(1768921069.770:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:49.866957 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 14:57:50.978660 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1787659648 wd_nsec: 1787659004 Jan 20 14:57:51.052962 dracut-cmdline[356]: dracut-109 Jan 20 14:57:51.062014 dracut-cmdline[356]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=12b88438810927d105cc313bb8ab13d0435c94d44cc3ab3377801865133595f9 Jan 20 14:57:51.134486 systemd-resolved[352]: Positive Trust Anchors: Jan 20 14:57:51.134533 systemd-resolved[352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 14:57:51.134539 systemd-resolved[352]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 14:57:51.134567 systemd-resolved[352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 14:57:51.195755 kernel: audit: type=1130 audit(1768921071.180:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:51.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:51.164547 systemd-resolved[352]: Defaulting to hostname 'linux'. Jan 20 14:57:51.172568 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 14:57:51.180841 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 14:57:51.390415 kernel: Loading iSCSI transport class v2.0-870. Jan 20 14:57:51.421460 kernel: iscsi: registered transport (tcp) Jan 20 14:57:51.461308 kernel: iscsi: registered transport (qla4xxx) Jan 20 14:57:51.461353 kernel: QLogic iSCSI HBA Driver Jan 20 14:57:51.527514 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 14:57:51.638374 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 20 14:57:51.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:51.687824 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 14:57:51.740916 kernel: hrtimer: interrupt took 3936962 ns Jan 20 14:57:52.148152 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 14:57:52.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:52.161767 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 14:57:52.171320 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 14:57:52.243657 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 14:57:52.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:52.254000 audit: BPF prog-id=7 op=LOAD Jan 20 14:57:52.255000 audit: BPF prog-id=8 op=LOAD Jan 20 14:57:52.256558 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 14:57:52.335432 systemd-udevd[588]: Using default interface naming scheme 'v257'. Jan 20 14:57:52.359687 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 14:57:52.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:52.371475 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 20 14:57:52.434911 dracut-pre-trigger[651]: rd.md=0: removing MD RAID activation Jan 20 14:57:52.457568 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 14:57:52.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:52.470000 audit: BPF prog-id=9 op=LOAD Jan 20 14:57:52.472008 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 14:57:52.511384 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 14:57:52.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:52.520300 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 14:57:52.568293 systemd-networkd[711]: lo: Link UP Jan 20 14:57:52.568331 systemd-networkd[711]: lo: Gained carrier Jan 20 14:57:52.570301 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 14:57:52.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:52.572380 systemd[1]: Reached target network.target - Network. Jan 20 14:57:52.711328 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 14:57:52.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:52.746472 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 20 14:57:52.784245 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 20 14:57:52.832924 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 14:57:52.853027 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 14:57:52.871883 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 20 14:57:52.882383 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 14:57:52.925862 disk-uuid[763]: Primary Header is updated. Jan 20 14:57:52.925862 disk-uuid[763]: Secondary Entries is updated. Jan 20 14:57:52.925862 disk-uuid[763]: Secondary Header is updated. Jan 20 14:57:52.941666 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 20 14:57:52.945446 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 14:57:52.965259 kernel: AES CTR mode by8 optimization enabled Jan 20 14:57:53.001981 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 14:57:53.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:53.002119 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 14:57:53.008062 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 14:57:53.016079 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 14:57:53.309452 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 14:57:53.310084 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 20 14:57:53.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:53.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:53.327477 systemd-networkd[711]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 14:57:53.327529 systemd-networkd[711]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 14:57:53.354884 systemd-networkd[711]: eth0: Link UP Jan 20 14:57:53.362556 systemd-networkd[711]: eth0: Gained carrier Jan 20 14:57:53.362606 systemd-networkd[711]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 14:57:53.374575 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 14:57:53.408462 systemd-networkd[711]: eth0: DHCPv4 address 10.0.0.64/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 14:57:53.476559 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 14:57:53.486549 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 14:57:53.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:53.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:57:53.496683 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 14:57:53.521140 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 14:57:53.533116 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 14:57:53.546019 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 14:57:53.605409 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 14:57:53.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:54.303261 disk-uuid[764]: Warning: The kernel is still using the old partition table. Jan 20 14:57:54.303261 disk-uuid[764]: The new table will be used at the next reboot or after you Jan 20 14:57:54.303261 disk-uuid[764]: run partprobe(8) or kpartx(8) Jan 20 14:57:54.303261 disk-uuid[764]: The operation has completed successfully. Jan 20 14:57:54.332575 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 14:57:54.332874 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 14:57:54.335084 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 14:57:54.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:54.361042 kernel: kauditd_printk_skb: 17 callbacks suppressed Jan 20 14:57:54.361085 kernel: audit: type=1130 audit(1768921074.332:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:57:54.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:54.382118 kernel: audit: type=1131 audit(1768921074.332:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:54.424372 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (863) Jan 20 14:57:54.434598 kernel: BTRFS info (device vda6): first mount of filesystem 942b9c6f-515e-4c56-bf89-1c8ad8ddeab7 Jan 20 14:57:54.434628 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 14:57:54.448444 kernel: BTRFS info (device vda6): turning on async discard Jan 20 14:57:54.448475 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 14:57:54.468395 kernel: BTRFS info (device vda6): last unmount of filesystem 942b9c6f-515e-4c56-bf89-1c8ad8ddeab7 Jan 20 14:57:54.471760 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 14:57:54.473667 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 14:57:54.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:54.499393 kernel: audit: type=1130 audit(1768921074.471:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:57:55.039523 systemd-networkd[711]: eth0: Gained IPv6LL Jan 20 14:57:55.145506 ignition[882]: Ignition 2.24.0 Jan 20 14:57:55.145521 ignition[882]: Stage: fetch-offline Jan 20 14:57:55.163568 ignition[882]: no configs at "/usr/lib/ignition/base.d" Jan 20 14:57:55.163642 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 14:57:55.164083 ignition[882]: parsed url from cmdline: "" Jan 20 14:57:55.164089 ignition[882]: no config URL provided Jan 20 14:57:55.164364 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 14:57:55.164387 ignition[882]: no config at "/usr/lib/ignition/user.ign" Jan 20 14:57:55.164512 ignition[882]: op(1): [started] loading QEMU firmware config module Jan 20 14:57:55.164522 ignition[882]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 20 14:57:55.191289 ignition[882]: op(1): [finished] loading QEMU firmware config module Jan 20 14:57:55.191396 ignition[882]: QEMU firmware config was not found. Ignoring... Jan 20 14:57:55.317109 ignition[882]: parsing config with SHA512: 4f8ad02986f4e718fb91385dc961d8610ee8c7992f4ea1d3eee4021efc49b2a2d52cbfdeb40147a572f0f88aa3c77cf1afa04c60eed482c6c2606afa098db20d Jan 20 14:57:55.326704 unknown[882]: fetched base config from "system" Jan 20 14:57:55.326776 unknown[882]: fetched user config from "qemu" Jan 20 14:57:55.327920 ignition[882]: fetch-offline: fetch-offline passed Jan 20 14:57:55.328068 ignition[882]: Ignition finished successfully Jan 20 14:57:55.345692 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 14:57:55.375424 kernel: audit: type=1130 audit(1768921075.351:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:57:55.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:55.351798 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 20 14:57:55.354122 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 14:57:56.075473 ignition[892]: Ignition 2.24.0 Jan 20 14:57:56.075576 ignition[892]: Stage: kargs Jan 20 14:57:56.076713 ignition[892]: no configs at "/usr/lib/ignition/base.d" Jan 20 14:57:56.076727 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 14:57:56.079497 ignition[892]: kargs: kargs passed Jan 20 14:57:56.079552 ignition[892]: Ignition finished successfully Jan 20 14:57:56.107505 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 14:57:56.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:56.134334 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 14:57:56.150675 kernel: audit: type=1130 audit(1768921076.120:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:57:56.384801 ignition[900]: Ignition 2.24.0 Jan 20 14:57:56.384903 ignition[900]: Stage: disks Jan 20 14:57:56.385155 ignition[900]: no configs at "/usr/lib/ignition/base.d" Jan 20 14:57:56.385272 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 14:57:56.387601 ignition[900]: disks: disks passed Jan 20 14:57:56.387678 ignition[900]: Ignition finished successfully Jan 20 14:57:56.415773 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 14:57:56.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:56.439027 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 14:57:56.449464 kernel: audit: type=1130 audit(1768921076.426:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:56.439429 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 14:57:56.449581 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 14:57:56.470072 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 14:57:56.470312 systemd[1]: Reached target basic.target - Basic System. Jan 20 14:57:56.490094 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 14:57:56.595139 systemd-fsck[910]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 20 14:57:56.603916 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 14:57:56.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:57:56.618665 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 14:57:56.633671 kernel: audit: type=1130 audit(1768921076.616:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:56.861328 kernel: EXT4-fs (vda9): mounted filesystem 258d228c-90db-4a07-8ba3-cf3df974c261 r/w with ordered data mode. Quota mode: none. Jan 20 14:57:56.862898 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 14:57:56.871107 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 14:57:56.879486 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 14:57:56.905694 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 14:57:56.923353 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (918) Jan 20 14:57:56.911921 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 14:57:56.911972 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 14:57:56.912006 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 14:57:56.929625 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 14:57:56.937912 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 20 14:57:56.982088 kernel: BTRFS info (device vda6): first mount of filesystem 942b9c6f-515e-4c56-bf89-1c8ad8ddeab7 Jan 20 14:57:56.982117 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 14:57:56.996617 kernel: BTRFS info (device vda6): turning on async discard Jan 20 14:57:56.996654 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 14:57:56.999538 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 14:57:57.727347 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 14:57:57.761865 kernel: audit: type=1130 audit(1768921077.733:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:57.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:57.740113 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 14:57:57.780555 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 14:57:57.881722 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 14:57:57.891661 kernel: BTRFS info (device vda6): last unmount of filesystem 942b9c6f-515e-4c56-bf89-1c8ad8ddeab7 Jan 20 14:57:57.925294 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 14:57:57.945473 kernel: audit: type=1130 audit(1768921077.929:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:57.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:57:58.068306 ignition[1015]: INFO : Ignition 2.24.0 Jan 20 14:57:58.068306 ignition[1015]: INFO : Stage: mount Jan 20 14:57:58.075703 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 14:57:58.075703 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 14:57:58.075703 ignition[1015]: INFO : mount: mount passed Jan 20 14:57:58.075703 ignition[1015]: INFO : Ignition finished successfully Jan 20 14:57:58.118465 kernel: audit: type=1130 audit(1768921078.090:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:58.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:57:58.075753 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 14:57:58.092098 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 14:57:58.134643 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 14:57:58.167350 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1027) Jan 20 14:57:58.176343 kernel: BTRFS info (device vda6): first mount of filesystem 942b9c6f-515e-4c56-bf89-1c8ad8ddeab7 Jan 20 14:57:58.176368 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 14:57:58.190461 kernel: BTRFS info (device vda6): turning on async discard Jan 20 14:57:58.190495 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 14:57:58.193030 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 20 14:57:58.462297 ignition[1044]: INFO : Ignition 2.24.0 Jan 20 14:57:58.462297 ignition[1044]: INFO : Stage: files Jan 20 14:57:58.477383 ignition[1044]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 14:57:58.483899 ignition[1044]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 14:57:58.496048 ignition[1044]: DEBUG : files: compiled without relabeling support, skipping Jan 20 14:57:58.507925 ignition[1044]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 14:57:58.507925 ignition[1044]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 14:57:58.527383 ignition[1044]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 14:57:58.534998 ignition[1044]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 14:57:58.542656 ignition[1044]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 14:57:58.535542 unknown[1044]: wrote ssh authorized keys file for user: core Jan 20 14:57:58.558400 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 14:57:58.558400 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 20 14:57:58.855450 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 14:57:59.145856 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 14:57:59.160374 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 14:57:59.160374 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jan 20 14:57:59.160374 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 14:57:59.160374 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 14:57:59.160374 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 14:57:59.224261 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 14:57:59.224261 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 14:57:59.224261 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 14:57:59.249416 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 14:57:59.249416 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 14:57:59.249416 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 14:57:59.249416 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 14:57:59.249416 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 14:57:59.249416 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 20 14:57:59.854010 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 14:58:01.536024 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 14:58:01.536024 ignition[1044]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 14:58:01.554713 ignition[1044]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 14:58:01.565399 ignition[1044]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 14:58:01.565399 ignition[1044]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 14:58:01.565399 ignition[1044]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 20 14:58:01.565399 ignition[1044]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 14:58:01.565399 ignition[1044]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 14:58:01.565399 ignition[1044]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 20 14:58:01.565399 ignition[1044]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 20 14:58:01.620279 ignition[1044]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 14:58:01.640467 ignition[1044]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 14:58:01.640467 ignition[1044]: INFO : files: op(f): [finished] setting 
preset to disabled for "coreos-metadata.service" Jan 20 14:58:01.640467 ignition[1044]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 20 14:58:01.640467 ignition[1044]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 14:58:01.640467 ignition[1044]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 14:58:01.640467 ignition[1044]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 14:58:01.640467 ignition[1044]: INFO : files: files passed Jan 20 14:58:01.710593 kernel: audit: type=1130 audit(1768921081.670:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:01.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:01.710656 ignition[1044]: INFO : Ignition finished successfully Jan 20 14:58:01.652485 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 14:58:01.674391 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 14:58:01.691377 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 14:58:01.753320 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 14:58:01.758607 initrd-setup-root-after-ignition[1076]: grep: /sysroot/oem/oem-release: No such file or directory Jan 20 14:58:01.782288 kernel: audit: type=1130 audit(1768921081.764:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:58:01.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:01.758550 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 14:58:01.798022 kernel: audit: type=1131 audit(1768921081.782:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:01.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:01.798094 initrd-setup-root-after-ignition[1078]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 14:58:01.798094 initrd-setup-root-after-ignition[1078]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 14:58:01.829807 kernel: audit: type=1130 audit(1768921081.803:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:01.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:01.782921 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 14:58:01.836635 initrd-setup-root-after-ignition[1082]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 14:58:01.836958 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Jan 20 14:58:01.846151 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 14:58:01.959729 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 14:58:01.960119 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 14:58:01.999962 kernel: audit: type=1130 audit(1768921081.969:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:02.000017 kernel: audit: type=1131 audit(1768921081.969:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:01.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:01.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:01.970326 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 14:58:02.008699 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 14:58:02.023598 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 14:58:02.025011 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 14:58:02.111809 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 14:58:02.139442 kernel: audit: type=1130 audit(1768921082.111:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:58:02.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:02.114421 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 14:58:02.183650 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 14:58:02.183944 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 14:58:02.198600 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 14:58:02.205094 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 14:58:02.218270 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 14:58:02.218445 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 14:58:02.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:02.234314 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 14:58:02.254356 kernel: audit: type=1131 audit(1768921082.228:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:02.245620 systemd[1]: Stopped target basic.target - Basic System. Jan 20 14:58:02.254602 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 14:58:02.273018 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 14:58:02.278937 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 14:58:02.289699 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Jan 20 14:58:02.299665 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 14:58:02.308792 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 14:58:02.310020 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 14:58:02.327451 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 14:58:02.340295 systemd[1]: Stopped target swap.target - Swaps. Jan 20 14:58:02.344064 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 14:58:02.344324 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 14:58:02.372004 kernel: audit: type=1131 audit(1768921082.352:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:02.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:02.357111 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 14:58:02.371742 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 14:58:02.381473 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 14:58:02.395062 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 14:58:02.396137 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 14:58:02.430560 kernel: audit: type=1131 audit(1768921082.406:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:58:02.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:02.396427 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 14:58:02.435325 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 14:58:02.435532 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 14:58:02.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:02.440344 systemd[1]: Stopped target paths.target - Path Units. Jan 20 14:58:02.449814 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 14:58:02.450533 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 14:58:02.457345 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 14:58:02.466672 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 14:58:02.476680 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 14:58:02.476878 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 14:58:02.484650 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 14:58:02.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:02.484786 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 14:58:02.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:58:02.492666 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 20 14:58:02.492800 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 20 14:58:02.500442 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 14:58:02.500558 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 14:58:02.509036 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 14:58:02.509309 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 14:58:02.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:02.540734 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 14:58:02.550893 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 14:58:02.551355 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 14:58:02.561533 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 14:58:02.592950 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 14:58:02.593435 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 14:58:02.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:02.660444 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 14:58:02.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:58:02.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:02.660907 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 14:58:02.673500 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 14:58:02.673790 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 14:58:02.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:02.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:02.714658 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 14:58:02.714903 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 14:58:02.743542 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 14:58:02.789632 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 14:58:02.789917 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 14:58:02.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:58:02.807040 ignition[1102]: INFO : Ignition 2.24.0 Jan 20 14:58:02.807040 ignition[1102]: INFO : Stage: umount Jan 20 14:58:02.823878 ignition[1102]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 14:58:02.833400 ignition[1102]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 14:58:02.840035 ignition[1102]: INFO : umount: umount passed Jan 20 14:58:02.840035 ignition[1102]: INFO : Ignition finished successfully Jan 20 14:58:02.839544 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 14:58:02.839805 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 14:58:02.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:02.870702 systemd[1]: Stopped target network.target - Network. Jan 20 14:58:02.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:02.878723 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 14:58:02.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:02.879030 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 14:58:02.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:02.880331 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jan 20 14:58:02.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:02.880402 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 14:58:02.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:02.894406 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 14:58:02.894492 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 14:58:02.906758 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 14:58:02.906907 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 14:58:02.916064 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 14:58:02.916334 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 14:58:02.927722 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 14:58:02.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:02.938786 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 14:58:02.972554 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 14:58:02.972939 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 14:58:03.007000 audit: BPF prog-id=9 op=UNLOAD Jan 20 14:58:03.000556 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 20 14:58:03.015530 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 14:58:03.015643 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Jan 20 14:58:03.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:03.017786 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 14:58:03.027050 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 14:58:03.027327 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 14:58:03.039087 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 14:58:03.087636 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 14:58:03.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:03.087904 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 14:58:03.128526 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 14:58:03.128697 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 14:58:03.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:03.158000 audit: BPF prog-id=6 op=UNLOAD Jan 20 14:58:03.161501 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 14:58:03.161591 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 14:58:03.174300 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 14:58:03.174381 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 14:58:03.197944 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jan 20 14:58:03.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:03.198334 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 14:58:03.226261 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 14:58:03.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:03.226409 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 14:58:03.256812 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 14:58:03.257714 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 14:58:03.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:03.283589 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 14:58:03.289910 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 14:58:03.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:03.290045 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 14:58:03.294942 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 14:58:03.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:58:03.295004 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 14:58:03.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:03.340089 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 14:58:03.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:03.340445 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 14:58:03.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:03.355316 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 14:58:03.355643 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 14:58:03.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:03.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:03.373693 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 14:58:03.373813 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 14:58:03.403472 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Jan 20 14:58:03.403663 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 14:58:03.460494 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 14:58:03.460745 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 14:58:03.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:03.495586 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 14:58:03.504610 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 14:58:03.661421 systemd[1]: Switching root. Jan 20 14:58:03.771420 systemd-journald[318]: Received SIGTERM from PID 1 (systemd). Jan 20 14:58:03.771632 systemd-journald[318]: Journal stopped Jan 20 14:58:07.970307 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 14:58:07.970372 kernel: SELinux: policy capability open_perms=1 Jan 20 14:58:07.970391 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 14:58:07.970407 kernel: SELinux: policy capability always_check_network=0 Jan 20 14:58:07.970419 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 14:58:07.970440 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 14:58:07.970452 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 14:58:07.970464 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 14:58:07.970476 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 14:58:07.970490 systemd[1]: Successfully loaded SELinux policy in 232.561ms. Jan 20 14:58:07.970521 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.481ms. 
Jan 20 14:58:07.970534 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 14:58:07.970547 systemd[1]: Detected virtualization kvm. Jan 20 14:58:07.970560 systemd[1]: Detected architecture x86-64. Jan 20 14:58:07.970577 systemd[1]: Detected first boot. Jan 20 14:58:07.970648 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 14:58:07.970663 zram_generator::config[1146]: No configuration found. Jan 20 14:58:07.970684 kernel: Guest personality initialized and is inactive Jan 20 14:58:07.970697 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 20 14:58:07.970709 kernel: Initialized host personality Jan 20 14:58:07.970721 kernel: NET: Registered PF_VSOCK protocol family Jan 20 14:58:07.970733 systemd[1]: Populated /etc with preset unit settings. Jan 20 14:58:07.970745 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 14:58:07.970757 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 14:58:07.970777 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 14:58:07.970794 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 14:58:07.970807 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 14:58:07.970819 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 14:58:07.970893 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 14:58:07.970908 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Jan 20 14:58:07.970925 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 14:58:07.970938 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 14:58:07.970950 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 14:58:07.970964 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 14:58:07.970988 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 14:58:07.971001 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 14:58:07.971067 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 14:58:07.971081 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 14:58:07.971096 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 14:58:07.971109 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 14:58:07.971121 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 14:58:07.971137 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 14:58:07.971151 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 14:58:07.971163 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 14:58:07.973283 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 14:58:07.973300 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 14:58:07.973314 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 14:58:07.973332 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Jan 20 14:58:07.973345 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 20 14:58:07.973358 systemd[1]: Reached target slices.target - Slice Units. Jan 20 14:58:07.973371 systemd[1]: Reached target swap.target - Swaps. Jan 20 14:58:07.973385 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 14:58:07.973397 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 14:58:07.973410 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 14:58:07.973426 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 14:58:07.973439 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 20 14:58:07.973452 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 14:58:07.973517 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 20 14:58:07.973531 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 20 14:58:07.973544 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 14:58:07.973556 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 14:58:07.973569 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 14:58:07.973585 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 14:58:07.973598 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 14:58:07.973610 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 14:58:07.973623 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 14:58:07.973638 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jan 20 14:58:07.973651 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 14:58:07.973666 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 14:58:07.973679 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 14:58:07.973692 systemd[1]: Reached target machines.target - Containers. Jan 20 14:58:07.973704 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 14:58:07.973717 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 14:58:07.973730 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 14:58:07.973743 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 14:58:07.973758 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 14:58:07.973771 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 14:58:07.973784 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 14:58:07.973891 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 14:58:07.973905 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 14:58:07.973918 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 14:58:07.973931 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 14:58:07.973947 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 14:58:07.973960 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 14:58:07.973973 systemd[1]: Stopped systemd-fsck-usr.service. 
Jan 20 14:58:07.973986 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 14:58:07.973999 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 14:58:07.974015 kernel: ACPI: bus type drm_connector registered Jan 20 14:58:07.974027 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 14:58:07.974040 kernel: fuse: init (API version 7.41) Jan 20 14:58:07.974052 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 14:58:07.974065 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 14:58:07.974078 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 14:58:07.974094 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 14:58:07.974107 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 14:58:07.974120 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 14:58:07.974158 systemd-journald[1227]: Collecting audit messages is enabled. Jan 20 14:58:07.974277 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 20 14:58:07.974296 kernel: audit: type=1305 audit(1768921087.966:106): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 14:58:07.974358 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jan 20 14:58:07.974373 kernel: audit: type=1300 audit(1768921087.966:106): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc8743dcd0 a2=4000 a3=0 items=0 ppid=1 pid=1227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 14:58:07.974386 kernel: audit: type=1327 audit(1768921087.966:106): proctitle="/usr/lib/systemd/systemd-journald" Jan 20 14:58:07.974399 systemd-journald[1227]: Journal started Jan 20 14:58:07.974420 systemd-journald[1227]: Runtime Journal (/run/log/journal/4e74a44ad4524f1fa08c7ed2bddc6427) is 6M, max 48M, 42M free. Jan 20 14:58:05.800000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 20 14:58:06.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:06.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:58:06.437000 audit: BPF prog-id=14 op=UNLOAD Jan 20 14:58:06.437000 audit: BPF prog-id=13 op=UNLOAD Jan 20 14:58:06.441000 audit: BPF prog-id=15 op=LOAD Jan 20 14:58:06.442000 audit: BPF prog-id=16 op=LOAD Jan 20 14:58:06.442000 audit: BPF prog-id=17 op=LOAD Jan 20 14:58:07.966000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 14:58:07.966000 audit[1227]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc8743dcd0 a2=4000 a3=0 items=0 ppid=1 pid=1227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 14:58:07.966000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 20 14:58:05.433350 systemd[1]: Queued start job for default target multi-user.target. Jan 20 14:58:05.457541 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 14:58:05.459022 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 14:58:05.460486 systemd[1]: systemd-journald.service: Consumed 1.503s CPU time. Jan 20 14:58:08.021269 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 14:58:08.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.049428 kernel: audit: type=1130 audit(1768921088.025:107): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.028496 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 14:58:08.043712 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jan 20 14:58:08.049508 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 14:58:08.055486 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 14:58:08.060967 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 14:58:08.067525 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 14:58:08.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.079359 kernel: audit: type=1130 audit(1768921088.066:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.085099 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 14:58:08.085529 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 14:58:08.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.097302 kernel: audit: type=1130 audit(1768921088.084:109): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.103686 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 14:58:08.104360 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 14:58:08.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 20 14:58:08.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.129488 kernel: audit: type=1130 audit(1768921088.103:110): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.129528 kernel: audit: type=1131 audit(1768921088.103:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.134281 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 14:58:08.134567 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 14:58:08.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.145355 kernel: audit: type=1130 audit(1768921088.133:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.145391 kernel: audit: type=1131 audit(1768921088.133:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:58:08.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.161646 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 14:58:08.162143 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 14:58:08.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.168757 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 14:58:08.169362 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 14:58:08.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.175399 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 14:58:08.175711 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 20 14:58:08.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.182453 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 14:58:08.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.188916 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 14:58:08.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.197981 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 14:58:08.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.207652 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 14:58:08.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:58:08.219551 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 14:58:08.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.274063 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 14:58:08.281147 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 20 14:58:08.292716 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 14:58:08.307579 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 14:58:08.319435 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 14:58:08.319474 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 14:58:08.332282 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 14:58:08.341317 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 14:58:08.341458 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 14:58:08.345902 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 14:58:08.357799 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 14:58:08.366970 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 14:58:08.369444 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jan 20 14:58:08.374669 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 14:58:08.379622 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 14:58:08.387962 systemd-journald[1227]: Time spent on flushing to /var/log/journal/4e74a44ad4524f1fa08c7ed2bddc6427 is 41.416ms for 1205 entries. Jan 20 14:58:08.387962 systemd-journald[1227]: System Journal (/var/log/journal/4e74a44ad4524f1fa08c7ed2bddc6427) is 8M, max 163.5M, 155.5M free. Jan 20 14:58:08.450589 systemd-journald[1227]: Received client request to flush runtime journal. Jan 20 14:58:08.396643 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 14:58:08.412440 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 14:58:08.420724 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 14:58:08.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.428924 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 14:58:08.443935 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 14:58:08.454346 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 14:58:08.461505 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 20 14:58:08.470654 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Jan 20 14:58:08.477316 kernel: loop1: detected capacity change from 0 to 171112 Jan 20 14:58:08.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.492360 kernel: loop1: p1 p2 p3 Jan 20 14:58:08.507546 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 14:58:08.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.534895 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 20 14:58:08.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.561705 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 14:58:08.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.574000 audit: BPF prog-id=18 op=LOAD Jan 20 14:58:08.575000 audit: BPF prog-id=19 op=LOAD Jan 20 14:58:08.576000 audit: BPF prog-id=20 op=LOAD Jan 20 14:58:08.577976 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 20 14:58:08.583349 kernel: erofs: (device loop1p1): mounted with root inode @ nid 39. Jan 20 14:58:08.589000 audit: BPF prog-id=21 op=LOAD Jan 20 14:58:08.590758 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 20 14:58:08.598815 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 14:58:08.619000 audit: BPF prog-id=22 op=LOAD Jan 20 14:58:08.622000 audit: BPF prog-id=23 op=LOAD Jan 20 14:58:08.626395 kernel: loop2: detected capacity change from 0 to 375256 Jan 20 14:58:08.635393 kernel: loop2: p1 p2 p3 Jan 20 14:58:08.634000 audit: BPF prog-id=24 op=LOAD Jan 20 14:58:08.641700 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 20 14:58:08.651000 audit: BPF prog-id=25 op=LOAD Jan 20 14:58:08.651000 audit: BPF prog-id=26 op=LOAD Jan 20 14:58:08.651000 audit: BPF prog-id=27 op=LOAD Jan 20 14:58:08.655017 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 14:58:08.698459 kernel: erofs: (device loop2p1): mounted with root inode @ nid 39. Jan 20 14:58:08.731637 kernel: loop3: detected capacity change from 0 to 219144 Jan 20 14:58:08.757630 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Jan 20 14:58:08.757648 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Jan 20 14:58:08.759098 systemd-nsresourced[1288]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 20 14:58:08.763096 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 20 14:58:08.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.788473 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 14:58:08.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 14:58:08.831347 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 14:58:08.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.845290 kernel: loop4: detected capacity change from 0 to 171112 Jan 20 14:58:08.848307 kernel: loop4: p1 p2 p3 Jan 20 14:58:08.919311 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 20 14:58:08.919432 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Jan 20 14:58:08.919483 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL) Jan 20 14:58:08.927970 kernel: device-mapper: ioctl: error adding target to table Jan 20 14:58:08.928138 (sd-merge)[1306]: device-mapper: reload ioctl on 8c7c96915202989b4a0dcbd1acd80ba2f75612a91a267e360f9baafdceea3d6f-verity (253:1) failed: Invalid argument Jan 20 14:58:08.953465 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 20 14:58:08.956543 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 14:58:08.968364 systemd-oomd[1285]: No swap; memory pressure usage will be degraded Jan 20 14:58:08.969939 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 20 14:58:08.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:08.996669 systemd-resolved[1286]: Positive Trust Anchors: Jan 20 14:58:08.996725 systemd-resolved[1286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 14:58:08.996730 systemd-resolved[1286]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 14:58:08.996759 systemd-resolved[1286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 14:58:09.013615 systemd-resolved[1286]: Defaulting to hostname 'linux'. Jan 20 14:58:09.017109 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 14:58:09.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:09.022954 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 14:58:09.723039 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 14:58:09.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:09.730000 audit: BPF prog-id=8 op=UNLOAD Jan 20 14:58:09.730000 audit: BPF prog-id=7 op=UNLOAD Jan 20 14:58:09.731000 audit: BPF prog-id=28 op=LOAD Jan 20 14:58:09.731000 audit: BPF prog-id=29 op=LOAD Jan 20 14:58:09.733496 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 14:58:09.829368 systemd-udevd[1313]: Using default interface naming scheme 'v257'. 
Jan 20 14:58:09.871525 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 14:58:09.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:09.879000 audit: BPF prog-id=30 op=LOAD Jan 20 14:58:09.881330 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 14:58:09.948313 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 14:58:10.198390 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 14:58:10.213740 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 14:58:10.215293 systemd-networkd[1317]: lo: Link UP Jan 20 14:58:10.215305 systemd-networkd[1317]: lo: Gained carrier Jan 20 14:58:10.219373 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 20 14:58:10.220377 systemd-networkd[1317]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 14:58:10.220456 systemd-networkd[1317]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 20 14:58:10.222025 systemd-networkd[1317]: eth0: Link UP Jan 20 14:58:10.222908 systemd-networkd[1317]: eth0: Gained carrier Jan 20 14:58:10.222978 systemd-networkd[1317]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 14:58:10.235512 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 20 14:58:10.260055 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 14:58:10.273926 kernel: ACPI: button: Power Button [PWRF] Jan 20 14:58:10.273947 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 14:58:10.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:10.237797 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 14:58:10.253618 systemd[1]: Reached target network.target - Network. Jan 20 14:58:10.254317 systemd-networkd[1317]: eth0: DHCPv4 address 10.0.0.64/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 14:58:10.275622 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 14:58:10.292728 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 14:58:10.312460 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 14:58:10.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:10.366063 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Jan 20 14:58:10.374419 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 14:58:10.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:10.424642 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 14:58:10.566923 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 14:58:10.567666 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 14:58:10.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:10.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:10.585089 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 14:58:10.745404 kernel: erofs: (device dm-1): mounted with root inode @ nid 39. 
Jan 20 14:58:10.760315 kernel: loop5: detected capacity change from 0 to 375256 Jan 20 14:58:10.767505 kernel: loop5: p1 p2 p3 Jan 20 14:58:10.843876 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 20 14:58:10.844029 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Jan 20 14:58:10.855115 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Jan 20 14:58:10.855312 kernel: device-mapper: ioctl: error adding target to table Jan 20 14:58:10.858322 (sd-merge)[1306]: device-mapper: reload ioctl on 843577122f2bcae09e086c1955c04b6b28388e52152c2016187e408266e84aa6-verity (253:2) failed: Invalid argument Jan 20 14:58:10.863907 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 20 14:58:10.866768 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 14:58:10.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:10.938164 kernel: kvm_amd: TSC scaling supported Jan 20 14:58:10.938336 kernel: kvm_amd: Nested Virtualization enabled Jan 20 14:58:10.938358 kernel: kvm_amd: Nested Paging enabled Jan 20 14:58:10.944465 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 20 14:58:10.944521 kernel: kvm_amd: PMU virtualization is disabled Jan 20 14:58:10.993394 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. Jan 20 14:58:11.007395 kernel: loop6: detected capacity change from 0 to 219144 Jan 20 14:58:11.059037 (sd-merge)[1306]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 20 14:58:11.068307 (sd-merge)[1306]: Merged extensions into '/usr'. Jan 20 14:58:11.074780 systemd[1]: Reload requested from client PID 1267 ('systemd-sysext') (unit systemd-sysext.service)... 
Jan 20 14:58:11.074992 systemd[1]: Reloading... Jan 20 14:58:11.091323 kernel: EDAC MC: Ver: 3.0.0 Jan 20 14:58:11.292326 zram_generator::config[1417]: No configuration found. Jan 20 14:58:11.688700 systemd[1]: Reloading finished in 611 ms. Jan 20 14:58:11.734745 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 14:58:11.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:11.769342 systemd[1]: Starting ensure-sysext.service... Jan 20 14:58:11.778813 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 14:58:11.805000 audit: BPF prog-id=31 op=LOAD Jan 20 14:58:11.805000 audit: BPF prog-id=32 op=LOAD Jan 20 14:58:11.805000 audit: BPF prog-id=28 op=UNLOAD Jan 20 14:58:11.805000 audit: BPF prog-id=29 op=UNLOAD Jan 20 14:58:11.807000 audit: BPF prog-id=33 op=LOAD Jan 20 14:58:11.807000 audit: BPF prog-id=21 op=UNLOAD Jan 20 14:58:11.810000 audit: BPF prog-id=34 op=LOAD Jan 20 14:58:11.810000 audit: BPF prog-id=18 op=UNLOAD Jan 20 14:58:11.810000 audit: BPF prog-id=35 op=LOAD Jan 20 14:58:11.810000 audit: BPF prog-id=36 op=LOAD Jan 20 14:58:11.810000 audit: BPF prog-id=19 op=UNLOAD Jan 20 14:58:11.810000 audit: BPF prog-id=20 op=UNLOAD Jan 20 14:58:11.817000 audit: BPF prog-id=37 op=LOAD Jan 20 14:58:11.818000 audit: BPF prog-id=25 op=UNLOAD Jan 20 14:58:11.818000 audit: BPF prog-id=38 op=LOAD Jan 20 14:58:11.818000 audit: BPF prog-id=39 op=LOAD Jan 20 14:58:11.818000 audit: BPF prog-id=26 op=UNLOAD Jan 20 14:58:11.818000 audit: BPF prog-id=27 op=UNLOAD Jan 20 14:58:11.820000 audit: BPF prog-id=40 op=LOAD Jan 20 14:58:11.820000 audit: BPF prog-id=15 op=UNLOAD Jan 20 14:58:11.820000 audit: BPF prog-id=41 op=LOAD Jan 20 14:58:11.820000 audit: BPF prog-id=42 op=LOAD Jan 20 14:58:11.821000 
audit: BPF prog-id=16 op=UNLOAD Jan 20 14:58:11.821000 audit: BPF prog-id=17 op=UNLOAD Jan 20 14:58:11.822000 audit: BPF prog-id=43 op=LOAD Jan 20 14:58:11.822000 audit: BPF prog-id=30 op=UNLOAD Jan 20 14:58:11.825000 audit: BPF prog-id=44 op=LOAD Jan 20 14:58:11.825000 audit: BPF prog-id=22 op=UNLOAD Jan 20 14:58:11.825000 audit: BPF prog-id=45 op=LOAD Jan 20 14:58:11.825000 audit: BPF prog-id=46 op=LOAD Jan 20 14:58:11.826000 audit: BPF prog-id=23 op=UNLOAD Jan 20 14:58:11.826000 audit: BPF prog-id=24 op=UNLOAD Jan 20 14:58:11.834709 systemd-tmpfiles[1451]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 14:58:11.834823 systemd-tmpfiles[1451]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 14:58:11.835630 systemd-tmpfiles[1451]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 14:58:11.836765 systemd[1]: Reload requested from client PID 1450 ('systemctl') (unit ensure-sysext.service)... Jan 20 14:58:11.836917 systemd[1]: Reloading... Jan 20 14:58:11.838538 systemd-tmpfiles[1451]: ACLs are not supported, ignoring. Jan 20 14:58:11.838736 systemd-tmpfiles[1451]: ACLs are not supported, ignoring. Jan 20 14:58:11.852762 systemd-tmpfiles[1451]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 14:58:11.852982 systemd-tmpfiles[1451]: Skipping /boot Jan 20 14:58:11.869696 systemd-networkd[1317]: eth0: Gained IPv6LL Jan 20 14:58:11.884599 systemd-tmpfiles[1451]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 14:58:11.884683 systemd-tmpfiles[1451]: Skipping /boot Jan 20 14:58:11.957339 zram_generator::config[1485]: No configuration found. Jan 20 14:58:12.266378 systemd[1]: Reloading finished in 428 ms. Jan 20 14:58:12.292053 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Jan 20 14:58:12.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:12.302000 audit: BPF prog-id=47 op=LOAD Jan 20 14:58:12.302000 audit: BPF prog-id=33 op=UNLOAD Jan 20 14:58:12.305000 audit: BPF prog-id=48 op=LOAD Jan 20 14:58:12.305000 audit: BPF prog-id=40 op=UNLOAD Jan 20 14:58:12.305000 audit: BPF prog-id=49 op=LOAD Jan 20 14:58:12.305000 audit: BPF prog-id=50 op=LOAD Jan 20 14:58:12.305000 audit: BPF prog-id=41 op=UNLOAD Jan 20 14:58:12.305000 audit: BPF prog-id=42 op=UNLOAD Jan 20 14:58:12.308000 audit: BPF prog-id=51 op=LOAD Jan 20 14:58:12.324000 audit: BPF prog-id=43 op=UNLOAD Jan 20 14:58:12.325000 audit: BPF prog-id=52 op=LOAD Jan 20 14:58:12.325000 audit: BPF prog-id=44 op=UNLOAD Jan 20 14:58:12.325000 audit: BPF prog-id=53 op=LOAD Jan 20 14:58:12.325000 audit: BPF prog-id=54 op=LOAD Jan 20 14:58:12.325000 audit: BPF prog-id=45 op=UNLOAD Jan 20 14:58:12.325000 audit: BPF prog-id=46 op=UNLOAD Jan 20 14:58:12.326000 audit: BPF prog-id=55 op=LOAD Jan 20 14:58:12.326000 audit: BPF prog-id=56 op=LOAD Jan 20 14:58:12.326000 audit: BPF prog-id=31 op=UNLOAD Jan 20 14:58:12.326000 audit: BPF prog-id=32 op=UNLOAD Jan 20 14:58:12.327000 audit: BPF prog-id=57 op=LOAD Jan 20 14:58:12.328000 audit: BPF prog-id=37 op=UNLOAD Jan 20 14:58:12.328000 audit: BPF prog-id=58 op=LOAD Jan 20 14:58:12.328000 audit: BPF prog-id=59 op=LOAD Jan 20 14:58:12.328000 audit: BPF prog-id=38 op=UNLOAD Jan 20 14:58:12.328000 audit: BPF prog-id=39 op=UNLOAD Jan 20 14:58:12.329000 audit: BPF prog-id=60 op=LOAD Jan 20 14:58:12.329000 audit: BPF prog-id=34 op=UNLOAD Jan 20 14:58:12.329000 audit: BPF prog-id=61 op=LOAD Jan 20 14:58:12.329000 audit: BPF prog-id=62 op=LOAD Jan 20 14:58:12.329000 audit: BPF prog-id=35 op=UNLOAD Jan 20 14:58:12.329000 audit: BPF prog-id=36 op=UNLOAD Jan 20 
14:58:12.334463 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 14:58:12.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:12.356042 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 14:58:12.363434 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 14:58:12.369582 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 14:58:12.384370 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 14:58:12.393490 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 14:58:12.403126 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 14:58:12.413516 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 14:58:12.413746 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 14:58:12.415594 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 14:58:12.427740 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 14:58:12.436737 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 14:58:12.445774 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 14:58:12.446059 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Jan 20 14:58:12.446151 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 14:58:12.446331 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 14:58:12.450000 audit[1534]: SYSTEM_BOOT pid=1534 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 20 14:58:12.450796 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 14:58:12.451513 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 14:58:12.463426 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 14:58:12.463722 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 14:58:12.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:12.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:12.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:12.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 20 14:58:12.473674 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 14:58:12.474420 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 14:58:12.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:12.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 14:58:12.490006 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 14:58:12.489000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 20 14:58:12.489000 audit[1553]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcdc02e940 a2=420 a3=0 items=0 ppid=1524 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 14:58:12.489000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 14:58:12.490387 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 14:58:12.490647 augenrules[1553]: No rules Jan 20 14:58:12.493622 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 14:58:12.504469 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 14:58:12.504924 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 20 14:58:12.511956 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 14:58:12.524369 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 14:58:12.524552 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 14:58:12.526569 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 14:58:12.534736 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 14:58:12.548944 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 14:58:12.558020 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 14:58:12.558392 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 14:58:12.558583 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 14:58:12.558753 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 14:58:12.562917 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 14:58:12.563380 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 14:58:12.570489 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 14:58:12.571165 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 14:58:12.571537 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 20 14:58:12.591351 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 14:58:12.593970 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 14:58:12.599341 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 14:58:12.604284 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 14:58:12.614610 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 14:58:12.624953 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 14:58:12.631009 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 14:58:12.631355 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 14:58:12.631456 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 14:58:12.631569 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 14:58:12.631637 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 14:58:12.633739 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 14:58:12.635386 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 14:58:12.641648 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 20 14:58:12.642024 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 14:58:12.649733 augenrules[1568]: /sbin/augenrules: No change Jan 20 14:58:12.651767 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 14:58:12.652526 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 14:58:12.663000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 14:58:12.663000 audit[1588]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff012deb80 a2=420 a3=0 items=0 ppid=1568 pid=1588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 14:58:12.663000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 14:58:12.665137 systemd[1]: Finished ensure-sysext.service. Jan 20 14:58:12.665000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 20 14:58:12.665000 audit[1588]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff012e1010 a2=420 a3=0 items=0 ppid=1568 pid=1588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 14:58:12.665000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 14:58:12.665905 augenrules[1588]: No rules Jan 20 14:58:12.670121 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 14:58:12.671333 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 14:58:12.677097 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 20 14:58:12.677610 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 14:58:12.689934 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 14:58:12.690019 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 14:58:12.693282 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 14:58:12.944398 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 14:58:14.180099 systemd-timesyncd[1600]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 14:58:14.180360 systemd-timesyncd[1600]: Initial clock synchronization to Tue 2026-01-20 14:58:14.179965 UTC. Jan 20 14:58:14.181213 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 14:58:14.181622 systemd-resolved[1286]: Clock change detected. Flushing caches. Jan 20 14:58:15.026502 ldconfig[1526]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 14:58:15.036607 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 14:58:15.048374 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 14:58:15.136872 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 14:58:15.144379 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 14:58:15.150401 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 14:58:15.157439 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 14:58:15.163940 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 20 14:58:15.170205 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Jan 20 14:58:15.176531 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 14:58:15.183239 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 20 14:58:15.189841 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 20 14:58:15.195610 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 14:58:15.202226 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 14:58:15.202304 systemd[1]: Reached target paths.target - Path Units. Jan 20 14:58:15.207240 systemd[1]: Reached target timers.target - Timer Units. Jan 20 14:58:15.215213 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 14:58:15.222891 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 14:58:15.241046 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 14:58:15.247116 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 14:58:15.252866 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 14:58:15.285395 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 14:58:15.291847 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 14:58:15.299019 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 14:58:15.305842 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 14:58:15.310953 systemd[1]: Reached target basic.target - Basic System. Jan 20 14:58:15.315274 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jan 20 14:58:15.315395 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 14:58:15.317412 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 14:58:15.323775 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 14:58:15.332929 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 14:58:15.347808 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 14:58:15.357854 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 14:58:15.380246 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 14:58:15.385857 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 14:58:15.388766 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 20 14:58:15.392122 jq[1613]: false Jan 20 14:58:15.396234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 14:58:15.405922 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 14:58:15.412966 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 14:58:15.419285 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 14:58:15.428917 extend-filesystems[1614]: Found /dev/vda6 Jan 20 14:58:15.438787 google_oslogin_nss_cache[1615]: oslogin_cache_refresh[1615]: Refreshing passwd entry cache Jan 20 14:58:15.419466 oslogin_cache_refresh[1615]: Refreshing passwd entry cache Jan 20 14:58:15.426833 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 20 14:58:15.440141 extend-filesystems[1614]: Found /dev/vda9 Jan 20 14:58:15.444600 extend-filesystems[1614]: Checking size of /dev/vda9 Jan 20 14:58:15.449088 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 14:58:15.452281 google_oslogin_nss_cache[1615]: oslogin_cache_refresh[1615]: Failure getting users, quitting Jan 20 14:58:15.452281 google_oslogin_nss_cache[1615]: oslogin_cache_refresh[1615]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 14:58:15.452281 google_oslogin_nss_cache[1615]: oslogin_cache_refresh[1615]: Refreshing group entry cache Jan 20 14:58:15.450809 oslogin_cache_refresh[1615]: Failure getting users, quitting Jan 20 14:58:15.450904 oslogin_cache_refresh[1615]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 14:58:15.450959 oslogin_cache_refresh[1615]: Refreshing group entry cache Jan 20 14:58:15.471836 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 14:58:15.477048 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 14:58:15.477950 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 14:58:15.479070 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 14:58:15.479916 extend-filesystems[1614]: Resized partition /dev/vda9 Jan 20 14:58:15.497866 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 20 14:58:15.496854 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 14:58:15.498061 google_oslogin_nss_cache[1615]: oslogin_cache_refresh[1615]: Failure getting groups, quitting Jan 20 14:58:15.498061 google_oslogin_nss_cache[1615]: oslogin_cache_refresh[1615]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Jan 20 14:58:15.498121 extend-filesystems[1634]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 14:58:15.489800 oslogin_cache_refresh[1615]: Failure getting groups, quitting Jan 20 14:58:15.489817 oslogin_cache_refresh[1615]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 14:58:15.523874 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 14:58:15.532219 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 14:58:15.535911 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 14:58:15.536308 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 20 14:58:15.536762 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 20 14:58:15.547873 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 14:58:15.551595 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 14:58:15.558058 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 14:58:15.558590 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 14:58:15.583177 jq[1637]: true Jan 20 14:58:15.589551 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 14:58:15.600806 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 20 14:58:15.626517 update_engine[1633]: I20260120 14:58:15.600919 1633 main.cc:92] Flatcar Update Engine starting Jan 20 14:58:15.626182 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 14:58:15.627606 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 14:58:15.631633 extend-filesystems[1634]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 14:58:15.631633 extend-filesystems[1634]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 14:58:15.631633 extend-filesystems[1634]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. 
Jan 20 14:58:15.655955 extend-filesystems[1614]: Resized filesystem in /dev/vda9 Jan 20 14:58:15.637953 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 14:58:15.660990 jq[1669]: true Jan 20 14:58:15.638372 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 14:58:15.685507 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 14:58:15.692359 tar[1649]: linux-amd64/LICENSE Jan 20 14:58:15.701149 tar[1649]: linux-amd64/helm Jan 20 14:58:15.725082 systemd-logind[1631]: Watching system buttons on /dev/input/event2 (Power Button) Jan 20 14:58:15.725115 systemd-logind[1631]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 14:58:15.728197 systemd-logind[1631]: New seat seat0. Jan 20 14:58:15.748095 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 14:58:15.791841 dbus-daemon[1611]: [system] SELinux support is enabled Jan 20 14:58:15.792397 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 14:58:15.802909 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 14:58:15.802994 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 14:58:15.809774 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 14:58:15.809847 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 14:58:15.934939 dbus-daemon[1611]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 20 14:58:15.938306 systemd[1]: Started update-engine.service - Update Engine. 
Jan 20 14:58:15.946867 update_engine[1633]: I20260120 14:58:15.946307 1633 update_check_scheduler.cc:74] Next update check in 7m4s Jan 20 14:58:15.947996 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 14:58:16.029830 bash[1708]: Updated "/home/core/.ssh/authorized_keys" Jan 20 14:58:16.017952 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 14:58:16.034274 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 14:58:16.617882 locksmithd[1707]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 14:58:16.689201 sshd_keygen[1653]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 14:58:16.859008 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 14:58:16.877035 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 14:58:16.938539 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 14:58:16.939115 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 14:58:16.951304 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 14:58:16.993223 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 14:58:17.004099 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 14:58:17.011495 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 14:58:17.127591 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 20 14:58:17.772284 containerd[1676]: time="2026-01-20T14:58:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 14:58:17.779901 containerd[1676]: time="2026-01-20T14:58:17.779739397Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 20 14:58:17.859812 tar[1649]: linux-amd64/README.md Jan 20 14:58:17.877953 containerd[1676]: time="2026-01-20T14:58:17.877814976Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.695µs" Jan 20 14:58:17.877953 containerd[1676]: time="2026-01-20T14:58:17.877927055Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 14:58:17.878184 containerd[1676]: time="2026-01-20T14:58:17.877998879Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 14:58:17.878184 containerd[1676]: time="2026-01-20T14:58:17.878022403Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 14:58:17.879428 containerd[1676]: time="2026-01-20T14:58:17.879263180Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 14:58:17.879594 containerd[1676]: time="2026-01-20T14:58:17.879508638Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 14:58:17.879977 containerd[1676]: time="2026-01-20T14:58:17.879886914Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 14:58:17.879977 containerd[1676]: time="2026-01-20T14:58:17.879972674Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 14:58:17.881928 containerd[1676]: time="2026-01-20T14:58:17.881850922Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 14:58:17.881928 containerd[1676]: time="2026-01-20T14:58:17.881913178Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 14:58:17.881928 containerd[1676]: time="2026-01-20T14:58:17.881926914Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 14:58:17.882002 containerd[1676]: time="2026-01-20T14:58:17.881935280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 14:58:17.882852 containerd[1676]: time="2026-01-20T14:58:17.882797860Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 14:58:17.883126 containerd[1676]: time="2026-01-20T14:58:17.883047607Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 14:58:17.883848 containerd[1676]: time="2026-01-20T14:58:17.883603144Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 14:58:17.886540 containerd[1676]: time="2026-01-20T14:58:17.886304223Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 14:58:17.886540 containerd[1676]: time="2026-01-20T14:58:17.886519042Z" level=info msg="loading plugin" 
id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 14:58:17.889814 containerd[1676]: time="2026-01-20T14:58:17.886826566Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 14:58:17.892101 containerd[1676]: time="2026-01-20T14:58:17.892015666Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 14:58:17.892398 containerd[1676]: time="2026-01-20T14:58:17.892247088Z" level=info msg="metadata content store policy set" policy=shared Jan 20 14:58:17.904271 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 14:58:17.909730 containerd[1676]: time="2026-01-20T14:58:17.908391228Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 14:58:17.909730 containerd[1676]: time="2026-01-20T14:58:17.908551427Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 14:58:17.909730 containerd[1676]: time="2026-01-20T14:58:17.908737975Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 14:58:17.909730 containerd[1676]: time="2026-01-20T14:58:17.908755098Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 14:58:17.909730 containerd[1676]: time="2026-01-20T14:58:17.908771769Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 14:58:17.909730 containerd[1676]: time="2026-01-20T14:58:17.908784112Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 14:58:17.909730 containerd[1676]: time="2026-01-20T14:58:17.908795423Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service 
type=io.containerd.service.v1 Jan 20 14:58:17.909730 containerd[1676]: time="2026-01-20T14:58:17.908804510Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 14:58:17.909730 containerd[1676]: time="2026-01-20T14:58:17.908815921Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 14:58:17.909730 containerd[1676]: time="2026-01-20T14:58:17.908827844Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 14:58:17.909730 containerd[1676]: time="2026-01-20T14:58:17.908842401Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 14:58:17.909730 containerd[1676]: time="2026-01-20T14:58:17.908852930Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 14:58:17.909730 containerd[1676]: time="2026-01-20T14:58:17.908863019Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 14:58:17.909730 containerd[1676]: time="2026-01-20T14:58:17.909053004Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 14:58:17.910197 containerd[1676]: time="2026-01-20T14:58:17.909238300Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 14:58:17.910197 containerd[1676]: time="2026-01-20T14:58:17.909272444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 14:58:17.910197 containerd[1676]: time="2026-01-20T14:58:17.909288123Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 14:58:17.910197 containerd[1676]: time="2026-01-20T14:58:17.909297841Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Jan 20 14:58:17.910197 containerd[1676]: time="2026-01-20T14:58:17.909308751Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 14:58:17.910197 containerd[1676]: time="2026-01-20T14:58:17.909453001Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 14:58:17.910197 containerd[1676]: time="2026-01-20T14:58:17.909469442Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 14:58:17.910197 containerd[1676]: time="2026-01-20T14:58:17.909487075Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 14:58:17.910197 containerd[1676]: time="2026-01-20T14:58:17.909498546Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 14:58:17.910197 containerd[1676]: time="2026-01-20T14:58:17.909508524Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 14:58:17.910197 containerd[1676]: time="2026-01-20T14:58:17.909519165Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 14:58:17.911022 containerd[1676]: time="2026-01-20T14:58:17.910990181Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 14:58:17.911493 containerd[1676]: time="2026-01-20T14:58:17.911419382Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 14:58:17.911586 containerd[1676]: time="2026-01-20T14:58:17.911568220Z" level=info msg="Start snapshots syncer" Jan 20 14:58:17.914886 containerd[1676]: time="2026-01-20T14:58:17.914638608Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 14:58:17.915848 containerd[1676]: 
time="2026-01-20T14:58:17.915485139Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 14:58:17.915848 containerd[1676]: time="2026-01-20T14:58:17.915773227Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox 
type=io.containerd.podsandbox.controller.v1 Jan 20 14:58:17.917900 containerd[1676]: time="2026-01-20T14:58:17.917855424Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 14:58:17.918081 containerd[1676]: time="2026-01-20T14:58:17.918004954Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 14:58:17.918116 containerd[1676]: time="2026-01-20T14:58:17.918084452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 14:58:17.918116 containerd[1676]: time="2026-01-20T14:58:17.918098699Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 14:58:17.918116 containerd[1676]: time="2026-01-20T14:58:17.918110441Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 14:58:17.918170 containerd[1676]: time="2026-01-20T14:58:17.918123325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 14:58:17.918170 containerd[1676]: time="2026-01-20T14:58:17.918135007Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 14:58:17.918170 containerd[1676]: time="2026-01-20T14:58:17.918153271Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 14:58:17.918170 containerd[1676]: time="2026-01-20T14:58:17.918165924Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 14:58:17.918249 containerd[1676]: time="2026-01-20T14:58:17.918176935Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 14:58:17.918269 containerd[1676]: time="2026-01-20T14:58:17.918256243Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp 
type=io.containerd.tracing.processor.v1 Jan 20 14:58:17.918289 containerd[1676]: time="2026-01-20T14:58:17.918273055Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 14:58:17.918289 containerd[1676]: time="2026-01-20T14:58:17.918282192Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 14:58:17.918386 containerd[1676]: time="2026-01-20T14:58:17.918292611Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 14:58:17.918386 containerd[1676]: time="2026-01-20T14:58:17.918302600Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 14:58:17.919773 containerd[1676]: time="2026-01-20T14:58:17.918513243Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 14:58:17.919773 containerd[1676]: time="2026-01-20T14:58:17.918603923Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 14:58:17.919773 containerd[1676]: time="2026-01-20T14:58:17.918630011Z" level=info msg="runtime interface created" Jan 20 14:58:17.919773 containerd[1676]: time="2026-01-20T14:58:17.918636353Z" level=info msg="created NRI interface" Jan 20 14:58:17.919773 containerd[1676]: time="2026-01-20T14:58:17.918736210Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 14:58:17.919773 containerd[1676]: time="2026-01-20T14:58:17.918751518Z" level=info msg="Connect containerd service" Jan 20 14:58:17.919773 containerd[1676]: time="2026-01-20T14:58:17.918773810Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 14:58:17.929967 containerd[1676]: time="2026-01-20T14:58:17.929812132Z" 
level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 14:58:18.720455 containerd[1676]: time="2026-01-20T14:58:18.720191520Z" level=info msg="Start subscribing containerd event" Jan 20 14:58:18.721537 containerd[1676]: time="2026-01-20T14:58:18.720310462Z" level=info msg="Start recovering state" Jan 20 14:58:18.725290 containerd[1676]: time="2026-01-20T14:58:18.724973941Z" level=info msg="Start event monitor" Jan 20 14:58:18.725290 containerd[1676]: time="2026-01-20T14:58:18.725104886Z" level=info msg="Start cni network conf syncer for default" Jan 20 14:58:18.725290 containerd[1676]: time="2026-01-20T14:58:18.725117128Z" level=info msg="Start streaming server" Jan 20 14:58:18.725290 containerd[1676]: time="2026-01-20T14:58:18.725126887Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 14:58:18.725290 containerd[1676]: time="2026-01-20T14:58:18.725179515Z" level=info msg="runtime interface starting up..." Jan 20 14:58:18.725290 containerd[1676]: time="2026-01-20T14:58:18.725185967Z" level=info msg="starting plugins..." Jan 20 14:58:18.725290 containerd[1676]: time="2026-01-20T14:58:18.725204712Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 14:58:18.726535 containerd[1676]: time="2026-01-20T14:58:18.726429662Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 14:58:18.726812 containerd[1676]: time="2026-01-20T14:58:18.726620758Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 14:58:18.727269 containerd[1676]: time="2026-01-20T14:58:18.727196363Z" level=info msg="containerd successfully booted in 0.957194s" Jan 20 14:58:18.728225 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 20 14:58:20.875513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 14:58:20.882838 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 20 14:58:20.888412 systemd[1]: Startup finished in 8.203s (kernel) + 16.171s (initrd) + 15.510s (userspace) = 39.884s.
Jan 20 14:58:20.913097 (kubelet)[1762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 14:58:21.549759 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 20 14:58:21.552043 systemd[1]: Started sshd@0-10.0.0.64:22-10.0.0.1:58194.service - OpenSSH per-connection server daemon (10.0.0.1:58194).
Jan 20 14:58:22.025311 sshd[1773]: Accepted publickey for core from 10.0.0.1 port 58194 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 14:58:22.029615 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 14:58:22.049568 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 20 14:58:22.051405 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 20 14:58:22.058886 systemd-logind[1631]: New session 1 of user core.
Jan 20 14:58:22.215230 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 20 14:58:22.221299 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 20 14:58:22.293473 (systemd)[1780]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 14:58:22.300202 systemd-logind[1631]: New session 2 of user core.
Jan 20 14:58:22.522392 kubelet[1762]: E0120 14:58:22.522063 1762 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 14:58:22.526960 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 14:58:22.527409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 14:58:22.528779 systemd[1]: kubelet.service: Consumed 4.740s CPU time, 257.2M memory peak.
Jan 20 14:58:22.637949 systemd[1780]: Queued start job for default target default.target.
Jan 20 14:58:22.649741 systemd[1780]: Created slice app.slice - User Application Slice.
Jan 20 14:58:22.649824 systemd[1780]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Jan 20 14:58:22.649847 systemd[1780]: Reached target paths.target - Paths.
Jan 20 14:58:22.649991 systemd[1780]: Reached target timers.target - Timers.
Jan 20 14:58:22.652994 systemd[1780]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 20 14:58:22.654804 systemd[1780]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Jan 20 14:58:22.813586 systemd[1780]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 20 14:58:22.813934 systemd[1780]: Reached target sockets.target - Sockets.
Jan 20 14:58:22.819033 systemd[1780]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Jan 20 14:58:22.819281 systemd[1780]: Reached target basic.target - Basic System.
Jan 20 14:58:22.819494 systemd[1780]: Reached target default.target - Main User Target.
Jan 20 14:58:22.819540 systemd[1780]: Startup finished in 490ms.
Jan 20 14:58:22.820510 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 20 14:58:22.842190 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 20 14:58:22.877625 systemd[1]: Started sshd@1-10.0.0.64:22-10.0.0.1:58342.service - OpenSSH per-connection server daemon (10.0.0.1:58342).
Jan 20 14:58:22.965086 sshd[1795]: Accepted publickey for core from 10.0.0.1 port 58342 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 14:58:22.967503 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 14:58:22.975733 systemd-logind[1631]: New session 3 of user core.
Jan 20 14:58:22.990178 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 20 14:58:23.014258 sshd[1799]: Connection closed by 10.0.0.1 port 58342
Jan 20 14:58:23.014874 sshd-session[1795]: pam_unix(sshd:session): session closed for user core
Jan 20 14:58:23.028208 systemd[1]: sshd@1-10.0.0.64:22-10.0.0.1:58342.service: Deactivated successfully.
Jan 20 14:58:23.037529 systemd[1]: session-3.scope: Deactivated successfully.
Jan 20 14:58:23.039266 systemd-logind[1631]: Session 3 logged out. Waiting for processes to exit.
Jan 20 14:58:23.044863 systemd[1]: Started sshd@2-10.0.0.64:22-10.0.0.1:58352.service - OpenSSH per-connection server daemon (10.0.0.1:58352).
Jan 20 14:58:23.046060 systemd-logind[1631]: Removed session 3.
Jan 20 14:58:23.157198 sshd[1805]: Accepted publickey for core from 10.0.0.1 port 58352 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 14:58:23.159283 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 14:58:23.167178 systemd-logind[1631]: New session 4 of user core.
Jan 20 14:58:23.175929 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 20 14:58:23.192959 sshd[1809]: Connection closed by 10.0.0.1 port 58352
Jan 20 14:58:23.193566 sshd-session[1805]: pam_unix(sshd:session): session closed for user core
Jan 20 14:58:23.203959 systemd[1]: sshd@2-10.0.0.64:22-10.0.0.1:58352.service: Deactivated successfully.
Jan 20 14:58:23.207098 systemd[1]: session-4.scope: Deactivated successfully.
Jan 20 14:58:23.208520 systemd-logind[1631]: Session 4 logged out. Waiting for processes to exit.
Jan 20 14:58:23.212966 systemd[1]: Started sshd@3-10.0.0.64:22-10.0.0.1:58360.service - OpenSSH per-connection server daemon (10.0.0.1:58360).
Jan 20 14:58:23.213877 systemd-logind[1631]: Removed session 4.
Jan 20 14:58:23.303016 sshd[1815]: Accepted publickey for core from 10.0.0.1 port 58360 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 14:58:23.305993 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 14:58:23.314955 systemd-logind[1631]: New session 5 of user core.
Jan 20 14:58:23.335160 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 20 14:58:23.367638 sshd[1819]: Connection closed by 10.0.0.1 port 58360
Jan 20 14:58:23.368515 sshd-session[1815]: pam_unix(sshd:session): session closed for user core
Jan 20 14:58:23.382178 systemd[1]: sshd@3-10.0.0.64:22-10.0.0.1:58360.service: Deactivated successfully.
Jan 20 14:58:23.416779 systemd[1]: session-5.scope: Deactivated successfully.
Jan 20 14:58:23.420003 systemd-logind[1631]: Session 5 logged out. Waiting for processes to exit.
Jan 20 14:58:23.432593 systemd[1]: Started sshd@4-10.0.0.64:22-10.0.0.1:58362.service - OpenSSH per-connection server daemon (10.0.0.1:58362).
Jan 20 14:58:23.434605 systemd-logind[1631]: Removed session 5.
Jan 20 14:58:23.674556 sshd[1825]: Accepted publickey for core from 10.0.0.1 port 58362 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 14:58:23.677315 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 14:58:23.688045 systemd-logind[1631]: New session 6 of user core.
Jan 20 14:58:23.701969 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 20 14:58:23.749226 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 20 14:58:23.749951 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 14:58:26.928258 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 20 14:58:27.023944 (dockerd)[1851]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 20 14:58:28.478472 dockerd[1851]: time="2026-01-20T14:58:28.477990589Z" level=info msg="Starting up"
Jan 20 14:58:28.488901 dockerd[1851]: time="2026-01-20T14:58:28.488798619Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 20 14:58:28.533002 dockerd[1851]: time="2026-01-20T14:58:28.532789976Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 20 14:58:28.595487 systemd[1]: var-lib-docker-metacopy\x2dcheck3642456454-merged.mount: Deactivated successfully.
Jan 20 14:58:28.638783 dockerd[1851]: time="2026-01-20T14:58:28.638472896Z" level=info msg="Loading containers: start."
Jan 20 14:58:28.666756 kernel: Initializing XFRM netlink socket
Jan 20 14:58:32.570046 systemd-networkd[1317]: docker0: Link UP
Jan 20 14:58:32.579715 dockerd[1851]: time="2026-01-20T14:58:32.579523882Z" level=info msg="Loading containers: done."
Jan 20 14:58:32.649879 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 20 14:58:32.655539 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 14:58:32.683495 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck732816172-merged.mount: Deactivated successfully.
Jan 20 14:58:32.694897 dockerd[1851]: time="2026-01-20T14:58:32.694564096Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 20 14:58:32.694897 dockerd[1851]: time="2026-01-20T14:58:32.694859468Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 20 14:58:32.695095 dockerd[1851]: time="2026-01-20T14:58:32.695062307Z" level=info msg="Initializing buildkit"
Jan 20 14:58:32.791866 dockerd[1851]: time="2026-01-20T14:58:32.791755085Z" level=info msg="Completed buildkit initialization"
Jan 20 14:58:32.819284 dockerd[1851]: time="2026-01-20T14:58:32.818772104Z" level=info msg="Daemon has completed initialization"
Jan 20 14:58:32.819284 dockerd[1851]: time="2026-01-20T14:58:32.818926842Z" level=info msg="API listen on /run/docker.sock"
Jan 20 14:58:32.821523 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 20 14:58:33.866958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 14:58:33.889189 (kubelet)[2079]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 14:58:34.773458 kubelet[2079]: E0120 14:58:34.772851 2079 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 14:58:34.782495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 14:58:34.782856 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 14:58:34.783805 systemd[1]: kubelet.service: Consumed 1.832s CPU time, 111M memory peak.
Jan 20 14:58:35.483568 containerd[1676]: time="2026-01-20T14:58:35.482797946Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\""
Jan 20 14:58:36.967573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3960571911.mount: Deactivated successfully.
Jan 20 14:58:39.976513 containerd[1676]: time="2026-01-20T14:58:39.976227425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 14:58:39.978795 containerd[1676]: time="2026-01-20T14:58:39.976328844Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=26193410"
Jan 20 14:58:39.981456 containerd[1676]: time="2026-01-20T14:58:39.981207775Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 14:58:39.988604 containerd[1676]: time="2026-01-20T14:58:39.988478271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 14:58:39.990993 containerd[1676]: time="2026-01-20T14:58:39.990863308Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 4.507631783s"
Jan 20 14:58:39.991116 containerd[1676]: time="2026-01-20T14:58:39.991083830Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\""
Jan 20 14:58:39.996911 containerd[1676]: time="2026-01-20T14:58:39.996884199Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Jan 20 14:58:42.754283 containerd[1676]: time="2026-01-20T14:58:42.753780752Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285"
Jan 20 14:58:42.756088 containerd[1676]: time="2026-01-20T14:58:42.754786589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 14:58:42.758831 containerd[1676]: time="2026-01-20T14:58:42.758567960Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 14:58:42.765277 containerd[1676]: time="2026-01-20T14:58:42.765079770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 14:58:42.767506 containerd[1676]: time="2026-01-20T14:58:42.767235165Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 2.770159189s"
Jan 20 14:58:42.767506 containerd[1676]: time="2026-01-20T14:58:42.767470304Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\""
Jan 20 14:58:42.771663 containerd[1676]: time="2026-01-20T14:58:42.771599719Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Jan 20 14:58:44.418046 containerd[1676]: time="2026-01-20T14:58:44.417463598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 14:58:44.420154 containerd[1676]: time="2026-01-20T14:58:44.420105518Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15717792"
Jan 20 14:58:44.423311 containerd[1676]: time="2026-01-20T14:58:44.422971653Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 14:58:44.432805 containerd[1676]: time="2026-01-20T14:58:44.432555846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 14:58:44.439334 containerd[1676]: time="2026-01-20T14:58:44.436721221Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.665001327s"
Jan 20 14:58:44.439334 containerd[1676]: time="2026-01-20T14:58:44.436830606Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\""
Jan 20 14:58:44.440225 containerd[1676]: time="2026-01-20T14:58:44.439848403Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Jan 20 14:58:44.899481 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 20 14:58:44.904033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 14:58:45.338273 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 14:58:45.369210 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 14:58:45.580937 kubelet[2170]: E0120 14:58:45.580809 2170 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 14:58:45.585933 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 14:58:45.586146 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 14:58:45.587092 systemd[1]: kubelet.service: Consumed 590ms CPU time, 110.2M memory peak.
Jan 20 14:58:46.218590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2421967893.mount: Deactivated successfully.
Jan 20 14:58:51.302388 containerd[1676]: time="2026-01-20T14:58:51.301269951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 14:58:51.309455 containerd[1676]: time="2026-01-20T14:58:51.303320964Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25961571"
Jan 20 14:58:51.309455 containerd[1676]: time="2026-01-20T14:58:51.308224167Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 14:58:51.317203 containerd[1676]: time="2026-01-20T14:58:51.316615809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 14:58:51.319333 containerd[1676]: time="2026-01-20T14:58:51.319128081Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 6.879098451s"
Jan 20 14:58:51.320563 containerd[1676]: time="2026-01-20T14:58:51.320171039Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\""
Jan 20 14:58:51.347540 containerd[1676]: time="2026-01-20T14:58:51.347355714Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Jan 20 14:58:52.821182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2038305333.mount: Deactivated successfully.
Jan 20 14:58:55.659155 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 20 14:58:55.664498 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 14:58:55.789367 containerd[1676]: time="2026-01-20T14:58:55.789066382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 14:58:55.792435 containerd[1676]: time="2026-01-20T14:58:55.792217267Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22039735"
Jan 20 14:58:55.794072 containerd[1676]: time="2026-01-20T14:58:55.794033960Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 14:58:55.798978 containerd[1676]: time="2026-01-20T14:58:55.798875049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 14:58:55.800614 containerd[1676]: time="2026-01-20T14:58:55.800494532Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 4.452863108s"
Jan 20 14:58:55.800614 containerd[1676]: time="2026-01-20T14:58:55.800586618Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Jan 20 14:58:55.804578 containerd[1676]: time="2026-01-20T14:58:55.804459928Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Jan 20 14:58:56.266258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 14:58:56.283333 (kubelet)[2242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 14:58:56.490419 kubelet[2242]: E0120 14:58:56.489395 2242 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 14:58:56.931450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount28018227.mount: Deactivated successfully.
Jan 20 14:58:56.933862 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 14:58:56.934291 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 14:58:56.936077 systemd[1]: kubelet.service: Consumed 709ms CPU time, 110.2M memory peak.
Jan 20 14:58:56.952874 containerd[1676]: time="2026-01-20T14:58:56.952819035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 14:58:56.954619 containerd[1676]: time="2026-01-20T14:58:56.954385244Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0"
Jan 20 14:58:56.956241 containerd[1676]: time="2026-01-20T14:58:56.956178819Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 14:58:56.960682 containerd[1676]: time="2026-01-20T14:58:56.960586154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 14:58:56.961786 containerd[1676]: time="2026-01-20T14:58:56.961499347Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.156925962s"
Jan 20 14:58:56.961786 containerd[1676]: time="2026-01-20T14:58:56.961586744Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Jan 20 14:58:56.964638 containerd[1676]: time="2026-01-20T14:58:56.964544210Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Jan 20 14:58:57.558914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1829788148.mount: Deactivated successfully.
Jan 20 14:59:01.030789 update_engine[1633]: I20260120 14:59:01.028343 1633 update_attempter.cc:509] Updating boot flags...
Jan 20 14:59:03.301634 containerd[1676]: time="2026-01-20T14:59:03.301317181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 14:59:03.303627 containerd[1676]: time="2026-01-20T14:59:03.302709796Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=72348001"
Jan 20 14:59:03.305106 containerd[1676]: time="2026-01-20T14:59:03.305009762Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 14:59:03.310983 containerd[1676]: time="2026-01-20T14:59:03.310890230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 14:59:03.311836 containerd[1676]: time="2026-01-20T14:59:03.311596237Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 6.346811248s"
Jan 20 14:59:03.311836 containerd[1676]: time="2026-01-20T14:59:03.311777511Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Jan 20 14:59:07.152572 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 20 14:59:07.156123 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 14:59:07.462437 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 14:59:07.486245 (kubelet)[2359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 14:59:07.624596 kubelet[2359]: E0120 14:59:07.624520 2359 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 14:59:07.629551 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 14:59:07.630131 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 14:59:07.630293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 14:59:07.630986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 14:59:07.631913 systemd[1]: kubelet.service: Consumed 388ms CPU time, 110.1M memory peak.
Jan 20 14:59:07.638373 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 14:59:07.691162 systemd[1]: Reload requested from client PID 2376 ('systemctl') (unit session-6.scope)...
Jan 20 14:59:07.691502 systemd[1]: Reloading...
Jan 20 14:59:07.854816 zram_generator::config[2418]: No configuration found.
Jan 20 14:59:08.262997 systemd[1]: Reloading finished in 570 ms.
Jan 20 14:59:08.389926 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 20 14:59:08.390089 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 20 14:59:08.390773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 14:59:08.390828 systemd[1]: kubelet.service: Consumed 200ms CPU time, 98.4M memory peak.
Jan 20 14:59:08.393542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 14:59:08.714542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 14:59:08.731254 (kubelet)[2469]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 20 14:59:08.822769 kubelet[2469]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 20 14:59:08.822769 kubelet[2469]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 14:59:08.823242 kubelet[2469]: I0120 14:59:08.822840 2469 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 20 14:59:09.700341 kubelet[2469]: I0120 14:59:09.699766 2469 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 20 14:59:09.700341 kubelet[2469]: I0120 14:59:09.700132 2469 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 20 14:59:09.700341 kubelet[2469]: I0120 14:59:09.700512 2469 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 20 14:59:09.700341 kubelet[2469]: I0120 14:59:09.700535 2469 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 20 14:59:09.702499 kubelet[2469]: I0120 14:59:09.701441 2469 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 20 14:59:09.838159 kubelet[2469]: I0120 14:59:09.837038 2469 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 20 14:59:09.857974 kubelet[2469]: E0120 14:59:09.839226 2469 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 20 14:59:10.022131 kubelet[2469]: I0120 14:59:10.015125 2469 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 20 14:59:10.094536 kubelet[2469]: I0120 14:59:10.093903 2469 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 20 14:59:10.096867 kubelet[2469]: I0120 14:59:10.095106 2469 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 20 14:59:10.096867 kubelet[2469]: I0120 14:59:10.095168 2469 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 20 14:59:10.096867 kubelet[2469]: I0120 14:59:10.095789 2469 topology_manager.go:138] "Creating topology manager with none policy"
Jan 20 14:59:10.096867 kubelet[2469]: I0120 14:59:10.095806 2469 container_manager_linux.go:306] "Creating device plugin manager"
Jan 20 14:59:10.097769 kubelet[2469]: I0120 14:59:10.096480 2469 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 20 14:59:10.112189 kubelet[2469]: I0120 14:59:10.111138 2469 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 14:59:10.114331 kubelet[2469]: I0120 14:59:10.114133 2469 kubelet.go:475] "Attempting to sync node with API server"
Jan 20 14:59:10.114331 kubelet[2469]: I0120 14:59:10.114169 2469 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 20 14:59:10.114331 kubelet[2469]: I0120 14:59:10.114312 2469 kubelet.go:387] "Adding apiserver pod source"
Jan 20 14:59:10.154805 kubelet[2469]: I0120 14:59:10.121582 2469 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 20 14:59:10.163043 kubelet[2469]: E0120 14:59:10.162339 2469 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 20 14:59:10.163043 kubelet[2469]: E0120 14:59:10.162514 2469 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 20 14:59:10.260880 kubelet[2469]: I0120 14:59:10.260265 2469 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Jan 20 14:59:10.264925 kubelet[2469]: I0120 14:59:10.264206 2469 kubelet.go:940] "Not starting ClusterTrustBundle informer because
we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 14:59:10.264925 kubelet[2469]: I0120 14:59:10.264471 2469 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 20 14:59:10.265472 kubelet[2469]: W0120 14:59:10.265340 2469 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 14:59:10.420466 kubelet[2469]: I0120 14:59:10.419885 2469 server.go:1262] "Started kubelet" Jan 20 14:59:10.422001 kubelet[2469]: I0120 14:59:10.421277 2469 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 14:59:10.422001 kubelet[2469]: I0120 14:59:10.421492 2469 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 20 14:59:10.426797 kubelet[2469]: I0120 14:59:10.426266 2469 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 14:59:10.426797 kubelet[2469]: I0120 14:59:10.426723 2469 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 14:59:10.430737 kubelet[2469]: I0120 14:59:10.430161 2469 server.go:310] "Adding debug handlers to kubelet server" Jan 20 14:59:10.432434 kubelet[2469]: E0120 14:59:10.430813 2469 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.64:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.64:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c786a0614181b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 14:59:10.413608987 +0000 UTC 
m=+1.660080882,LastTimestamp:2026-01-20 14:59:10.413608987 +0000 UTC m=+1.660080882,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 14:59:10.432434 kubelet[2469]: I0120 14:59:10.432407 2469 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 14:59:10.433087 kubelet[2469]: I0120 14:59:10.432471 2469 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 14:59:10.434916 kubelet[2469]: I0120 14:59:10.434815 2469 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 20 14:59:10.436342 kubelet[2469]: E0120 14:59:10.436276 2469 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="200ms" Jan 20 14:59:10.437254 kubelet[2469]: E0120 14:59:10.434890 2469 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 14:59:10.437254 kubelet[2469]: I0120 14:59:10.437001 2469 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 20 14:59:10.467340 kubelet[2469]: I0120 14:59:10.466931 2469 reconciler.go:29] "Reconciler: start to sync state" Jan 20 14:59:10.502527 kubelet[2469]: E0120 14:59:10.502067 2469 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 14:59:10.504236 kubelet[2469]: E0120 14:59:10.503123 2469 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 14:59:10.504236 kubelet[2469]: I0120 14:59:10.503731 2469 factory.go:223] Registration of the systemd container factory successfully Jan 20 14:59:10.504236 kubelet[2469]: I0120 14:59:10.503910 2469 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 14:59:10.515937 kubelet[2469]: I0120 14:59:10.515333 2469 factory.go:223] Registration of the containerd container factory successfully Jan 20 14:59:10.606927 kubelet[2469]: E0120 14:59:10.604787 2469 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 14:59:10.686062 kubelet[2469]: E0120 14:59:10.680186 2469 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="400ms" Jan 20 14:59:10.730148 kubelet[2469]: E0120 14:59:10.712290 2469 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 14:59:10.815604 kubelet[2469]: E0120 14:59:10.815244 2469 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 14:59:10.995904 kubelet[2469]: E0120 14:59:10.985195 2469 kubelet_node_status.go:404] "Error getting the current node from lister" err="node 
\"localhost\" not found" Jan 20 14:59:11.096222 kubelet[2469]: E0120 14:59:11.091134 2469 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 14:59:11.100972 kubelet[2469]: E0120 14:59:11.100623 2469 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="800ms" Jan 20 14:59:11.189869 kubelet[2469]: E0120 14:59:11.189267 2469 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 14:59:11.206146 kubelet[2469]: E0120 14:59:11.205112 2469 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 14:59:11.226449 kubelet[2469]: I0120 14:59:11.225223 2469 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 14:59:11.275188 kubelet[2469]: I0120 14:59:11.228736 2469 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 14:59:11.275188 kubelet[2469]: I0120 14:59:11.229041 2469 state_mem.go:36] "Initialized new in-memory state store" Jan 20 14:59:11.283744 kubelet[2469]: I0120 14:59:11.280923 2469 policy_none.go:49] "None policy: Start" Jan 20 14:59:11.283744 kubelet[2469]: I0120 14:59:11.281008 2469 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 20 14:59:11.283744 kubelet[2469]: I0120 14:59:11.281054 2469 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 20 14:59:11.283744 kubelet[2469]: I0120 14:59:11.283601 2469 policy_none.go:47] "Start" Jan 20 14:59:11.285949 kubelet[2469]: I0120 
14:59:11.285837 2469 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 20 14:59:11.292629 kubelet[2469]: I0120 14:59:11.292560 2469 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 20 14:59:11.292629 kubelet[2469]: I0120 14:59:11.292631 2469 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 20 14:59:11.294466 kubelet[2469]: E0120 14:59:11.293766 2469 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 14:59:11.294466 kubelet[2469]: I0120 14:59:11.293909 2469 kubelet.go:2427] "Starting kubelet main sync loop" Jan 20 14:59:11.294466 kubelet[2469]: E0120 14:59:11.293956 2469 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 14:59:11.298270 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 14:59:11.313912 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 20 14:59:11.317607 kubelet[2469]: E0120 14:59:11.317526 2469 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 14:59:11.335222 kubelet[2469]: E0120 14:59:11.334599 2469 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 14:59:11.335908 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 14:59:11.339507 kubelet[2469]: E0120 14:59:11.339212 2469 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 14:59:11.340250 kubelet[2469]: I0120 14:59:11.340169 2469 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 14:59:11.340499 kubelet[2469]: I0120 14:59:11.340304 2469 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 14:59:11.341592 kubelet[2469]: I0120 14:59:11.341265 2469 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 14:59:11.344324 kubelet[2469]: E0120 14:59:11.344206 2469 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 14:59:11.344847 kubelet[2469]: E0120 14:59:11.344782 2469 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 14:59:11.414060 systemd[1]: Created slice kubepods-burstable-pod6c0ff9082b5a648737bb83feed134b46.slice - libcontainer container kubepods-burstable-pod6c0ff9082b5a648737bb83feed134b46.slice. 
Jan 20 14:59:11.419158 kubelet[2469]: E0120 14:59:11.419083 2469 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 14:59:11.444929 kubelet[2469]: I0120 14:59:11.444546 2469 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 14:59:11.445924 kubelet[2469]: E0120 14:59:11.445516 2469 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 14:59:11.445924 kubelet[2469]: E0120 14:59:11.445774 2469 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jan 20 14:59:11.462838 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Jan 20 14:59:11.469123 kubelet[2469]: E0120 14:59:11.469092 2469 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 14:59:11.469987 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. 
Jan 20 14:59:11.474534 kubelet[2469]: E0120 14:59:11.474337 2469 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 14:59:11.565983 kubelet[2469]: I0120 14:59:11.561764 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 14:59:11.565983 kubelet[2469]: I0120 14:59:11.562456 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 14:59:11.565983 kubelet[2469]: I0120 14:59:11.562523 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 14:59:11.565983 kubelet[2469]: I0120 14:59:11.563144 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 20 14:59:11.565983 kubelet[2469]: I0120 14:59:11.563253 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/6c0ff9082b5a648737bb83feed134b46-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c0ff9082b5a648737bb83feed134b46\") " pod="kube-system/kube-apiserver-localhost" Jan 20 14:59:11.566957 kubelet[2469]: I0120 14:59:11.563285 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 14:59:11.566957 kubelet[2469]: I0120 14:59:11.563308 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 14:59:11.566957 kubelet[2469]: I0120 14:59:11.563544 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c0ff9082b5a648737bb83feed134b46-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c0ff9082b5a648737bb83feed134b46\") " pod="kube-system/kube-apiserver-localhost" Jan 20 14:59:11.566957 kubelet[2469]: I0120 14:59:11.563637 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c0ff9082b5a648737bb83feed134b46-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6c0ff9082b5a648737bb83feed134b46\") " pod="kube-system/kube-apiserver-localhost" Jan 20 14:59:11.654161 kubelet[2469]: I0120 14:59:11.654116 2469 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 14:59:11.655382 kubelet[2469]: E0120 
14:59:11.655185 2469 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jan 20 14:59:11.813154 kubelet[2469]: E0120 14:59:11.812628 2469 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:11.816305 kubelet[2469]: E0120 14:59:11.816171 2469 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:11.818160 containerd[1676]: time="2026-01-20T14:59:11.818009948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 20 14:59:11.818975 containerd[1676]: time="2026-01-20T14:59:11.818044961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6c0ff9082b5a648737bb83feed134b46,Namespace:kube-system,Attempt:0,}" Jan 20 14:59:11.821361 kubelet[2469]: E0120 14:59:11.819211 2469 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:11.830591 containerd[1676]: time="2026-01-20T14:59:11.830314634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 20 14:59:11.903502 kubelet[2469]: E0120 14:59:11.903321 2469 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="1.6s" Jan 20 14:59:12.014636 kubelet[2469]: 
E0120 14:59:12.014556 2469 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 14:59:12.060213 kubelet[2469]: I0120 14:59:12.060024 2469 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 14:59:12.060763 kubelet[2469]: E0120 14:59:12.060554 2469 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jan 20 14:59:12.526118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount28782449.mount: Deactivated successfully. Jan 20 14:59:12.537569 containerd[1676]: time="2026-01-20T14:59:12.537307779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 14:59:12.542761 containerd[1676]: time="2026-01-20T14:59:12.542453805Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 14:59:12.545844 containerd[1676]: time="2026-01-20T14:59:12.545630705Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 14:59:12.548765 containerd[1676]: time="2026-01-20T14:59:12.548516277Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 14:59:12.550128 containerd[1676]: 
time="2026-01-20T14:59:12.549953161Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 14:59:12.551362 containerd[1676]: time="2026-01-20T14:59:12.551175762Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 14:59:12.553237 containerd[1676]: time="2026-01-20T14:59:12.553072433Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 14:59:12.554613 containerd[1676]: time="2026-01-20T14:59:12.554491033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 14:59:12.560528 containerd[1676]: time="2026-01-20T14:59:12.560280966Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 725.263932ms" Jan 20 14:59:12.563962 containerd[1676]: time="2026-01-20T14:59:12.563779676Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 719.929415ms" Jan 20 14:59:12.567162 containerd[1676]: time="2026-01-20T14:59:12.567090757Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 726.786313ms" Jan 20 14:59:12.634398 containerd[1676]: time="2026-01-20T14:59:12.633029106Z" level=info msg="connecting to shim 1474fae6a9ce0a752ac81e84be56dbf5fbf040e1f22b61d5a1118457a6eabd87" address="unix:///run/containerd/s/c79fc53e6cf9e70c0fd61b5d8f8bf78c0893bdacd34268b2fa126c7d16598279" namespace=k8s.io protocol=ttrpc version=3 Jan 20 14:59:12.650559 containerd[1676]: time="2026-01-20T14:59:12.650487760Z" level=info msg="connecting to shim d1596f8d15baf7b59ad4285d3629e1506c6284fcc506fafdaad540c5236f2229" address="unix:///run/containerd/s/99a98feb2726aea94177ef0641c47dc08619da8f6de5e100275e2eca8b952812" namespace=k8s.io protocol=ttrpc version=3 Jan 20 14:59:12.798119 containerd[1676]: time="2026-01-20T14:59:12.795341796Z" level=info msg="connecting to shim 3503b21131e926da6fbc8f30daa518b99bc93365ad56b1052d04773f5c0d6c81" address="unix:///run/containerd/s/d55a31da77f6ea8541aafcbfea50164fbe358a6ead3c0653b5edf2ebe7e01655" namespace=k8s.io protocol=ttrpc version=3 Jan 20 14:59:12.869285 kubelet[2469]: I0120 14:59:12.868959 2469 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 14:59:12.870308 kubelet[2469]: E0120 14:59:12.870042 2469 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jan 20 14:59:12.878926 kubelet[2469]: E0120 14:59:12.878623 2469 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 14:59:12.883029 systemd[1]: Started 
cri-containerd-1474fae6a9ce0a752ac81e84be56dbf5fbf040e1f22b61d5a1118457a6eabd87.scope - libcontainer container 1474fae6a9ce0a752ac81e84be56dbf5fbf040e1f22b61d5a1118457a6eabd87. Jan 20 14:59:13.048047 systemd[1]: Started cri-containerd-3503b21131e926da6fbc8f30daa518b99bc93365ad56b1052d04773f5c0d6c81.scope - libcontainer container 3503b21131e926da6fbc8f30daa518b99bc93365ad56b1052d04773f5c0d6c81. Jan 20 14:59:13.085917 systemd[1]: Started cri-containerd-d1596f8d15baf7b59ad4285d3629e1506c6284fcc506fafdaad540c5236f2229.scope - libcontainer container d1596f8d15baf7b59ad4285d3629e1506c6284fcc506fafdaad540c5236f2229. Jan 20 14:59:13.110823 containerd[1676]: time="2026-01-20T14:59:13.110481685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1474fae6a9ce0a752ac81e84be56dbf5fbf040e1f22b61d5a1118457a6eabd87\"" Jan 20 14:59:13.118403 kubelet[2469]: E0120 14:59:13.118280 2469 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:13.146020 containerd[1676]: time="2026-01-20T14:59:13.145826617Z" level=info msg="CreateContainer within sandbox \"1474fae6a9ce0a752ac81e84be56dbf5fbf040e1f22b61d5a1118457a6eabd87\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 14:59:13.169759 containerd[1676]: time="2026-01-20T14:59:13.168976809Z" level=info msg="Container c93847a082a18abe3ccbb0658e3f828a629233e4b71ec2822bfbd5a34a20405d: CDI devices from CRI Config.CDIDevices: []" Jan 20 14:59:13.179853 containerd[1676]: time="2026-01-20T14:59:13.179823626Z" level=info msg="CreateContainer within sandbox \"1474fae6a9ce0a752ac81e84be56dbf5fbf040e1f22b61d5a1118457a6eabd87\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"c93847a082a18abe3ccbb0658e3f828a629233e4b71ec2822bfbd5a34a20405d\"" Jan 20 14:59:13.183797 containerd[1676]: time="2026-01-20T14:59:13.183768999Z" level=info msg="StartContainer for \"c93847a082a18abe3ccbb0658e3f828a629233e4b71ec2822bfbd5a34a20405d\"" Jan 20 14:59:13.185955 containerd[1676]: time="2026-01-20T14:59:13.185926157Z" level=info msg="connecting to shim c93847a082a18abe3ccbb0658e3f828a629233e4b71ec2822bfbd5a34a20405d" address="unix:///run/containerd/s/c79fc53e6cf9e70c0fd61b5d8f8bf78c0893bdacd34268b2fa126c7d16598279" protocol=ttrpc version=3 Jan 20 14:59:13.398308 containerd[1676]: time="2026-01-20T14:59:13.396788321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6c0ff9082b5a648737bb83feed134b46,Namespace:kube-system,Attempt:0,} returns sandbox id \"3503b21131e926da6fbc8f30daa518b99bc93365ad56b1052d04773f5c0d6c81\"" Jan 20 14:59:13.398768 kubelet[2469]: E0120 14:59:13.398560 2469 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:13.409896 containerd[1676]: time="2026-01-20T14:59:13.409781167Z" level=info msg="CreateContainer within sandbox \"3503b21131e926da6fbc8f30daa518b99bc93365ad56b1052d04773f5c0d6c81\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 14:59:13.413527 systemd[1]: Started cri-containerd-c93847a082a18abe3ccbb0658e3f828a629233e4b71ec2822bfbd5a34a20405d.scope - libcontainer container c93847a082a18abe3ccbb0658e3f828a629233e4b71ec2822bfbd5a34a20405d. 
Jan 20 14:59:13.443077 containerd[1676]: time="2026-01-20T14:59:13.442936311Z" level=info msg="Container cfeed1046715e7cac7966633b5abdc6dc35c8d59da8f8453ca1644f50250e6d9: CDI devices from CRI Config.CDIDevices: []" Jan 20 14:59:13.461819 containerd[1676]: time="2026-01-20T14:59:13.461627135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1596f8d15baf7b59ad4285d3629e1506c6284fcc506fafdaad540c5236f2229\"" Jan 20 14:59:13.462581 containerd[1676]: time="2026-01-20T14:59:13.462498785Z" level=info msg="CreateContainer within sandbox \"3503b21131e926da6fbc8f30daa518b99bc93365ad56b1052d04773f5c0d6c81\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cfeed1046715e7cac7966633b5abdc6dc35c8d59da8f8453ca1644f50250e6d9\"" Jan 20 14:59:13.464060 containerd[1676]: time="2026-01-20T14:59:13.463924748Z" level=info msg="StartContainer for \"cfeed1046715e7cac7966633b5abdc6dc35c8d59da8f8453ca1644f50250e6d9\"" Jan 20 14:59:13.465056 kubelet[2469]: E0120 14:59:13.464982 2469 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:13.470908 containerd[1676]: time="2026-01-20T14:59:13.470868772Z" level=info msg="connecting to shim cfeed1046715e7cac7966633b5abdc6dc35c8d59da8f8453ca1644f50250e6d9" address="unix:///run/containerd/s/d55a31da77f6ea8541aafcbfea50164fbe358a6ead3c0653b5edf2ebe7e01655" protocol=ttrpc version=3 Jan 20 14:59:13.482732 containerd[1676]: time="2026-01-20T14:59:13.481747246Z" level=info msg="CreateContainer within sandbox \"d1596f8d15baf7b59ad4285d3629e1506c6284fcc506fafdaad540c5236f2229\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 14:59:13.499957 containerd[1676]: time="2026-01-20T14:59:13.499852508Z" level=info msg="Container 
d84deb4a2bcc6a140d6c61254235d9ae1a96221c8ddb5de8e4e23668ecacf78e: CDI devices from CRI Config.CDIDevices: []" Jan 20 14:59:13.508897 kubelet[2469]: E0120 14:59:13.508552 2469 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="3.2s" Jan 20 14:59:13.521762 containerd[1676]: time="2026-01-20T14:59:13.521549370Z" level=info msg="CreateContainer within sandbox \"d1596f8d15baf7b59ad4285d3629e1506c6284fcc506fafdaad540c5236f2229\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d84deb4a2bcc6a140d6c61254235d9ae1a96221c8ddb5de8e4e23668ecacf78e\"" Jan 20 14:59:13.522888 containerd[1676]: time="2026-01-20T14:59:13.522861779Z" level=info msg="StartContainer for \"d84deb4a2bcc6a140d6c61254235d9ae1a96221c8ddb5de8e4e23668ecacf78e\"" Jan 20 14:59:13.524778 containerd[1676]: time="2026-01-20T14:59:13.524622341Z" level=info msg="connecting to shim d84deb4a2bcc6a140d6c61254235d9ae1a96221c8ddb5de8e4e23668ecacf78e" address="unix:///run/containerd/s/99a98feb2726aea94177ef0641c47dc08619da8f6de5e100275e2eca8b952812" protocol=ttrpc version=3 Jan 20 14:59:13.552116 systemd[1]: Started cri-containerd-cfeed1046715e7cac7966633b5abdc6dc35c8d59da8f8453ca1644f50250e6d9.scope - libcontainer container cfeed1046715e7cac7966633b5abdc6dc35c8d59da8f8453ca1644f50250e6d9. Jan 20 14:59:13.596015 systemd[1]: Started cri-containerd-d84deb4a2bcc6a140d6c61254235d9ae1a96221c8ddb5de8e4e23668ecacf78e.scope - libcontainer container d84deb4a2bcc6a140d6c61254235d9ae1a96221c8ddb5de8e4e23668ecacf78e. 
Jan 20 14:59:13.683116 containerd[1676]: time="2026-01-20T14:59:13.682913529Z" level=info msg="StartContainer for \"c93847a082a18abe3ccbb0658e3f828a629233e4b71ec2822bfbd5a34a20405d\" returns successfully" Jan 20 14:59:14.320397 kubelet[2469]: E0120 14:59:14.320143 2469 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 14:59:14.325992 kubelet[2469]: E0120 14:59:14.322180 2469 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 14:59:14.394539 containerd[1676]: time="2026-01-20T14:59:14.394404195Z" level=info msg="StartContainer for \"d84deb4a2bcc6a140d6c61254235d9ae1a96221c8ddb5de8e4e23668ecacf78e\" returns successfully" Jan 20 14:59:14.396228 containerd[1676]: time="2026-01-20T14:59:14.396204534Z" level=info msg="StartContainer for \"cfeed1046715e7cac7966633b5abdc6dc35c8d59da8f8453ca1644f50250e6d9\" returns successfully" Jan 20 14:59:14.412374 kubelet[2469]: E0120 14:59:14.412246 2469 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 14:59:14.462896 kubelet[2469]: E0120 14:59:14.462761 2469 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Jan 20 14:59:14.463065 kubelet[2469]: E0120 14:59:14.463018 2469 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:14.470356 kubelet[2469]: E0120 14:59:14.470325 2469 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 14:59:14.471768 kubelet[2469]: E0120 14:59:14.471174 2469 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:14.478473 kubelet[2469]: I0120 14:59:14.478374 2469 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 14:59:14.479177 kubelet[2469]: E0120 14:59:14.478635 2469 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 14:59:14.480042 kubelet[2469]: E0120 14:59:14.479973 2469 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jan 20 14:59:14.480850 kubelet[2469]: E0120 14:59:14.480764 2469 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:15.492816 kubelet[2469]: E0120 14:59:15.492320 2469 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 14:59:15.494118 kubelet[2469]: E0120 14:59:15.493420 2469 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 20 14:59:15.498706 kubelet[2469]: E0120 14:59:15.498046 2469 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 14:59:15.498706 kubelet[2469]: E0120 14:59:15.498606 2469 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:15.500081 kubelet[2469]: E0120 14:59:15.500017 2469 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 14:59:15.500271 kubelet[2469]: E0120 14:59:15.500197 2469 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:16.502958 kubelet[2469]: E0120 14:59:16.502147 2469 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 14:59:16.502958 kubelet[2469]: E0120 14:59:16.502422 2469 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:16.506067 kubelet[2469]: E0120 14:59:16.504107 2469 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 14:59:16.506067 kubelet[2469]: E0120 14:59:16.504246 2469 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:17.699345 kubelet[2469]: I0120 14:59:17.698826 2469 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 14:59:19.429394 kubelet[2469]: E0120 
14:59:19.428890 2469 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 20 14:59:19.669827 kubelet[2469]: E0120 14:59:19.669050 2469 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c786a0614181b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 14:59:10.413608987 +0000 UTC m=+1.660080882,LastTimestamp:2026-01-20 14:59:10.413608987 +0000 UTC m=+1.660080882,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 14:59:19.690636 kubelet[2469]: I0120 14:59:19.690184 2469 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 14:59:19.690636 kubelet[2469]: E0120 14:59:19.690281 2469 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 14:59:19.764758 kubelet[2469]: E0120 14:59:19.764411 2469 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 14:59:19.894921 kubelet[2469]: E0120 14:59:19.890337 2469 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 14:59:20.005207 kubelet[2469]: E0120 14:59:20.002501 2469 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 14:59:20.113629 kubelet[2469]: E0120 14:59:20.111434 2469 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 14:59:20.216843 
kubelet[2469]: E0120 14:59:20.215840 2469 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 14:59:20.343880 kubelet[2469]: E0120 14:59:20.325467 2469 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 14:59:20.438278 kubelet[2469]: I0120 14:59:20.438044 2469 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 14:59:20.478735 kubelet[2469]: I0120 14:59:20.478486 2469 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 14:59:20.490558 kubelet[2469]: I0120 14:59:20.490422 2469 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 14:59:21.343632 kubelet[2469]: I0120 14:59:21.343405 2469 apiserver.go:52] "Watching apiserver" Jan 20 14:59:21.351342 kubelet[2469]: E0120 14:59:21.351245 2469 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:21.352498 kubelet[2469]: E0120 14:59:21.352415 2469 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:21.352639 kubelet[2469]: E0120 14:59:21.352513 2469 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:21.438179 kubelet[2469]: I0120 14:59:21.437941 2469 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 14:59:22.545131 systemd[1]: Reload requested from client PID 2757 ('systemctl') (unit session-6.scope)... Jan 20 14:59:22.545197 systemd[1]: Reloading... 
Jan 20 14:59:22.710777 zram_generator::config[2803]: No configuration found. Jan 20 14:59:22.880783 kubelet[2469]: E0120 14:59:22.879922 2469 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:23.172094 systemd[1]: Reloading finished in 626 ms. Jan 20 14:59:23.230940 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 14:59:23.266470 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 14:59:23.267390 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 14:59:23.267524 systemd[1]: kubelet.service: Consumed 4.769s CPU time, 126.2M memory peak. Jan 20 14:59:23.271956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 14:59:23.659272 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 14:59:23.672415 (kubelet)[2848]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 14:59:23.860386 kubelet[2848]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 14:59:23.860386 kubelet[2848]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 14:59:23.862096 kubelet[2848]: I0120 14:59:23.861900 2848 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 14:59:23.895143 kubelet[2848]: I0120 14:59:23.895048 2848 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 20 14:59:23.895143 kubelet[2848]: I0120 14:59:23.895088 2848 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 14:59:23.895143 kubelet[2848]: I0120 14:59:23.895127 2848 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 20 14:59:23.895143 kubelet[2848]: I0120 14:59:23.895137 2848 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 14:59:23.897451 kubelet[2848]: I0120 14:59:23.897339 2848 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 14:59:23.900951 kubelet[2848]: I0120 14:59:23.900839 2848 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 20 14:59:23.912157 kubelet[2848]: I0120 14:59:23.911438 2848 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 14:59:23.938720 kubelet[2848]: I0120 14:59:23.938539 2848 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 14:59:23.967179 kubelet[2848]: I0120 14:59:23.967092 2848 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 20 14:59:23.967955 kubelet[2848]: I0120 14:59:23.967790 2848 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 14:59:23.968238 kubelet[2848]: I0120 14:59:23.967889 2848 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 14:59:23.968238 kubelet[2848]: I0120 14:59:23.968198 2848 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 14:59:23.968238 
kubelet[2848]: I0120 14:59:23.968217 2848 container_manager_linux.go:306] "Creating device plugin manager" Jan 20 14:59:23.968814 kubelet[2848]: I0120 14:59:23.968309 2848 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 20 14:59:23.969434 kubelet[2848]: I0120 14:59:23.969396 2848 state_mem.go:36] "Initialized new in-memory state store" Jan 20 14:59:23.969960 kubelet[2848]: I0120 14:59:23.969904 2848 kubelet.go:475] "Attempting to sync node with API server" Jan 20 14:59:23.969996 kubelet[2848]: I0120 14:59:23.969970 2848 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 14:59:23.970030 kubelet[2848]: I0120 14:59:23.970007 2848 kubelet.go:387] "Adding apiserver pod source" Jan 20 14:59:23.970056 kubelet[2848]: I0120 14:59:23.970047 2848 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 14:59:23.975256 kubelet[2848]: I0120 14:59:23.975215 2848 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 14:59:23.978731 kubelet[2848]: I0120 14:59:23.978031 2848 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 14:59:23.978731 kubelet[2848]: I0120 14:59:23.978067 2848 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 20 14:59:23.994396 kubelet[2848]: I0120 14:59:23.994356 2848 server.go:1262] "Started kubelet" Jan 20 14:59:23.997729 kubelet[2848]: I0120 14:59:23.997454 2848 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 14:59:23.997729 kubelet[2848]: I0120 14:59:23.997476 2848 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 14:59:23.998012 kubelet[2848]: I0120 14:59:23.997841 2848 server_v1.go:49] 
"podresources" method="list" useActivePods=true Jan 20 14:59:24.004771 kubelet[2848]: I0120 14:59:24.004166 2848 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 14:59:24.004771 kubelet[2848]: I0120 14:59:24.004262 2848 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 14:59:24.007448 kubelet[2848]: I0120 14:59:24.007372 2848 server.go:310] "Adding debug handlers to kubelet server" Jan 20 14:59:24.009746 kubelet[2848]: I0120 14:59:24.009403 2848 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 20 14:59:24.009746 kubelet[2848]: I0120 14:59:24.009501 2848 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 20 14:59:24.009879 kubelet[2848]: I0120 14:59:24.009870 2848 reconciler.go:29] "Reconciler: start to sync state" Jan 20 14:59:24.127329 kubelet[2848]: I0120 14:59:24.050302 2848 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 14:59:24.127329 kubelet[2848]: I0120 14:59:24.125926 2848 factory.go:223] Registration of the systemd container factory successfully Jan 20 14:59:24.138004 kubelet[2848]: I0120 14:59:24.127243 2848 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 14:59:24.177543 kubelet[2848]: E0120 14:59:24.177178 2848 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 14:59:24.203998 kubelet[2848]: I0120 14:59:24.203786 2848 factory.go:223] Registration of the containerd container factory successfully Jan 20 14:59:24.271278 kubelet[2848]: I0120 14:59:24.268879 2848 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 20 14:59:24.287102 kubelet[2848]: I0120 14:59:24.286487 2848 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 20 14:59:24.287102 kubelet[2848]: I0120 14:59:24.286532 2848 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 20 14:59:24.287102 kubelet[2848]: I0120 14:59:24.286731 2848 kubelet.go:2427] "Starting kubelet main sync loop" Jan 20 14:59:24.287102 kubelet[2848]: E0120 14:59:24.286820 2848 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 14:59:24.387482 kubelet[2848]: E0120 14:59:24.387139 2848 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 14:59:24.482243 kubelet[2848]: I0120 14:59:24.481047 2848 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 14:59:24.482243 kubelet[2848]: I0120 14:59:24.481133 2848 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 14:59:24.482243 kubelet[2848]: I0120 14:59:24.481169 2848 state_mem.go:36] "Initialized new in-memory state store" Jan 20 14:59:24.482243 kubelet[2848]: I0120 14:59:24.481477 2848 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 14:59:24.482243 kubelet[2848]: I0120 14:59:24.481496 2848 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 14:59:24.482243 kubelet[2848]: I0120 14:59:24.481528 2848 policy_none.go:49] "None policy: Start" Jan 20 14:59:24.482243 kubelet[2848]: I0120 14:59:24.481542 2848 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 20 14:59:24.482243 kubelet[2848]: I0120 14:59:24.481623 2848 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 20 14:59:24.488456 kubelet[2848]: I0120 14:59:24.487785 2848 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 20 14:59:24.488456 
kubelet[2848]: I0120 14:59:24.487812 2848 policy_none.go:47] "Start" Jan 20 14:59:24.510112 kubelet[2848]: E0120 14:59:24.509992 2848 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 14:59:24.516925 kubelet[2848]: I0120 14:59:24.515770 2848 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 14:59:24.516925 kubelet[2848]: I0120 14:59:24.515941 2848 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 14:59:24.516925 kubelet[2848]: I0120 14:59:24.516786 2848 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 14:59:24.529785 kubelet[2848]: E0120 14:59:24.529134 2848 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 14:59:24.589187 kubelet[2848]: I0120 14:59:24.589140 2848 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 14:59:24.590008 kubelet[2848]: I0120 14:59:24.589936 2848 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 14:59:24.590732 kubelet[2848]: I0120 14:59:24.589236 2848 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 14:59:24.607973 kubelet[2848]: E0120 14:59:24.607920 2848 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 14:59:24.608286 kubelet[2848]: E0120 14:59:24.607951 2848 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 20 14:59:24.608481 kubelet[2848]: E0120 14:59:24.607976 2848 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" 
already exists" pod="kube-system/kube-scheduler-localhost" Jan 20 14:59:24.632626 kubelet[2848]: I0120 14:59:24.632514 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 14:59:24.633435 kubelet[2848]: I0120 14:59:24.633194 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 14:59:24.634001 kubelet[2848]: I0120 14:59:24.633912 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c0ff9082b5a648737bb83feed134b46-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6c0ff9082b5a648737bb83feed134b46\") " pod="kube-system/kube-apiserver-localhost" Jan 20 14:59:24.634001 kubelet[2848]: I0120 14:59:24.633979 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 14:59:24.634140 kubelet[2848]: I0120 14:59:24.634008 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" 
(UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 20 14:59:24.634140 kubelet[2848]: I0120 14:59:24.634032 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c0ff9082b5a648737bb83feed134b46-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c0ff9082b5a648737bb83feed134b46\") " pod="kube-system/kube-apiserver-localhost" Jan 20 14:59:24.634140 kubelet[2848]: I0120 14:59:24.634054 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c0ff9082b5a648737bb83feed134b46-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c0ff9082b5a648737bb83feed134b46\") " pod="kube-system/kube-apiserver-localhost" Jan 20 14:59:24.634140 kubelet[2848]: I0120 14:59:24.634075 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 14:59:24.634140 kubelet[2848]: I0120 14:59:24.634095 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 14:59:24.671267 kubelet[2848]: I0120 14:59:24.669635 2848 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 14:59:24.721259 kubelet[2848]: I0120 14:59:24.720777 2848 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 14:59:24.721259 kubelet[2848]: I0120 14:59:24.720968 
2848 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 14:59:24.910362 kubelet[2848]: E0120 14:59:24.910073 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:24.910362 kubelet[2848]: E0120 14:59:24.910208 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:24.910362 kubelet[2848]: E0120 14:59:24.910261 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:24.972859 kubelet[2848]: I0120 14:59:24.972124 2848 apiserver.go:52] "Watching apiserver" Jan 20 14:59:25.010870 kubelet[2848]: I0120 14:59:25.010748 2848 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 14:59:25.381439 kubelet[2848]: I0120 14:59:25.380393 2848 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 14:59:25.381439 kubelet[2848]: I0120 14:59:25.381046 2848 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 14:59:25.381944 kubelet[2848]: I0120 14:59:25.381778 2848 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 14:59:25.412320 kubelet[2848]: E0120 14:59:25.411905 2848 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 14:59:25.414246 kubelet[2848]: E0120 14:59:25.412958 2848 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 
20 14:59:25.414246 kubelet[2848]: E0120 14:59:25.413099 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:25.414246 kubelet[2848]: E0120 14:59:25.413127 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:25.422232 kubelet[2848]: E0120 14:59:25.422108 2848 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 20 14:59:25.422468 kubelet[2848]: E0120 14:59:25.422360 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:25.449555 kubelet[2848]: I0120 14:59:25.449408 2848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.449367885 podStartE2EDuration="5.449367885s" podCreationTimestamp="2026-01-20 14:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:59:25.447503921 +0000 UTC m=+1.735711019" watchObservedRunningTime="2026-01-20 14:59:25.449367885 +0000 UTC m=+1.737574973" Jan 20 14:59:25.519141 kubelet[2848]: I0120 14:59:25.518942 2848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.518916099 podStartE2EDuration="5.518916099s" podCreationTimestamp="2026-01-20 14:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:59:25.488369767 +0000 UTC m=+1.776576905" watchObservedRunningTime="2026-01-20 
14:59:25.518916099 +0000 UTC m=+1.807123187" Jan 20 14:59:25.551350 kubelet[2848]: I0120 14:59:25.550553 2848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.550528931 podStartE2EDuration="5.550528931s" podCreationTimestamp="2026-01-20 14:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:59:25.519346977 +0000 UTC m=+1.807554065" watchObservedRunningTime="2026-01-20 14:59:25.550528931 +0000 UTC m=+1.838736019" Jan 20 14:59:26.257140 sudo[1830]: pam_unix(sudo:session): session closed for user root Jan 20 14:59:26.267285 sshd[1829]: Connection closed by 10.0.0.1 port 58362 Jan 20 14:59:26.270481 sshd-session[1825]: pam_unix(sshd:session): session closed for user core Jan 20 14:59:26.282788 systemd[1]: sshd@4-10.0.0.64:22-10.0.0.1:58362.service: Deactivated successfully. Jan 20 14:59:26.286966 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 14:59:26.287539 systemd[1]: session-6.scope: Consumed 9.801s CPU time, 205.4M memory peak. Jan 20 14:59:26.290757 systemd-logind[1631]: Session 6 logged out. Waiting for processes to exit. Jan 20 14:59:26.294306 systemd-logind[1631]: Removed session 6. 
Jan 20 14:59:26.383333 kubelet[2848]: E0120 14:59:26.383290 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:26.384544 kubelet[2848]: E0120 14:59:26.383444 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:26.384544 kubelet[2848]: E0120 14:59:26.383820 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:27.388842 kubelet[2848]: E0120 14:59:27.387861 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:27.388842 kubelet[2848]: E0120 14:59:27.388049 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:28.867040 kubelet[2848]: I0120 14:59:28.866865 2848 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 14:59:28.868910 containerd[1676]: time="2026-01-20T14:59:28.868432611Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 14:59:28.869374 kubelet[2848]: I0120 14:59:28.869019 2848 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 14:59:29.721214 systemd[1]: Created slice kubepods-besteffort-pod5699fac1_6b1f_4b42_9875_5514e6e981a5.slice - libcontainer container kubepods-besteffort-pod5699fac1_6b1f_4b42_9875_5514e6e981a5.slice. 
Jan 20 14:59:29.744257 systemd[1]: Created slice kubepods-burstable-poda12baccc_d3d7_4ea6_b473_d6b46d700fe0.slice - libcontainer container kubepods-burstable-poda12baccc_d3d7_4ea6_b473_d6b46d700fe0.slice. Jan 20 14:59:29.776178 kubelet[2848]: I0120 14:59:29.776022 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a12baccc-d3d7-4ea6-b473-d6b46d700fe0-xtables-lock\") pod \"kube-flannel-ds-97cmr\" (UID: \"a12baccc-d3d7-4ea6-b473-d6b46d700fe0\") " pod="kube-flannel/kube-flannel-ds-97cmr" Jan 20 14:59:29.776178 kubelet[2848]: I0120 14:59:29.776135 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kd2l\" (UniqueName: \"kubernetes.io/projected/a12baccc-d3d7-4ea6-b473-d6b46d700fe0-kube-api-access-2kd2l\") pod \"kube-flannel-ds-97cmr\" (UID: \"a12baccc-d3d7-4ea6-b473-d6b46d700fe0\") " pod="kube-flannel/kube-flannel-ds-97cmr" Jan 20 14:59:29.776482 kubelet[2848]: I0120 14:59:29.776200 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5699fac1-6b1f-4b42-9875-5514e6e981a5-xtables-lock\") pod \"kube-proxy-8p6n6\" (UID: \"5699fac1-6b1f-4b42-9875-5514e6e981a5\") " pod="kube-system/kube-proxy-8p6n6" Jan 20 14:59:29.776482 kubelet[2848]: I0120 14:59:29.776232 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fq95\" (UniqueName: \"kubernetes.io/projected/5699fac1-6b1f-4b42-9875-5514e6e981a5-kube-api-access-2fq95\") pod \"kube-proxy-8p6n6\" (UID: \"5699fac1-6b1f-4b42-9875-5514e6e981a5\") " pod="kube-system/kube-proxy-8p6n6" Jan 20 14:59:29.776482 kubelet[2848]: I0120 14:59:29.776285 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: 
\"kubernetes.io/host-path/a12baccc-d3d7-4ea6-b473-d6b46d700fe0-cni\") pod \"kube-flannel-ds-97cmr\" (UID: \"a12baccc-d3d7-4ea6-b473-d6b46d700fe0\") " pod="kube-flannel/kube-flannel-ds-97cmr" Jan 20 14:59:29.776482 kubelet[2848]: I0120 14:59:29.776320 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5699fac1-6b1f-4b42-9875-5514e6e981a5-kube-proxy\") pod \"kube-proxy-8p6n6\" (UID: \"5699fac1-6b1f-4b42-9875-5514e6e981a5\") " pod="kube-system/kube-proxy-8p6n6" Jan 20 14:59:29.776482 kubelet[2848]: I0120 14:59:29.776347 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5699fac1-6b1f-4b42-9875-5514e6e981a5-lib-modules\") pod \"kube-proxy-8p6n6\" (UID: \"5699fac1-6b1f-4b42-9875-5514e6e981a5\") " pod="kube-system/kube-proxy-8p6n6" Jan 20 14:59:29.777295 kubelet[2848]: I0120 14:59:29.776368 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a12baccc-d3d7-4ea6-b473-d6b46d700fe0-run\") pod \"kube-flannel-ds-97cmr\" (UID: \"a12baccc-d3d7-4ea6-b473-d6b46d700fe0\") " pod="kube-flannel/kube-flannel-ds-97cmr" Jan 20 14:59:29.777295 kubelet[2848]: I0120 14:59:29.776393 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/a12baccc-d3d7-4ea6-b473-d6b46d700fe0-cni-plugin\") pod \"kube-flannel-ds-97cmr\" (UID: \"a12baccc-d3d7-4ea6-b473-d6b46d700fe0\") " pod="kube-flannel/kube-flannel-ds-97cmr" Jan 20 14:59:29.777295 kubelet[2848]: I0120 14:59:29.776418 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/a12baccc-d3d7-4ea6-b473-d6b46d700fe0-flannel-cfg\") pod \"kube-flannel-ds-97cmr\" 
(UID: \"a12baccc-d3d7-4ea6-b473-d6b46d700fe0\") " pod="kube-flannel/kube-flannel-ds-97cmr" Jan 20 14:59:30.044357 kubelet[2848]: E0120 14:59:30.044151 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:30.047071 containerd[1676]: time="2026-01-20T14:59:30.046987671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8p6n6,Uid:5699fac1-6b1f-4b42-9875-5514e6e981a5,Namespace:kube-system,Attempt:0,}" Jan 20 14:59:30.062777 kubelet[2848]: E0120 14:59:30.062480 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:30.063177 containerd[1676]: time="2026-01-20T14:59:30.063081076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-97cmr,Uid:a12baccc-d3d7-4ea6-b473-d6b46d700fe0,Namespace:kube-flannel,Attempt:0,}" Jan 20 14:59:30.122357 containerd[1676]: time="2026-01-20T14:59:30.122200276Z" level=info msg="connecting to shim ea234dc7fa3c5af4baf0cbd3841950db6cb4b1488646a4ab0d64acb8f8fdef1c" address="unix:///run/containerd/s/74f5d899d6edeb5e13202b2dd492a41ad209faab06142318ab3e9ab3d4062b86" namespace=k8s.io protocol=ttrpc version=3 Jan 20 14:59:30.122849 containerd[1676]: time="2026-01-20T14:59:30.122426692Z" level=info msg="connecting to shim d603a312d72241cb10c045c69ed8ae96fd55cb1a7c0ac453a6e58bb0f8146fbb" address="unix:///run/containerd/s/7a1398cfc1de55e3359fb8739d6be5fa1cf3a815675ffa702c5d3f9e0d91e6c3" namespace=k8s.io protocol=ttrpc version=3 Jan 20 14:59:30.242066 systemd[1]: Started cri-containerd-ea234dc7fa3c5af4baf0cbd3841950db6cb4b1488646a4ab0d64acb8f8fdef1c.scope - libcontainer container ea234dc7fa3c5af4baf0cbd3841950db6cb4b1488646a4ab0d64acb8f8fdef1c. 
Jan 20 14:59:30.250871 systemd[1]: Started cri-containerd-d603a312d72241cb10c045c69ed8ae96fd55cb1a7c0ac453a6e58bb0f8146fbb.scope - libcontainer container d603a312d72241cb10c045c69ed8ae96fd55cb1a7c0ac453a6e58bb0f8146fbb. Jan 20 14:59:30.380231 containerd[1676]: time="2026-01-20T14:59:30.380074334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8p6n6,Uid:5699fac1-6b1f-4b42-9875-5514e6e981a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea234dc7fa3c5af4baf0cbd3841950db6cb4b1488646a4ab0d64acb8f8fdef1c\"" Jan 20 14:59:30.382410 containerd[1676]: time="2026-01-20T14:59:30.382353138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-97cmr,Uid:a12baccc-d3d7-4ea6-b473-d6b46d700fe0,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"d603a312d72241cb10c045c69ed8ae96fd55cb1a7c0ac453a6e58bb0f8146fbb\"" Jan 20 14:59:30.384170 kubelet[2848]: E0120 14:59:30.384014 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:30.385790 kubelet[2848]: E0120 14:59:30.385374 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:30.390212 containerd[1676]: time="2026-01-20T14:59:30.390082747Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Jan 20 14:59:30.398033 containerd[1676]: time="2026-01-20T14:59:30.397935201Z" level=info msg="CreateContainer within sandbox \"ea234dc7fa3c5af4baf0cbd3841950db6cb4b1488646a4ab0d64acb8f8fdef1c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 14:59:30.424945 containerd[1676]: time="2026-01-20T14:59:30.424592392Z" level=info msg="Container 38d780174e724659a311d7d2ffda3a7adf35728180857be7eaa47f90e98590d3: CDI devices from CRI Config.CDIDevices: []" Jan 20 14:59:30.443315 
containerd[1676]: time="2026-01-20T14:59:30.443205835Z" level=info msg="CreateContainer within sandbox \"ea234dc7fa3c5af4baf0cbd3841950db6cb4b1488646a4ab0d64acb8f8fdef1c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"38d780174e724659a311d7d2ffda3a7adf35728180857be7eaa47f90e98590d3\"" Jan 20 14:59:30.445806 containerd[1676]: time="2026-01-20T14:59:30.445445063Z" level=info msg="StartContainer for \"38d780174e724659a311d7d2ffda3a7adf35728180857be7eaa47f90e98590d3\"" Jan 20 14:59:30.450838 containerd[1676]: time="2026-01-20T14:59:30.450741357Z" level=info msg="connecting to shim 38d780174e724659a311d7d2ffda3a7adf35728180857be7eaa47f90e98590d3" address="unix:///run/containerd/s/74f5d899d6edeb5e13202b2dd492a41ad209faab06142318ab3e9ab3d4062b86" protocol=ttrpc version=3 Jan 20 14:59:30.492055 systemd[1]: Started cri-containerd-38d780174e724659a311d7d2ffda3a7adf35728180857be7eaa47f90e98590d3.scope - libcontainer container 38d780174e724659a311d7d2ffda3a7adf35728180857be7eaa47f90e98590d3. Jan 20 14:59:30.659940 containerd[1676]: time="2026-01-20T14:59:30.659841415Z" level=info msg="StartContainer for \"38d780174e724659a311d7d2ffda3a7adf35728180857be7eaa47f90e98590d3\" returns successfully" Jan 20 14:59:31.027862 kubelet[2848]: E0120 14:59:31.026392 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:31.193086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4099186734.mount: Deactivated successfully. 
Jan 20 14:59:31.295906 containerd[1676]: time="2026-01-20T14:59:31.295088069Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 14:59:31.298390 containerd[1676]: time="2026-01-20T14:59:31.298199766Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=0" Jan 20 14:59:31.300103 containerd[1676]: time="2026-01-20T14:59:31.300012837Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 14:59:31.304919 containerd[1676]: time="2026-01-20T14:59:31.304842637Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 14:59:31.306984 containerd[1676]: time="2026-01-20T14:59:31.306851257Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 916.397667ms" Jan 20 14:59:31.307087 containerd[1676]: time="2026-01-20T14:59:31.307042415Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Jan 20 14:59:31.319459 containerd[1676]: time="2026-01-20T14:59:31.319204648Z" level=info msg="CreateContainer within sandbox \"d603a312d72241cb10c045c69ed8ae96fd55cb1a7c0ac453a6e58bb0f8146fbb\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" 
Jan 20 14:59:31.339881 containerd[1676]: time="2026-01-20T14:59:31.339184726Z" level=info msg="Container 29af909ce7bcb1c242b6f3953ea3eb1e2f8f2dd71fff5cb6837b3264478bcd60: CDI devices from CRI Config.CDIDevices: []" Jan 20 14:59:31.353578 containerd[1676]: time="2026-01-20T14:59:31.353442003Z" level=info msg="CreateContainer within sandbox \"d603a312d72241cb10c045c69ed8ae96fd55cb1a7c0ac453a6e58bb0f8146fbb\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"29af909ce7bcb1c242b6f3953ea3eb1e2f8f2dd71fff5cb6837b3264478bcd60\"" Jan 20 14:59:31.357808 containerd[1676]: time="2026-01-20T14:59:31.357104823Z" level=info msg="StartContainer for \"29af909ce7bcb1c242b6f3953ea3eb1e2f8f2dd71fff5cb6837b3264478bcd60\"" Jan 20 14:59:31.360327 containerd[1676]: time="2026-01-20T14:59:31.360298540Z" level=info msg="connecting to shim 29af909ce7bcb1c242b6f3953ea3eb1e2f8f2dd71fff5cb6837b3264478bcd60" address="unix:///run/containerd/s/7a1398cfc1de55e3359fb8739d6be5fa1cf3a815675ffa702c5d3f9e0d91e6c3" protocol=ttrpc version=3 Jan 20 14:59:31.419209 systemd[1]: Started cri-containerd-29af909ce7bcb1c242b6f3953ea3eb1e2f8f2dd71fff5cb6837b3264478bcd60.scope - libcontainer container 29af909ce7bcb1c242b6f3953ea3eb1e2f8f2dd71fff5cb6837b3264478bcd60. 
Jan 20 14:59:31.442869 kubelet[2848]: E0120 14:59:31.442817 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:31.446084 kubelet[2848]: E0120 14:59:31.445390 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:31.472376 kubelet[2848]: I0120 14:59:31.472277 2848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8p6n6" podStartSLOduration=2.472259261 podStartE2EDuration="2.472259261s" podCreationTimestamp="2026-01-20 14:59:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:59:31.471863531 +0000 UTC m=+7.760070619" watchObservedRunningTime="2026-01-20 14:59:31.472259261 +0000 UTC m=+7.760466349" Jan 20 14:59:31.594298 systemd[1]: cri-containerd-29af909ce7bcb1c242b6f3953ea3eb1e2f8f2dd71fff5cb6837b3264478bcd60.scope: Deactivated successfully. 
Jan 20 14:59:31.602267 containerd[1676]: time="2026-01-20T14:59:31.602173423Z" level=info msg="StartContainer for \"29af909ce7bcb1c242b6f3953ea3eb1e2f8f2dd71fff5cb6837b3264478bcd60\" returns successfully" Jan 20 14:59:31.602935 containerd[1676]: time="2026-01-20T14:59:31.602759920Z" level=info msg="received container exit event container_id:\"29af909ce7bcb1c242b6f3953ea3eb1e2f8f2dd71fff5cb6837b3264478bcd60\" id:\"29af909ce7bcb1c242b6f3953ea3eb1e2f8f2dd71fff5cb6837b3264478bcd60\" pid:3152 exited_at:{seconds:1768921171 nanos:600306498}" Jan 20 14:59:32.451437 kubelet[2848]: E0120 14:59:32.451286 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:32.453126 kubelet[2848]: E0120 14:59:32.451382 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:32.453126 kubelet[2848]: E0120 14:59:32.452251 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:32.454511 containerd[1676]: time="2026-01-20T14:59:32.454427123Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Jan 20 14:59:33.595049 kubelet[2848]: E0120 14:59:33.594612 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:34.500892 kubelet[2848]: E0120 14:59:34.500799 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:35.195885 containerd[1676]: time="2026-01-20T14:59:35.195518348Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 14:59:35.197271 containerd[1676]: time="2026-01-20T14:59:35.197150609Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=13919223" Jan 20 14:59:35.200111 containerd[1676]: time="2026-01-20T14:59:35.199997718Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 14:59:35.204570 containerd[1676]: time="2026-01-20T14:59:35.204365452Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 14:59:35.205974 containerd[1676]: time="2026-01-20T14:59:35.205898934Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 2.751412629s" Jan 20 14:59:35.205974 containerd[1676]: time="2026-01-20T14:59:35.205946082Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Jan 20 14:59:35.223782 containerd[1676]: time="2026-01-20T14:59:35.223294681Z" level=info msg="CreateContainer within sandbox \"d603a312d72241cb10c045c69ed8ae96fd55cb1a7c0ac453a6e58bb0f8146fbb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 14:59:35.250764 containerd[1676]: time="2026-01-20T14:59:35.250384928Z" level=info msg="Container 1ad055fe151561b23f94a6cc1fc6bb497d9b6a49438a66a0e7bd8b9a5d7c54b1: CDI devices from CRI Config.CDIDevices: []" Jan 20 
14:59:35.265369 containerd[1676]: time="2026-01-20T14:59:35.265257979Z" level=info msg="CreateContainer within sandbox \"d603a312d72241cb10c045c69ed8ae96fd55cb1a7c0ac453a6e58bb0f8146fbb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1ad055fe151561b23f94a6cc1fc6bb497d9b6a49438a66a0e7bd8b9a5d7c54b1\"" Jan 20 14:59:35.267821 containerd[1676]: time="2026-01-20T14:59:35.267778790Z" level=info msg="StartContainer for \"1ad055fe151561b23f94a6cc1fc6bb497d9b6a49438a66a0e7bd8b9a5d7c54b1\"" Jan 20 14:59:35.269898 containerd[1676]: time="2026-01-20T14:59:35.269541056Z" level=info msg="connecting to shim 1ad055fe151561b23f94a6cc1fc6bb497d9b6a49438a66a0e7bd8b9a5d7c54b1" address="unix:///run/containerd/s/7a1398cfc1de55e3359fb8739d6be5fa1cf3a815675ffa702c5d3f9e0d91e6c3" protocol=ttrpc version=3 Jan 20 14:59:35.339063 systemd[1]: Started cri-containerd-1ad055fe151561b23f94a6cc1fc6bb497d9b6a49438a66a0e7bd8b9a5d7c54b1.scope - libcontainer container 1ad055fe151561b23f94a6cc1fc6bb497d9b6a49438a66a0e7bd8b9a5d7c54b1. Jan 20 14:59:35.429903 systemd[1]: cri-containerd-1ad055fe151561b23f94a6cc1fc6bb497d9b6a49438a66a0e7bd8b9a5d7c54b1.scope: Deactivated successfully. 
Jan 20 14:59:35.432277 containerd[1676]: time="2026-01-20T14:59:35.432186154Z" level=info msg="received container exit event container_id:\"1ad055fe151561b23f94a6cc1fc6bb497d9b6a49438a66a0e7bd8b9a5d7c54b1\" id:\"1ad055fe151561b23f94a6cc1fc6bb497d9b6a49438a66a0e7bd8b9a5d7c54b1\" pid:3279 exited_at:{seconds:1768921175 nanos:431121613}" Jan 20 14:59:35.437232 containerd[1676]: time="2026-01-20T14:59:35.437008987Z" level=info msg="StartContainer for \"1ad055fe151561b23f94a6cc1fc6bb497d9b6a49438a66a0e7bd8b9a5d7c54b1\" returns successfully" Jan 20 14:59:35.466916 kubelet[2848]: I0120 14:59:35.465898 2848 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 20 14:59:35.513616 kubelet[2848]: E0120 14:59:35.513506 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:35.537490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ad055fe151561b23f94a6cc1fc6bb497d9b6a49438a66a0e7bd8b9a5d7c54b1-rootfs.mount: Deactivated successfully. Jan 20 14:59:35.564027 systemd[1]: Created slice kubepods-burstable-pod3848f0ae_2615_4524_a966_436353122740.slice - libcontainer container kubepods-burstable-pod3848f0ae_2615_4524_a966_436353122740.slice. Jan 20 14:59:35.576134 systemd[1]: Created slice kubepods-burstable-pod655e13a9_6228_482a_8543_25336ef03b69.slice - libcontainer container kubepods-burstable-pod655e13a9_6228_482a_8543_25336ef03b69.slice. 
Jan 20 14:59:35.631777 kubelet[2848]: I0120 14:59:35.631490 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3848f0ae-2615-4524-a966-436353122740-config-volume\") pod \"coredns-66bc5c9577-xt5mn\" (UID: \"3848f0ae-2615-4524-a966-436353122740\") " pod="kube-system/coredns-66bc5c9577-xt5mn" Jan 20 14:59:35.631777 kubelet[2848]: I0120 14:59:35.631584 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/655e13a9-6228-482a-8543-25336ef03b69-config-volume\") pod \"coredns-66bc5c9577-g4j2c\" (UID: \"655e13a9-6228-482a-8543-25336ef03b69\") " pod="kube-system/coredns-66bc5c9577-g4j2c" Jan 20 14:59:35.631777 kubelet[2848]: I0120 14:59:35.631605 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2bkg\" (UniqueName: \"kubernetes.io/projected/655e13a9-6228-482a-8543-25336ef03b69-kube-api-access-d2bkg\") pod \"coredns-66bc5c9577-g4j2c\" (UID: \"655e13a9-6228-482a-8543-25336ef03b69\") " pod="kube-system/coredns-66bc5c9577-g4j2c" Jan 20 14:59:35.632060 kubelet[2848]: I0120 14:59:35.631624 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg5f7\" (UniqueName: \"kubernetes.io/projected/3848f0ae-2615-4524-a966-436353122740-kube-api-access-hg5f7\") pod \"coredns-66bc5c9577-xt5mn\" (UID: \"3848f0ae-2615-4524-a966-436353122740\") " pod="kube-system/coredns-66bc5c9577-xt5mn" Jan 20 14:59:35.883910 kubelet[2848]: E0120 14:59:35.881209 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:35.887731 kubelet[2848]: E0120 14:59:35.887430 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:35.889322 containerd[1676]: time="2026-01-20T14:59:35.888973514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g4j2c,Uid:655e13a9-6228-482a-8543-25336ef03b69,Namespace:kube-system,Attempt:0,}" Jan 20 14:59:35.890605 containerd[1676]: time="2026-01-20T14:59:35.890504024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xt5mn,Uid:3848f0ae-2615-4524-a966-436353122740,Namespace:kube-system,Attempt:0,}" Jan 20 14:59:35.975374 containerd[1676]: time="2026-01-20T14:59:35.975004052Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g4j2c,Uid:655e13a9-6228-482a-8543-25336ef03b69,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f509cbe9864b24654429c52456b7d5645bd0bc4ba07c620bab293911038d3b8f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 14:59:35.976471 kubelet[2848]: E0120 14:59:35.976152 2848 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f509cbe9864b24654429c52456b7d5645bd0bc4ba07c620bab293911038d3b8f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 14:59:35.976471 kubelet[2848]: E0120 14:59:35.976393 2848 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f509cbe9864b24654429c52456b7d5645bd0bc4ba07c620bab293911038d3b8f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-g4j2c" Jan 20 14:59:35.976471 kubelet[2848]: E0120 14:59:35.976438 2848 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f509cbe9864b24654429c52456b7d5645bd0bc4ba07c620bab293911038d3b8f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-g4j2c" Jan 20 14:59:35.976870 kubelet[2848]: E0120 14:59:35.976572 2848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-g4j2c_kube-system(655e13a9-6228-482a-8543-25336ef03b69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-g4j2c_kube-system(655e13a9-6228-482a-8543-25336ef03b69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f509cbe9864b24654429c52456b7d5645bd0bc4ba07c620bab293911038d3b8f\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-g4j2c" podUID="655e13a9-6228-482a-8543-25336ef03b69" Jan 20 14:59:35.977757 containerd[1676]: time="2026-01-20T14:59:35.977560573Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xt5mn,Uid:3848f0ae-2615-4524-a966-436353122740,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"77c668f0851ee8b9091c6555e02e62e7d6004d406c18f1341f69e79da65c2eb8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 14:59:35.978423 kubelet[2848]: E0120 14:59:35.978133 2848 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77c668f0851ee8b9091c6555e02e62e7d6004d406c18f1341f69e79da65c2eb8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open 
/run/flannel/subnet.env: no such file or directory" Jan 20 14:59:35.978423 kubelet[2848]: E0120 14:59:35.978210 2848 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77c668f0851ee8b9091c6555e02e62e7d6004d406c18f1341f69e79da65c2eb8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-xt5mn" Jan 20 14:59:35.978423 kubelet[2848]: E0120 14:59:35.978229 2848 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77c668f0851ee8b9091c6555e02e62e7d6004d406c18f1341f69e79da65c2eb8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-xt5mn" Jan 20 14:59:35.978423 kubelet[2848]: E0120 14:59:35.978270 2848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xt5mn_kube-system(3848f0ae-2615-4524-a966-436353122740)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xt5mn_kube-system(3848f0ae-2615-4524-a966-436353122740)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77c668f0851ee8b9091c6555e02e62e7d6004d406c18f1341f69e79da65c2eb8\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-xt5mn" podUID="3848f0ae-2615-4524-a966-436353122740" Jan 20 14:59:36.531051 kubelet[2848]: E0120 14:59:36.530329 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 14:59:36.543376 containerd[1676]: time="2026-01-20T14:59:36.543328191Z" level=info 
msg="CreateContainer within sandbox \"d603a312d72241cb10c045c69ed8ae96fd55cb1a7c0ac453a6e58bb0f8146fbb\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 20 14:59:36.566758 containerd[1676]: time="2026-01-20T14:59:36.566208358Z" level=info msg="Container 9ee254052f63f97140220beaaebf91efe72f3e10a72662671ee6cb019538dcf4: CDI devices from CRI Config.CDIDevices: []" Jan 20 14:59:36.580807 containerd[1676]: time="2026-01-20T14:59:36.580548802Z" level=info msg="CreateContainer within sandbox \"d603a312d72241cb10c045c69ed8ae96fd55cb1a7c0ac453a6e58bb0f8146fbb\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"9ee254052f63f97140220beaaebf91efe72f3e10a72662671ee6cb019538dcf4\"" Jan 20 14:59:36.582555 containerd[1676]: time="2026-01-20T14:59:36.582475186Z" level=info msg="StartContainer for \"9ee254052f63f97140220beaaebf91efe72f3e10a72662671ee6cb019538dcf4\"" Jan 20 14:59:36.584297 containerd[1676]: time="2026-01-20T14:59:36.584167471Z" level=info msg="connecting to shim 9ee254052f63f97140220beaaebf91efe72f3e10a72662671ee6cb019538dcf4" address="unix:///run/containerd/s/7a1398cfc1de55e3359fb8739d6be5fa1cf3a815675ffa702c5d3f9e0d91e6c3" protocol=ttrpc version=3 Jan 20 14:59:36.639229 systemd[1]: Started cri-containerd-9ee254052f63f97140220beaaebf91efe72f3e10a72662671ee6cb019538dcf4.scope - libcontainer container 9ee254052f63f97140220beaaebf91efe72f3e10a72662671ee6cb019538dcf4. 
Jan 20 14:59:36.811595 containerd[1676]: time="2026-01-20T14:59:36.810536001Z" level=info msg="StartContainer for \"9ee254052f63f97140220beaaebf91efe72f3e10a72662671ee6cb019538dcf4\" returns successfully"
Jan 20 14:59:37.544999 kubelet[2848]: E0120 14:59:37.544511 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 14:59:37.961948 systemd-networkd[1317]: flannel.1: Link UP
Jan 20 14:59:37.961964 systemd-networkd[1317]: flannel.1: Gained carrier
Jan 20 14:59:38.548964 kubelet[2848]: E0120 14:59:38.548814 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 14:59:39.960423 systemd-networkd[1317]: flannel.1: Gained IPv6LL
Jan 20 14:59:48.294820 kubelet[2848]: E0120 14:59:48.294342 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 14:59:48.295860 containerd[1676]: time="2026-01-20T14:59:48.295375172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g4j2c,Uid:655e13a9-6228-482a-8543-25336ef03b69,Namespace:kube-system,Attempt:0,}"
Jan 20 14:59:48.347548 systemd-networkd[1317]: cni0: Link UP
Jan 20 14:59:48.347560 systemd-networkd[1317]: cni0: Gained carrier
Jan 20 14:59:48.368246 systemd-networkd[1317]: cni0: Lost carrier
Jan 20 14:59:48.375467 systemd-networkd[1317]: veth6c91419e: Link UP
Jan 20 14:59:48.386616 kernel: cni0: port 1(veth6c91419e) entered blocking state
Jan 20 14:59:48.388187 kernel: cni0: port 1(veth6c91419e) entered disabled state
Jan 20 14:59:48.398539 kernel: veth6c91419e: entered allmulticast mode
Jan 20 14:59:48.399052 kernel: veth6c91419e: entered promiscuous mode
Jan 20 14:59:48.426427 kernel: cni0: port 1(veth6c91419e) entered blocking state
Jan 20 14:59:48.426856 kernel: cni0: port 1(veth6c91419e) entered forwarding state
Jan 20 14:59:48.427507 systemd-networkd[1317]: veth6c91419e: Gained carrier
Jan 20 14:59:48.429339 systemd-networkd[1317]: cni0: Gained carrier
Jan 20 14:59:48.435171 containerd[1676]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000112950), "name":"cbr0", "type":"bridge"}
Jan 20 14:59:48.435171 containerd[1676]: delegateAdd: netconf sent to delegate plugin:
Jan 20 14:59:48.508860 containerd[1676]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-20T14:59:48.508479423Z" level=info msg="connecting to shim 6a46604ac258cdfe6d84350eb1d1e5e5042717b1ef98180527f2eefa1636128b" address="unix:///run/containerd/s/dfb24b0f1e8126bad6b0e241434fb1d25f753e720c31dbd70a214e069351ad65" namespace=k8s.io protocol=ttrpc version=3
Jan 20 14:59:48.590254 systemd[1]: Started cri-containerd-6a46604ac258cdfe6d84350eb1d1e5e5042717b1ef98180527f2eefa1636128b.scope - libcontainer container 6a46604ac258cdfe6d84350eb1d1e5e5042717b1ef98180527f2eefa1636128b.
Jan 20 14:59:48.636992 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 20 14:59:48.712049 containerd[1676]: time="2026-01-20T14:59:48.711937603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g4j2c,Uid:655e13a9-6228-482a-8543-25336ef03b69,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a46604ac258cdfe6d84350eb1d1e5e5042717b1ef98180527f2eefa1636128b\""
Jan 20 14:59:48.714228 kubelet[2848]: E0120 14:59:48.714073 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 14:59:48.724071 containerd[1676]: time="2026-01-20T14:59:48.722969934Z" level=info msg="CreateContainer within sandbox \"6a46604ac258cdfe6d84350eb1d1e5e5042717b1ef98180527f2eefa1636128b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 20 14:59:48.743812 containerd[1676]: time="2026-01-20T14:59:48.743351956Z" level=info msg="Container 039af311f1a564d46e0efba36da8775b1953017ab3f641414ce1fcc3baa96f3c: CDI devices from CRI Config.CDIDevices: []"
Jan 20 14:59:48.749237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1170409134.mount: Deactivated successfully.
Jan 20 14:59:48.753116 containerd[1676]: time="2026-01-20T14:59:48.753028677Z" level=info msg="CreateContainer within sandbox \"6a46604ac258cdfe6d84350eb1d1e5e5042717b1ef98180527f2eefa1636128b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"039af311f1a564d46e0efba36da8775b1953017ab3f641414ce1fcc3baa96f3c\""
Jan 20 14:59:48.754569 containerd[1676]: time="2026-01-20T14:59:48.754467486Z" level=info msg="StartContainer for \"039af311f1a564d46e0efba36da8775b1953017ab3f641414ce1fcc3baa96f3c\""
Jan 20 14:59:48.756411 containerd[1676]: time="2026-01-20T14:59:48.756278541Z" level=info msg="connecting to shim 039af311f1a564d46e0efba36da8775b1953017ab3f641414ce1fcc3baa96f3c" address="unix:///run/containerd/s/dfb24b0f1e8126bad6b0e241434fb1d25f753e720c31dbd70a214e069351ad65" protocol=ttrpc version=3
Jan 20 14:59:48.806114 systemd[1]: Started cri-containerd-039af311f1a564d46e0efba36da8775b1953017ab3f641414ce1fcc3baa96f3c.scope - libcontainer container 039af311f1a564d46e0efba36da8775b1953017ab3f641414ce1fcc3baa96f3c.
Jan 20 14:59:48.889077 containerd[1676]: time="2026-01-20T14:59:48.888923963Z" level=info msg="StartContainer for \"039af311f1a564d46e0efba36da8775b1953017ab3f641414ce1fcc3baa96f3c\" returns successfully"
Jan 20 14:59:49.292289 kubelet[2848]: E0120 14:59:49.292124 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 14:59:49.294140 containerd[1676]: time="2026-01-20T14:59:49.293944054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xt5mn,Uid:3848f0ae-2615-4524-a966-436353122740,Namespace:kube-system,Attempt:0,}"
Jan 20 14:59:49.337273 systemd-networkd[1317]: veth5ede9699: Link UP
Jan 20 14:59:49.349402 kernel: cni0: port 2(veth5ede9699) entered blocking state
Jan 20 14:59:49.349904 kernel: cni0: port 2(veth5ede9699) entered disabled state
Jan 20 14:59:49.349962 kernel: veth5ede9699: entered allmulticast mode
Jan 20 14:59:49.357085 kernel: veth5ede9699: entered promiscuous mode
Jan 20 14:59:49.386855 kernel: cni0: port 2(veth5ede9699) entered blocking state
Jan 20 14:59:49.387076 kernel: cni0: port 2(veth5ede9699) entered forwarding state
Jan 20 14:59:49.387092 systemd-networkd[1317]: veth5ede9699: Gained carrier
Jan 20 14:59:49.396036 containerd[1676]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a950), "name":"cbr0", "type":"bridge"}
Jan 20 14:59:49.396036 containerd[1676]: delegateAdd: netconf sent to delegate plugin:
Jan 20 14:59:49.527391 containerd[1676]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-20T14:59:49.527340790Z" level=info msg="connecting to shim c6b2cfb688b82af43a9d3db126adbab7b025f328941359183ca4927ba4d2f667" address="unix:///run/containerd/s/8beccf77b54ad0faacc3df1dad174658c84518647f2b51c9ed166af0e21c2d01" namespace=k8s.io protocol=ttrpc version=3
Jan 20 14:59:49.615219 systemd[1]: Started cri-containerd-c6b2cfb688b82af43a9d3db126adbab7b025f328941359183ca4927ba4d2f667.scope - libcontainer container c6b2cfb688b82af43a9d3db126adbab7b025f328941359183ca4927ba4d2f667.
Jan 20 14:59:49.662972 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 20 14:59:49.713876 kubelet[2848]: E0120 14:59:49.713471 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 14:59:49.757497 kubelet[2848]: I0120 14:59:49.757410 2848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-97cmr" podStartSLOduration=15.938789608 podStartE2EDuration="20.757395119s" podCreationTimestamp="2026-01-20 14:59:29 +0000 UTC" firstStartedPulling="2026-01-20 14:59:30.389233027 +0000 UTC m=+6.677440115" lastFinishedPulling="2026-01-20 14:59:35.207838538 +0000 UTC m=+11.496045626" observedRunningTime="2026-01-20 14:59:37.565516934 +0000 UTC m=+13.853724042" watchObservedRunningTime="2026-01-20 14:59:49.757395119 +0000 UTC m=+26.045602208"
Jan 20 14:59:49.757829 kubelet[2848]: I0120 14:59:49.757528 2848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-g4j2c" podStartSLOduration=20.757524491 podStartE2EDuration="20.757524491s" podCreationTimestamp="2026-01-20 14:59:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:59:49.756855511 +0000 UTC m=+26.045062619" watchObservedRunningTime="2026-01-20 14:59:49.757524491 +0000 UTC m=+26.045731579"
Jan 20 14:59:49.763161 containerd[1676]: time="2026-01-20T14:59:49.760620532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xt5mn,Uid:3848f0ae-2615-4524-a966-436353122740,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6b2cfb688b82af43a9d3db126adbab7b025f328941359183ca4927ba4d2f667\""
Jan 20 14:59:49.765330 kubelet[2848]: E0120 14:59:49.765306 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 14:59:49.780257 containerd[1676]: time="2026-01-20T14:59:49.780207454Z" level=info msg="CreateContainer within sandbox \"c6b2cfb688b82af43a9d3db126adbab7b025f328941359183ca4927ba4d2f667\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 20 14:59:49.821507 containerd[1676]: time="2026-01-20T14:59:49.820630844Z" level=info msg="Container c20dd2e055d1b15c95a4fe4ea5e7107b759c2c62b236c08efab2fc3312b550f4: CDI devices from CRI Config.CDIDevices: []"
Jan 20 14:59:49.824145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3753759335.mount: Deactivated successfully.
Jan 20 14:59:49.852264 containerd[1676]: time="2026-01-20T14:59:49.852078422Z" level=info msg="CreateContainer within sandbox \"c6b2cfb688b82af43a9d3db126adbab7b025f328941359183ca4927ba4d2f667\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c20dd2e055d1b15c95a4fe4ea5e7107b759c2c62b236c08efab2fc3312b550f4\""
Jan 20 14:59:49.854149 containerd[1676]: time="2026-01-20T14:59:49.853614580Z" level=info msg="StartContainer for \"c20dd2e055d1b15c95a4fe4ea5e7107b759c2c62b236c08efab2fc3312b550f4\""
Jan 20 14:59:49.856100 containerd[1676]: time="2026-01-20T14:59:49.856025055Z" level=info msg="connecting to shim c20dd2e055d1b15c95a4fe4ea5e7107b759c2c62b236c08efab2fc3312b550f4" address="unix:///run/containerd/s/8beccf77b54ad0faacc3df1dad174658c84518647f2b51c9ed166af0e21c2d01" protocol=ttrpc version=3
Jan 20 14:59:49.916194 systemd[1]: Started cri-containerd-c20dd2e055d1b15c95a4fe4ea5e7107b759c2c62b236c08efab2fc3312b550f4.scope - libcontainer container c20dd2e055d1b15c95a4fe4ea5e7107b759c2c62b236c08efab2fc3312b550f4.
Jan 20 14:59:50.004993 containerd[1676]: time="2026-01-20T14:59:50.004625632Z" level=info msg="StartContainer for \"c20dd2e055d1b15c95a4fe4ea5e7107b759c2c62b236c08efab2fc3312b550f4\" returns successfully"
Jan 20 14:59:50.123230 systemd-networkd[1317]: veth6c91419e: Gained IPv6LL
Jan 20 14:59:50.251173 systemd-networkd[1317]: cni0: Gained IPv6LL
Jan 20 14:59:50.706032 systemd-networkd[1317]: veth5ede9699: Gained IPv6LL
Jan 20 14:59:50.754336 kubelet[2848]: E0120 14:59:50.754170 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 14:59:50.754336 kubelet[2848]: E0120 14:59:50.754220 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 14:59:50.782552 kubelet[2848]: I0120 14:59:50.781228 2848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xt5mn" podStartSLOduration=21.781204019 podStartE2EDuration="21.781204019s" podCreationTimestamp="2026-01-20 14:59:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:59:50.779186543 +0000 UTC m=+27.067393651" watchObservedRunningTime="2026-01-20 14:59:50.781204019 +0000 UTC m=+27.069411137"
Jan 20 14:59:51.759020 kubelet[2848]: E0120 14:59:51.758357 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 14:59:51.759020 kubelet[2848]: E0120 14:59:51.758524 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 14:59:52.762796 kubelet[2848]: E0120 14:59:52.762507 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 15:00:14.742156 systemd[1]: Started sshd@5-10.0.0.64:22-10.0.0.1:39546.service - OpenSSH per-connection server daemon (10.0.0.1:39546).
Jan 20 15:00:14.866298 sshd[3853]: Accepted publickey for core from 10.0.0.1 port 39546 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:00:14.870336 sshd-session[3853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:00:14.893368 systemd-logind[1631]: New session 7 of user core.
Jan 20 15:00:14.903164 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 20 15:00:15.108181 sshd[3857]: Connection closed by 10.0.0.1 port 39546
Jan 20 15:00:15.108321 sshd-session[3853]: pam_unix(sshd:session): session closed for user core
Jan 20 15:00:15.116220 systemd[1]: sshd@5-10.0.0.64:22-10.0.0.1:39546.service: Deactivated successfully.
Jan 20 15:00:15.119282 systemd[1]: session-7.scope: Deactivated successfully.
Jan 20 15:00:15.123632 systemd-logind[1631]: Session 7 logged out. Waiting for processes to exit.
Jan 20 15:00:15.126427 systemd-logind[1631]: Removed session 7.
Jan 20 15:00:20.149135 systemd[1]: Started sshd@6-10.0.0.64:22-10.0.0.1:39552.service - OpenSSH per-connection server daemon (10.0.0.1:39552).
Jan 20 15:00:20.244896 sshd[3892]: Accepted publickey for core from 10.0.0.1 port 39552 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:00:20.249075 sshd-session[3892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:00:20.260561 systemd-logind[1631]: New session 8 of user core.
Jan 20 15:00:20.270290 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 20 15:00:20.497594 sshd[3896]: Connection closed by 10.0.0.1 port 39552
Jan 20 15:00:20.499148 sshd-session[3892]: pam_unix(sshd:session): session closed for user core
Jan 20 15:00:20.512268 systemd[1]: sshd@6-10.0.0.64:22-10.0.0.1:39552.service: Deactivated successfully.
Jan 20 15:00:20.519081 systemd[1]: session-8.scope: Deactivated successfully.
Jan 20 15:00:20.521304 systemd-logind[1631]: Session 8 logged out. Waiting for processes to exit.
Jan 20 15:00:20.524957 systemd-logind[1631]: Removed session 8.
Jan 20 15:00:25.512506 systemd[1]: Started sshd@7-10.0.0.64:22-10.0.0.1:50250.service - OpenSSH per-connection server daemon (10.0.0.1:50250).
Jan 20 15:00:25.616425 sshd[3932]: Accepted publickey for core from 10.0.0.1 port 50250 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:00:25.619554 sshd-session[3932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:00:25.629362 systemd-logind[1631]: New session 9 of user core.
Jan 20 15:00:25.643272 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 20 15:00:25.827616 sshd[3936]: Connection closed by 10.0.0.1 port 50250
Jan 20 15:00:25.827999 sshd-session[3932]: pam_unix(sshd:session): session closed for user core
Jan 20 15:00:25.837018 systemd[1]: sshd@7-10.0.0.64:22-10.0.0.1:50250.service: Deactivated successfully.
Jan 20 15:00:25.841516 systemd[1]: session-9.scope: Deactivated successfully.
Jan 20 15:00:25.845016 systemd-logind[1631]: Session 9 logged out. Waiting for processes to exit.
Jan 20 15:00:25.849473 systemd-logind[1631]: Removed session 9.
Jan 20 15:00:30.851514 systemd[1]: Started sshd@8-10.0.0.64:22-10.0.0.1:50262.service - OpenSSH per-connection server daemon (10.0.0.1:50262).
Jan 20 15:00:30.950590 sshd[3970]: Accepted publickey for core from 10.0.0.1 port 50262 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:00:30.953007 sshd-session[3970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:00:30.962385 systemd-logind[1631]: New session 10 of user core.
Jan 20 15:00:30.971495 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 20 15:00:31.149461 sshd[3974]: Connection closed by 10.0.0.1 port 50262
Jan 20 15:00:31.150350 sshd-session[3970]: pam_unix(sshd:session): session closed for user core
Jan 20 15:00:31.162940 systemd[1]: sshd@8-10.0.0.64:22-10.0.0.1:50262.service: Deactivated successfully.
Jan 20 15:00:31.166008 systemd[1]: session-10.scope: Deactivated successfully.
Jan 20 15:00:31.168365 systemd-logind[1631]: Session 10 logged out. Waiting for processes to exit.
Jan 20 15:00:31.173440 systemd[1]: Started sshd@9-10.0.0.64:22-10.0.0.1:50270.service - OpenSSH per-connection server daemon (10.0.0.1:50270).
Jan 20 15:00:31.176005 systemd-logind[1631]: Removed session 10.
Jan 20 15:00:31.256832 sshd[3990]: Accepted publickey for core from 10.0.0.1 port 50270 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:00:31.260375 sshd-session[3990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:00:31.273196 systemd-logind[1631]: New session 11 of user core.
Jan 20 15:00:31.282134 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 20 15:00:31.512516 sshd[3994]: Connection closed by 10.0.0.1 port 50270
Jan 20 15:00:31.514087 sshd-session[3990]: pam_unix(sshd:session): session closed for user core
Jan 20 15:00:31.529935 systemd[1]: sshd@9-10.0.0.64:22-10.0.0.1:50270.service: Deactivated successfully.
Jan 20 15:00:31.535608 systemd[1]: session-11.scope: Deactivated successfully.
Jan 20 15:00:31.541951 systemd-logind[1631]: Session 11 logged out. Waiting for processes to exit.
Jan 20 15:00:31.549331 systemd[1]: Started sshd@10-10.0.0.64:22-10.0.0.1:50284.service - OpenSSH per-connection server daemon (10.0.0.1:50284).
Jan 20 15:00:31.552014 systemd-logind[1631]: Removed session 11.
Jan 20 15:00:31.638414 sshd[4006]: Accepted publickey for core from 10.0.0.1 port 50284 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:00:31.641386 sshd-session[4006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:00:31.653283 systemd-logind[1631]: New session 12 of user core.
Jan 20 15:00:31.661295 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 20 15:00:31.834331 sshd[4010]: Connection closed by 10.0.0.1 port 50284
Jan 20 15:00:31.835002 sshd-session[4006]: pam_unix(sshd:session): session closed for user core
Jan 20 15:00:31.844811 systemd[1]: sshd@10-10.0.0.64:22-10.0.0.1:50284.service: Deactivated successfully.
Jan 20 15:00:31.849386 systemd[1]: session-12.scope: Deactivated successfully.
Jan 20 15:00:31.851507 systemd-logind[1631]: Session 12 logged out. Waiting for processes to exit.
Jan 20 15:00:31.854887 systemd-logind[1631]: Removed session 12.
Jan 20 15:00:35.923365 kubelet[2848]: E0120 15:00:35.920496 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 15:00:37.480336 systemd[1]: Started sshd@11-10.0.0.64:22-10.0.0.1:39874.service - OpenSSH per-connection server daemon (10.0.0.1:39874).
Jan 20 15:00:37.489959 kubelet[2848]: E0120 15:00:37.488921 2848 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.188s"
Jan 20 15:00:37.729162 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 39874 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:00:37.733487 sshd-session[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:00:37.787364 systemd-logind[1631]: New session 13 of user core.
Jan 20 15:00:37.795353 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 20 15:00:38.815440 sshd[4047]: Connection closed by 10.0.0.1 port 39874
Jan 20 15:00:38.819536 sshd-session[4043]: pam_unix(sshd:session): session closed for user core
Jan 20 15:00:38.871626 systemd[1]: sshd@11-10.0.0.64:22-10.0.0.1:39874.service: Deactivated successfully.
Jan 20 15:00:38.882420 systemd[1]: session-13.scope: Deactivated successfully.
Jan 20 15:00:39.028241 systemd-logind[1631]: Session 13 logged out. Waiting for processes to exit.
Jan 20 15:00:39.220055 systemd-logind[1631]: Removed session 13.
Jan 20 15:00:39.383159 kubelet[2848]: E0120 15:00:39.382147 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 15:00:44.530533 systemd[1]: Started sshd@12-10.0.0.64:22-10.0.0.1:35476.service - OpenSSH per-connection server daemon (10.0.0.1:35476).
Jan 20 15:00:54.608998 systemd[1]: cri-containerd-c93847a082a18abe3ccbb0658e3f828a629233e4b71ec2822bfbd5a34a20405d.scope: Deactivated successfully.
Jan 20 15:00:54.769723 systemd[1]: cri-containerd-c93847a082a18abe3ccbb0658e3f828a629233e4b71ec2822bfbd5a34a20405d.scope: Consumed 10.750s CPU time, 57.1M memory peak, 64K read from disk.
Jan 20 15:00:55.100884 containerd[1676]: time="2026-01-20T15:00:55.096597187Z" level=info msg="received container exit event container_id:\"c93847a082a18abe3ccbb0658e3f828a629233e4b71ec2822bfbd5a34a20405d\" id:\"c93847a082a18abe3ccbb0658e3f828a629233e4b71ec2822bfbd5a34a20405d\" pid:2656 exit_status:1 exited_at:{seconds:1768921255 nanos:93265355}"
Jan 20 15:00:55.117267 kubelet[2848]: E0120 15:00:55.117073 2848 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.811s"
Jan 20 15:00:55.218610 sshd[4083]: Accepted publickey for core from 10.0.0.1 port 35476 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:00:55.223545 sshd-session[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:00:55.238320 kubelet[2848]: E0120 15:00:55.236630 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 15:00:55.248814 kubelet[2848]: E0120 15:00:55.246514 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 15:00:55.256109 systemd[1]: cri-containerd-d84deb4a2bcc6a140d6c61254235d9ae1a96221c8ddb5de8e4e23668ecacf78e.scope: Deactivated successfully.
Jan 20 15:00:55.257830 systemd[1]: cri-containerd-d84deb4a2bcc6a140d6c61254235d9ae1a96221c8ddb5de8e4e23668ecacf78e.scope: Consumed 5.010s CPU time, 19.2M memory peak.
Jan 20 15:00:55.259077 systemd-logind[1631]: New session 14 of user core.
Jan 20 15:00:55.265020 containerd[1676]: time="2026-01-20T15:00:55.264039853Z" level=info msg="received container exit event container_id:\"d84deb4a2bcc6a140d6c61254235d9ae1a96221c8ddb5de8e4e23668ecacf78e\" id:\"d84deb4a2bcc6a140d6c61254235d9ae1a96221c8ddb5de8e4e23668ecacf78e\" pid:2703 exit_status:1 exited_at:{seconds:1768921255 nanos:260181128}"
Jan 20 15:00:55.269973 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 20 15:00:55.411983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c93847a082a18abe3ccbb0658e3f828a629233e4b71ec2822bfbd5a34a20405d-rootfs.mount: Deactivated successfully.
Jan 20 15:00:55.476926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d84deb4a2bcc6a140d6c61254235d9ae1a96221c8ddb5de8e4e23668ecacf78e-rootfs.mount: Deactivated successfully.
Jan 20 15:00:55.524956 sshd[4115]: Connection closed by 10.0.0.1 port 35476
Jan 20 15:00:55.525499 sshd-session[4083]: pam_unix(sshd:session): session closed for user core
Jan 20 15:00:55.534720 systemd[1]: sshd@12-10.0.0.64:22-10.0.0.1:35476.service: Deactivated successfully.
Jan 20 15:00:55.536142 systemd[1]: sshd@12-10.0.0.64:22-10.0.0.1:35476.service: Consumed 2.004s CPU time, 4.2M memory peak.
Jan 20 15:00:55.539510 systemd[1]: session-14.scope: Deactivated successfully.
Jan 20 15:00:55.542454 systemd-logind[1631]: Session 14 logged out. Waiting for processes to exit.
Jan 20 15:00:55.545213 systemd-logind[1631]: Removed session 14.
Jan 20 15:00:56.124968 kubelet[2848]: I0120 15:00:56.124477 2848 scope.go:117] "RemoveContainer" containerID="c93847a082a18abe3ccbb0658e3f828a629233e4b71ec2822bfbd5a34a20405d"
Jan 20 15:00:56.124968 kubelet[2848]: E0120 15:00:56.124842 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 15:00:56.128623 kubelet[2848]: I0120 15:00:56.128517 2848 scope.go:117] "RemoveContainer" containerID="d84deb4a2bcc6a140d6c61254235d9ae1a96221c8ddb5de8e4e23668ecacf78e"
Jan 20 15:00:56.128931 kubelet[2848]: E0120 15:00:56.128639 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 15:00:56.133021 containerd[1676]: time="2026-01-20T15:00:56.132368685Z" level=info msg="CreateContainer within sandbox \"1474fae6a9ce0a752ac81e84be56dbf5fbf040e1f22b61d5a1118457a6eabd87\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 20 15:00:56.137082 containerd[1676]: time="2026-01-20T15:00:56.136911589Z" level=info msg="CreateContainer within sandbox \"d1596f8d15baf7b59ad4285d3629e1506c6284fcc506fafdaad540c5236f2229\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 20 15:00:56.171251 containerd[1676]: time="2026-01-20T15:00:56.171083071Z" level=info msg="Container ffe45ec03b47920ea2224e5d2e8a06480c9f8ae4e411ede3ae16aa8cebd2bc1b: CDI devices from CRI Config.CDIDevices: []"
Jan 20 15:00:56.174967 containerd[1676]: time="2026-01-20T15:00:56.174061892Z" level=info msg="Container da5d2fe2c1333e094914cc9690fd15d63c271e0731a73aeaedc3847d050d2e1f: CDI devices from CRI Config.CDIDevices: []"
Jan 20 15:00:56.175244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3938844580.mount: Deactivated successfully.
Jan 20 15:00:56.191435 containerd[1676]: time="2026-01-20T15:00:56.191166286Z" level=info msg="CreateContainer within sandbox \"d1596f8d15baf7b59ad4285d3629e1506c6284fcc506fafdaad540c5236f2229\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ffe45ec03b47920ea2224e5d2e8a06480c9f8ae4e411ede3ae16aa8cebd2bc1b\""
Jan 20 15:00:56.194596 containerd[1676]: time="2026-01-20T15:00:56.194272624Z" level=info msg="StartContainer for \"ffe45ec03b47920ea2224e5d2e8a06480c9f8ae4e411ede3ae16aa8cebd2bc1b\""
Jan 20 15:00:56.197421 containerd[1676]: time="2026-01-20T15:00:56.197391697Z" level=info msg="connecting to shim ffe45ec03b47920ea2224e5d2e8a06480c9f8ae4e411ede3ae16aa8cebd2bc1b" address="unix:///run/containerd/s/99a98feb2726aea94177ef0641c47dc08619da8f6de5e100275e2eca8b952812" protocol=ttrpc version=3
Jan 20 15:00:56.204092 containerd[1676]: time="2026-01-20T15:00:56.203915471Z" level=info msg="CreateContainer within sandbox \"1474fae6a9ce0a752ac81e84be56dbf5fbf040e1f22b61d5a1118457a6eabd87\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"da5d2fe2c1333e094914cc9690fd15d63c271e0731a73aeaedc3847d050d2e1f\""
Jan 20 15:00:56.206053 containerd[1676]: time="2026-01-20T15:00:56.205936609Z" level=info msg="StartContainer for \"da5d2fe2c1333e094914cc9690fd15d63c271e0731a73aeaedc3847d050d2e1f\""
Jan 20 15:00:56.209987 containerd[1676]: time="2026-01-20T15:00:56.209937364Z" level=info msg="connecting to shim da5d2fe2c1333e094914cc9690fd15d63c271e0731a73aeaedc3847d050d2e1f" address="unix:///run/containerd/s/c79fc53e6cf9e70c0fd61b5d8f8bf78c0893bdacd34268b2fa126c7d16598279" protocol=ttrpc version=3
Jan 20 15:00:56.334061 kubelet[2848]: E0120 15:00:56.333815 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 15:00:56.361439 systemd[1]: Started cri-containerd-ffe45ec03b47920ea2224e5d2e8a06480c9f8ae4e411ede3ae16aa8cebd2bc1b.scope - libcontainer container ffe45ec03b47920ea2224e5d2e8a06480c9f8ae4e411ede3ae16aa8cebd2bc1b.
Jan 20 15:00:56.377237 systemd[1]: Started cri-containerd-da5d2fe2c1333e094914cc9690fd15d63c271e0731a73aeaedc3847d050d2e1f.scope - libcontainer container da5d2fe2c1333e094914cc9690fd15d63c271e0731a73aeaedc3847d050d2e1f.
Jan 20 15:00:56.499175 containerd[1676]: time="2026-01-20T15:00:56.499010486Z" level=info msg="StartContainer for \"ffe45ec03b47920ea2224e5d2e8a06480c9f8ae4e411ede3ae16aa8cebd2bc1b\" returns successfully"
Jan 20 15:00:56.562579 containerd[1676]: time="2026-01-20T15:00:56.562257533Z" level=info msg="StartContainer for \"da5d2fe2c1333e094914cc9690fd15d63c271e0731a73aeaedc3847d050d2e1f\" returns successfully"
Jan 20 15:00:57.159492 kubelet[2848]: E0120 15:00:57.159294 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 15:00:57.165398 kubelet[2848]: E0120 15:00:57.165313 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 15:00:58.171503 kubelet[2848]: E0120 15:00:58.171383 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 15:00:59.175099 kubelet[2848]: E0120 15:00:59.174961 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 15:00:59.289146 kubelet[2848]: E0120 15:00:59.288891 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 15:01:00.553742 systemd[1]: Started sshd@13-10.0.0.64:22-10.0.0.1:41454.service - OpenSSH per-connection server daemon (10.0.0.1:41454).
Jan 20 15:01:00.648501 sshd[4239]: Accepted publickey for core from 10.0.0.1 port 41454 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:01:00.650984 sshd-session[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:01:00.658072 systemd-logind[1631]: New session 15 of user core.
Jan 20 15:01:00.669058 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 20 15:01:00.795028 sshd[4243]: Connection closed by 10.0.0.1 port 41454
Jan 20 15:01:00.795407 sshd-session[4239]: pam_unix(sshd:session): session closed for user core
Jan 20 15:01:00.801094 systemd[1]: sshd@13-10.0.0.64:22-10.0.0.1:41454.service: Deactivated successfully.
Jan 20 15:01:00.804060 systemd[1]: session-15.scope: Deactivated successfully.
Jan 20 15:01:00.805579 systemd-logind[1631]: Session 15 logged out. Waiting for processes to exit.
Jan 20 15:01:00.807490 systemd-logind[1631]: Removed session 15.
Jan 20 15:01:01.027026 kubelet[2848]: E0120 15:01:01.026870 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 15:01:05.825583 systemd[1]: Started sshd@14-10.0.0.64:22-10.0.0.1:36104.service - OpenSSH per-connection server daemon (10.0.0.1:36104).
Jan 20 15:01:05.904526 sshd[4279]: Accepted publickey for core from 10.0.0.1 port 36104 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:01:05.908033 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:01:05.918054 systemd-logind[1631]: New session 16 of user core.
Jan 20 15:01:05.931031 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 20 15:01:06.069314 sshd[4283]: Connection closed by 10.0.0.1 port 36104
Jan 20 15:01:06.070114 sshd-session[4279]: pam_unix(sshd:session): session closed for user core
Jan 20 15:01:06.076237 systemd[1]: sshd@14-10.0.0.64:22-10.0.0.1:36104.service: Deactivated successfully.
Jan 20 15:01:06.079597 systemd[1]: session-16.scope: Deactivated successfully.
Jan 20 15:01:06.081926 systemd-logind[1631]: Session 16 logged out. Waiting for processes to exit.
Jan 20 15:01:06.084503 systemd-logind[1631]: Removed session 16.
Jan 20 15:01:06.191625 kubelet[2848]: E0120 15:01:06.191466 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 15:01:06.689100 kubelet[2848]: E0120 15:01:06.688829 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 15:01:07.288752 kubelet[2848]: E0120 15:01:07.288499 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 15:01:11.034344 kubelet[2848]: E0120 15:01:11.033960 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 15:01:11.091876 systemd[1]: Started sshd@15-10.0.0.64:22-10.0.0.1:36110.service - OpenSSH per-connection server daemon (10.0.0.1:36110).
Jan 20 15:01:11.159759 sshd[4316]: Accepted publickey for core from 10.0.0.1 port 36110 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:01:11.162255 sshd-session[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:01:11.171224 systemd-logind[1631]: New session 17 of user core.
Jan 20 15:01:11.179116 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 20 15:01:11.292395 sshd[4320]: Connection closed by 10.0.0.1 port 36110
Jan 20 15:01:11.293011 sshd-session[4316]: pam_unix(sshd:session): session closed for user core
Jan 20 15:01:11.300777 systemd[1]: sshd@15-10.0.0.64:22-10.0.0.1:36110.service: Deactivated successfully.
Jan 20 15:01:11.304443 systemd[1]: session-17.scope: Deactivated successfully.
Jan 20 15:01:11.306526 systemd-logind[1631]: Session 17 logged out. Waiting for processes to exit.
Jan 20 15:01:11.309426 systemd-logind[1631]: Removed session 17.
Jan 20 15:01:11.703579 kubelet[2848]: E0120 15:01:11.703464 2848 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 15:01:16.313310 systemd[1]: Started sshd@16-10.0.0.64:22-10.0.0.1:46484.service - OpenSSH per-connection server daemon (10.0.0.1:46484).
Jan 20 15:01:16.407195 sshd[4354]: Accepted publickey for core from 10.0.0.1 port 46484 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:01:16.410160 sshd-session[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:01:16.418196 systemd-logind[1631]: New session 18 of user core.
Jan 20 15:01:16.429026 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 20 15:01:16.540943 sshd[4358]: Connection closed by 10.0.0.1 port 46484
Jan 20 15:01:16.541468 sshd-session[4354]: pam_unix(sshd:session): session closed for user core
Jan 20 15:01:16.548873 systemd[1]: sshd@16-10.0.0.64:22-10.0.0.1:46484.service: Deactivated successfully.
Jan 20 15:01:16.551607 systemd[1]: session-18.scope: Deactivated successfully.
Jan 20 15:01:16.553488 systemd-logind[1631]: Session 18 logged out. Waiting for processes to exit.
Jan 20 15:01:16.555338 systemd-logind[1631]: Removed session 18.
Jan 20 15:01:21.559541 systemd[1]: Started sshd@17-10.0.0.64:22-10.0.0.1:46496.service - OpenSSH per-connection server daemon (10.0.0.1:46496).
Jan 20 15:01:21.640260 sshd[4391]: Accepted publickey for core from 10.0.0.1 port 46496 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:01:21.643045 sshd-session[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:01:21.652189 systemd-logind[1631]: New session 19 of user core.
Jan 20 15:01:21.665001 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 20 15:01:21.803468 sshd[4395]: Connection closed by 10.0.0.1 port 46496
Jan 20 15:01:21.804298 sshd-session[4391]: pam_unix(sshd:session): session closed for user core
Jan 20 15:01:21.819547 systemd[1]: sshd@17-10.0.0.64:22-10.0.0.1:46496.service: Deactivated successfully.
Jan 20 15:01:21.823233 systemd[1]: session-19.scope: Deactivated successfully.
Jan 20 15:01:21.825510 systemd-logind[1631]: Session 19 logged out. Waiting for processes to exit.
Jan 20 15:01:21.830050 systemd[1]: Started sshd@18-10.0.0.64:22-10.0.0.1:46512.service - OpenSSH per-connection server daemon (10.0.0.1:46512).
Jan 20 15:01:21.831867 systemd-logind[1631]: Removed session 19.
Jan 20 15:01:21.918370 sshd[4409]: Accepted publickey for core from 10.0.0.1 port 46512 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:01:21.921485 sshd-session[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:01:21.930321 systemd-logind[1631]: New session 20 of user core.
Jan 20 15:01:21.939025 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 20 15:01:22.304140 sshd[4413]: Connection closed by 10.0.0.1 port 46512
Jan 20 15:01:22.304868 sshd-session[4409]: pam_unix(sshd:session): session closed for user core
Jan 20 15:01:22.335899 systemd[1]: sshd@18-10.0.0.64:22-10.0.0.1:46512.service: Deactivated successfully.
Jan 20 15:01:22.338605 systemd[1]: session-20.scope: Deactivated successfully.
Jan 20 15:01:22.340226 systemd-logind[1631]: Session 20 logged out. Waiting for processes to exit.
Jan 20 15:01:22.344961 systemd[1]: Started sshd@19-10.0.0.64:22-10.0.0.1:46522.service - OpenSSH per-connection server daemon (10.0.0.1:46522).
Jan 20 15:01:22.346302 systemd-logind[1631]: Removed session 20.
Jan 20 15:01:22.452944 sshd[4425]: Accepted publickey for core from 10.0.0.1 port 46522 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:01:22.455754 sshd-session[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:01:22.463994 systemd-logind[1631]: New session 21 of user core.
Jan 20 15:01:22.473026 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 20 15:01:23.206444 sshd[4429]: Connection closed by 10.0.0.1 port 46522
Jan 20 15:01:23.207076 sshd-session[4425]: pam_unix(sshd:session): session closed for user core
Jan 20 15:01:23.217886 systemd[1]: sshd@19-10.0.0.64:22-10.0.0.1:46522.service: Deactivated successfully.
Jan 20 15:01:23.222135 systemd[1]: session-21.scope: Deactivated successfully.
Jan 20 15:01:23.224397 systemd-logind[1631]: Session 21 logged out. Waiting for processes to exit.
Jan 20 15:01:23.231923 systemd-logind[1631]: Removed session 21.
Jan 20 15:01:23.235296 systemd[1]: Started sshd@20-10.0.0.64:22-10.0.0.1:58574.service - OpenSSH per-connection server daemon (10.0.0.1:58574).
Jan 20 15:01:23.324232 sshd[4448]: Accepted publickey for core from 10.0.0.1 port 58574 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:01:23.327198 sshd-session[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:01:23.336438 systemd-logind[1631]: New session 22 of user core.
Jan 20 15:01:23.342924 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 20 15:01:23.611406 sshd[4452]: Connection closed by 10.0.0.1 port 58574
Jan 20 15:01:23.614254 sshd-session[4448]: pam_unix(sshd:session): session closed for user core
Jan 20 15:01:23.630203 systemd[1]: Started sshd@21-10.0.0.64:22-10.0.0.1:58586.service - OpenSSH per-connection server daemon (10.0.0.1:58586).
Jan 20 15:01:23.631309 systemd[1]: sshd@20-10.0.0.64:22-10.0.0.1:58574.service: Deactivated successfully.
Jan 20 15:01:23.637223 systemd[1]: session-22.scope: Deactivated successfully.
Jan 20 15:01:23.644895 systemd-logind[1631]: Session 22 logged out. Waiting for processes to exit.
Jan 20 15:01:23.646627 systemd-logind[1631]: Removed session 22.
Jan 20 15:01:23.721099 sshd[4461]: Accepted publickey for core from 10.0.0.1 port 58586 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:01:23.723468 sshd-session[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:01:23.735595 systemd-logind[1631]: New session 23 of user core.
Jan 20 15:01:23.753374 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 20 15:01:23.896234 sshd[4468]: Connection closed by 10.0.0.1 port 58586
Jan 20 15:01:23.896509 sshd-session[4461]: pam_unix(sshd:session): session closed for user core
Jan 20 15:01:23.905234 systemd[1]: sshd@21-10.0.0.64:22-10.0.0.1:58586.service: Deactivated successfully.
Jan 20 15:01:23.908483 systemd[1]: session-23.scope: Deactivated successfully.
Jan 20 15:01:23.910896 systemd-logind[1631]: Session 23 logged out. Waiting for processes to exit.
Jan 20 15:01:23.913426 systemd-logind[1631]: Removed session 23.
Jan 20 15:01:27.152060 kernel: clocksource: timekeeping watchdog on CPU0: Marking clocksource 'tsc' as unstable because the skew is too large:
Jan 20 15:01:27.152523 kernel: clocksource: 'kvm-clock' wd_nsec: 565832699 wd_now: 3772bc4ea5 wd_last: 37510262aa mask: ffffffffffffffff
Jan 20 15:01:27.152555 kernel: clocksource: 'tsc' cs_nsec: 566508618 cs_now: 879af9e9a5 cs_last: 874867173e mask: ffffffffffffffff
Jan 20 15:01:27.152577 kernel: clocksource: Clocksource 'tsc' skewed 675919 ns (0 ms) over watchdog 'kvm-clock' interval of 565832699 ns (565 ms)
Jan 20 15:01:27.152601 kernel: clocksource: 'kvm-clock' (not 'tsc') is current clocksource.
Jan 20 15:01:27.152917 kernel: tsc: Marking TSC unstable due to clocksource watchdog
Jan 20 15:01:28.943047 systemd[1]: Started sshd@22-10.0.0.64:22-10.0.0.1:58600.service - OpenSSH per-connection server daemon (10.0.0.1:58600).
Jan 20 15:01:29.049763 sshd[4507]: Accepted publickey for core from 10.0.0.1 port 58600 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:01:29.053405 sshd-session[4507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:01:29.063290 systemd-logind[1631]: New session 24 of user core.
Jan 20 15:01:29.080232 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 20 15:01:29.221066 sshd[4511]: Connection closed by 10.0.0.1 port 58600
Jan 20 15:01:29.221266 sshd-session[4507]: pam_unix(sshd:session): session closed for user core
Jan 20 15:01:29.229084 systemd[1]: sshd@22-10.0.0.64:22-10.0.0.1:58600.service: Deactivated successfully.
Jan 20 15:01:29.232142 systemd[1]: session-24.scope: Deactivated successfully.
Jan 20 15:01:29.234745 systemd-logind[1631]: Session 24 logged out. Waiting for processes to exit.
Jan 20 15:01:29.236986 systemd-logind[1631]: Removed session 24.
Jan 20 15:01:34.263180 systemd[1]: Started sshd@23-10.0.0.64:22-10.0.0.1:39958.service - OpenSSH per-connection server daemon (10.0.0.1:39958).
Jan 20 15:01:34.368540 sshd[4546]: Accepted publickey for core from 10.0.0.1 port 39958 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:01:34.374118 sshd-session[4546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:01:34.384786 systemd-logind[1631]: New session 25 of user core.
Jan 20 15:01:34.395208 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 20 15:01:34.568893 sshd[4556]: Connection closed by 10.0.0.1 port 39958
Jan 20 15:01:34.569316 sshd-session[4546]: pam_unix(sshd:session): session closed for user core
Jan 20 15:01:34.576409 systemd[1]: sshd@23-10.0.0.64:22-10.0.0.1:39958.service: Deactivated successfully.
Jan 20 15:01:34.581057 systemd[1]: session-25.scope: Deactivated successfully.
Jan 20 15:01:34.585069 systemd-logind[1631]: Session 25 logged out. Waiting for processes to exit.
Jan 20 15:01:34.588245 systemd-logind[1631]: Removed session 25.
Jan 20 15:01:39.591103 systemd[1]: Started sshd@24-10.0.0.64:22-10.0.0.1:39960.service - OpenSSH per-connection server daemon (10.0.0.1:39960).
Jan 20 15:01:39.701121 sshd[4589]: Accepted publickey for core from 10.0.0.1 port 39960 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:01:39.704183 sshd-session[4589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:01:39.713915 systemd-logind[1631]: New session 26 of user core.
Jan 20 15:01:39.724133 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 20 15:01:39.917171 sshd[4593]: Connection closed by 10.0.0.1 port 39960
Jan 20 15:01:39.917587 sshd-session[4589]: pam_unix(sshd:session): session closed for user core
Jan 20 15:01:39.926123 systemd[1]: sshd@24-10.0.0.64:22-10.0.0.1:39960.service: Deactivated successfully.
Jan 20 15:01:39.929260 systemd[1]: session-26.scope: Deactivated successfully.
Jan 20 15:01:39.933137 systemd-logind[1631]: Session 26 logged out. Waiting for processes to exit.
Jan 20 15:01:39.935537 systemd-logind[1631]: Removed session 26.
Jan 20 15:01:44.973288 systemd[1]: Started sshd@25-10.0.0.64:22-10.0.0.1:34102.service - OpenSSH per-connection server daemon (10.0.0.1:34102).
Jan 20 15:01:45.090499 sshd[4628]: Accepted publickey for core from 10.0.0.1 port 34102 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:01:45.093592 sshd-session[4628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:01:45.108133 systemd-logind[1631]: New session 27 of user core.
Jan 20 15:01:45.121095 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 20 15:01:45.373811 sshd[4632]: Connection closed by 10.0.0.1 port 34102
Jan 20 15:01:45.374475 sshd-session[4628]: pam_unix(sshd:session): session closed for user core
Jan 20 15:01:45.382811 systemd[1]: sshd@25-10.0.0.64:22-10.0.0.1:34102.service: Deactivated successfully.
Jan 20 15:01:45.386597 systemd[1]: session-27.scope: Deactivated successfully.
Jan 20 15:01:45.390387 systemd-logind[1631]: Session 27 logged out. Waiting for processes to exit.
Jan 20 15:01:45.394118 systemd-logind[1631]: Removed session 27.
Jan 20 15:01:50.393000 systemd[1]: Started sshd@26-10.0.0.64:22-10.0.0.1:34110.service - OpenSSH per-connection server daemon (10.0.0.1:34110).
Jan 20 15:01:50.496922 sshd[4666]: Accepted publickey for core from 10.0.0.1 port 34110 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:01:50.500141 sshd-session[4666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:01:50.512323 systemd-logind[1631]: New session 28 of user core.
Jan 20 15:01:50.523191 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 20 15:01:50.699115 sshd[4670]: Connection closed by 10.0.0.1 port 34110
Jan 20 15:01:50.699924 sshd-session[4666]: pam_unix(sshd:session): session closed for user core
Jan 20 15:01:50.708204 systemd[1]: sshd@26-10.0.0.64:22-10.0.0.1:34110.service: Deactivated successfully.
Jan 20 15:01:50.712910 systemd[1]: session-28.scope: Deactivated successfully.
Jan 20 15:01:50.714977 systemd-logind[1631]: Session 28 logged out. Waiting for processes to exit.
Jan 20 15:01:50.718158 systemd-logind[1631]: Removed session 28.
Jan 20 15:01:55.734637 systemd[1]: Started sshd@27-10.0.0.64:22-10.0.0.1:59974.service - OpenSSH per-connection server daemon (10.0.0.1:59974).
Jan 20 15:01:55.879488 sshd[4718]: Accepted publickey for core from 10.0.0.1 port 59974 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:01:55.886567 sshd-session[4718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:01:55.907073 systemd-logind[1631]: New session 29 of user core.
Jan 20 15:01:55.917009 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 20 15:01:56.123943 sshd[4723]: Connection closed by 10.0.0.1 port 59974
Jan 20 15:01:56.126257 sshd-session[4718]: pam_unix(sshd:session): session closed for user core
Jan 20 15:01:56.138600 systemd[1]: sshd@27-10.0.0.64:22-10.0.0.1:59974.service: Deactivated successfully.
Jan 20 15:01:56.144968 systemd[1]: session-29.scope: Deactivated successfully.
Jan 20 15:01:56.147270 systemd-logind[1631]: Session 29 logged out. Waiting for processes to exit.
Jan 20 15:01:56.150277 systemd-logind[1631]: Removed session 29.