Jan 23 18:41:39.580161 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 15:50:57 -00 2026 Jan 23 18:41:39.580188 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ee2a61adbfdca0d8850a6d1564f6a5daa8e67e4645be01ed76a79270fe7c1051 Jan 23 18:41:39.580198 kernel: BIOS-provided physical RAM map: Jan 23 18:41:39.580207 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 23 18:41:39.580213 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 23 18:41:39.580219 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 23 18:41:39.580226 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 23 18:41:39.580232 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 23 18:41:39.580342 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 23 18:41:39.580349 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 23 18:41:39.580355 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Jan 23 18:41:39.580364 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 23 18:41:39.580370 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 23 18:41:39.580377 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 23 18:41:39.580384 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 23 18:41:39.580391 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 23 18:41:39.580400 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jan 23 18:41:39.580406 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jan 23 18:41:39.580413 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jan 23 18:41:39.580419 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jan 23 18:41:39.580426 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 23 18:41:39.580433 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 23 18:41:39.580439 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 23 18:41:39.580446 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 23 18:41:39.580452 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 23 18:41:39.580459 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 23 18:41:39.580468 kernel: NX (Execute Disable) protection: active Jan 23 18:41:39.580474 kernel: APIC: Static calls initialized Jan 23 18:41:39.580481 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Jan 23 18:41:39.580488 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Jan 23 18:41:39.580494 kernel: extended physical RAM map: Jan 23 18:41:39.580501 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 23 18:41:39.580508 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 23 18:41:39.580514 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 23 18:41:39.580521 kernel: 
reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jan 23 18:41:39.580528 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 23 18:41:39.580534 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 23 18:41:39.580543 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 23 18:41:39.580549 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Jan 23 18:41:39.580556 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Jan 23 18:41:39.580566 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Jan 23 18:41:39.580575 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Jan 23 18:41:39.580582 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Jan 23 18:41:39.580589 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 23 18:41:39.580596 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 23 18:41:39.580603 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 23 18:41:39.580610 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 23 18:41:39.580618 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 23 18:41:39.580625 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jan 23 18:41:39.580631 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jan 23 18:41:39.580640 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jan 23 18:41:39.580647 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jan 23 18:41:39.580654 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 23 18:41:39.580661 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 23 18:41:39.580668 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 23 18:41:39.580675 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 23 18:41:39.580682 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 23 18:41:39.580689 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 23 18:41:39.580696 kernel: efi: EFI v2.7 by EDK II Jan 23 18:41:39.580703 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Jan 23 18:41:39.580710 kernel: random: crng init done Jan 23 18:41:39.580720 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jan 23 18:41:39.580727 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jan 23 18:41:39.580733 kernel: secureboot: Secure boot disabled Jan 23 18:41:39.580741 kernel: SMBIOS 2.8 present. 
Jan 23 18:41:39.580748 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jan 23 18:41:39.580755 kernel: DMI: Memory slots populated: 1/1 Jan 23 18:41:39.580761 kernel: Hypervisor detected: KVM Jan 23 18:41:39.580768 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jan 23 18:41:39.580775 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 23 18:41:39.580782 kernel: kvm-clock: using sched offset of 11221456152 cycles Jan 23 18:41:39.580790 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 23 18:41:39.580800 kernel: tsc: Detected 2445.426 MHz processor Jan 23 18:41:39.580807 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 23 18:41:39.580815 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 23 18:41:39.580822 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jan 23 18:41:39.580829 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 23 18:41:39.580837 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 23 18:41:39.580844 kernel: Using GB pages for direct mapping Jan 23 18:41:39.580853 kernel: ACPI: Early table checksum verification disabled Jan 23 18:41:39.580861 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 23 18:41:39.580868 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 23 18:41:39.580875 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:41:39.580882 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:41:39.580889 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 23 18:41:39.580897 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:41:39.580906 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:41:39.580913 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:41:39.581098 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:41:39.581105 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 23 18:41:39.581113 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 23 18:41:39.581120 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jan 23 18:41:39.581128 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 23 18:41:39.581138 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 23 18:41:39.581145 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 23 18:41:39.581152 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 23 18:41:39.581159 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 23 18:41:39.581166 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 23 18:41:39.581173 kernel: No NUMA configuration found Jan 23 18:41:39.581180 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Jan 23 18:41:39.581187 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Jan 23 18:41:39.581197 kernel: Zone ranges: Jan 23 18:41:39.581204 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 23 18:41:39.581211 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Jan 23 18:41:39.581218 kernel: Normal empty Jan 23 18:41:39.581225 kernel: Device empty Jan 23 
18:41:39.581232 kernel: Movable zone start for each node Jan 23 18:41:39.581334 kernel: Early memory node ranges Jan 23 18:41:39.581344 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 23 18:41:39.581351 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 23 18:41:39.581359 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 23 18:41:39.581366 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jan 23 18:41:39.581373 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Jan 23 18:41:39.581380 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Jan 23 18:41:39.581387 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Jan 23 18:41:39.581395 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Jan 23 18:41:39.584606 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Jan 23 18:41:39.584615 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 23 18:41:39.584629 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 23 18:41:39.584639 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 23 18:41:39.584646 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 23 18:41:39.584654 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jan 23 18:41:39.584661 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jan 23 18:41:39.584669 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 23 18:41:39.584676 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jan 23 18:41:39.584686 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Jan 23 18:41:39.584693 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 23 18:41:39.584701 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 23 18:41:39.584708 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 23 18:41:39.584718 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 23 18:41:39.584725 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 23 18:41:39.584733 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 23 18:41:39.584740 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 23 18:41:39.584747 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 23 18:41:39.584755 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 23 18:41:39.584762 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 23 18:41:39.584770 kernel: TSC deadline timer available Jan 23 18:41:39.584779 kernel: CPU topo: Max. logical packages: 1 Jan 23 18:41:39.584787 kernel: CPU topo: Max. logical dies: 1 Jan 23 18:41:39.584794 kernel: CPU topo: Max. dies per package: 1 Jan 23 18:41:39.584801 kernel: CPU topo: Max. threads per core: 1 Jan 23 18:41:39.584808 kernel: CPU topo: Num. cores per package: 4 Jan 23 18:41:39.584815 kernel: CPU topo: Num. 
threads per package: 4 Jan 23 18:41:39.584823 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 23 18:41:39.584831 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 23 18:41:39.584840 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 23 18:41:39.584848 kernel: kvm-guest: setup PV sched yield Jan 23 18:41:39.584855 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jan 23 18:41:39.584863 kernel: Booting paravirtualized kernel on KVM Jan 23 18:41:39.584870 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 23 18:41:39.584878 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 23 18:41:39.584886 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 23 18:41:39.584895 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 23 18:41:39.584903 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 23 18:41:39.584911 kernel: kvm-guest: PV spinlocks enabled Jan 23 18:41:39.585093 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 23 18:41:39.585103 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ee2a61adbfdca0d8850a6d1564f6a5daa8e67e4645be01ed76a79270fe7c1051 Jan 23 18:41:39.585111 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 18:41:39.585122 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 18:41:39.585129 kernel: Fallback order for Node 0: 0 Jan 23 18:41:39.585137 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Jan 23 18:41:39.585144 kernel: Policy zone: DMA32 Jan 23 18:41:39.585152 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 18:41:39.585159 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 23 18:41:39.585167 kernel: ftrace: allocating 40097 entries in 157 pages Jan 23 18:41:39.585176 kernel: ftrace: allocated 157 pages with 5 groups Jan 23 18:41:39.585184 kernel: Dynamic Preempt: voluntary Jan 23 18:41:39.585191 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 18:41:39.585199 kernel: rcu: RCU event tracing is enabled. Jan 23 18:41:39.585207 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 23 18:41:39.585215 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 18:41:39.585223 kernel: Rude variant of Tasks RCU enabled. Jan 23 18:41:39.585230 kernel: Tracing variant of Tasks RCU enabled. Jan 23 18:41:39.585342 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 18:41:39.585350 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 23 18:41:39.585358 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 23 18:41:39.585366 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 23 18:41:39.585374 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 23 18:41:39.585381 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 23 18:41:39.585389 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 23 18:41:39.585398 kernel: Console: colour dummy device 80x25 Jan 23 18:41:39.585406 kernel: printk: legacy console [ttyS0] enabled Jan 23 18:41:39.585413 kernel: ACPI: Core revision 20240827 Jan 23 18:41:39.585421 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 23 18:41:39.585428 kernel: APIC: Switch to symmetric I/O mode setup Jan 23 18:41:39.585436 kernel: x2apic enabled Jan 23 18:41:39.585444 kernel: APIC: Switched APIC routing to: physical x2apic Jan 23 18:41:39.585451 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 23 18:41:39.585461 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 23 18:41:39.585469 kernel: kvm-guest: setup PV IPIs Jan 23 18:41:39.585477 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 23 18:41:39.585484 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 23 18:41:39.585492 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Jan 23 18:41:39.585499 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 23 18:41:39.585507 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 23 18:41:39.585517 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 23 18:41:39.585524 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 23 18:41:39.585532 kernel: Spectre V2 : Mitigation: Retpolines Jan 23 18:41:39.585539 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 23 18:41:39.585547 kernel: Speculative Store Bypass: Vulnerable Jan 23 18:41:39.585554 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 23 18:41:39.585565 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 23 18:41:39.585572 kernel: active return thunk: srso_alias_return_thunk Jan 23 18:41:39.585580 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 23 18:41:39.585588 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 23 18:41:39.585595 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 23 18:41:39.585603 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 23 18:41:39.585610 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 23 18:41:39.585620 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 23 18:41:39.585627 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 23 18:41:39.585635 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 23 18:41:39.585642 kernel: Freeing SMP alternatives memory: 32K Jan 23 18:41:39.585650 kernel: pid_max: default: 32768 minimum: 301 Jan 23 18:41:39.585657 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 23 18:41:39.585665 kernel: landlock: Up and running. Jan 23 18:41:39.585674 kernel: SELinux: Initializing. 
Jan 23 18:41:39.585682 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 18:41:39.585689 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 18:41:39.585697 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 23 18:41:39.585705 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 23 18:41:39.585712 kernel: signal: max sigframe size: 1776 Jan 23 18:41:39.585720 kernel: rcu: Hierarchical SRCU implementation. Jan 23 18:41:39.585729 kernel: rcu: Max phase no-delay instances is 400. Jan 23 18:41:39.585737 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 23 18:41:39.585744 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 23 18:41:39.585752 kernel: smp: Bringing up secondary CPUs ... Jan 23 18:41:39.585759 kernel: smpboot: x86: Booting SMP configuration: Jan 23 18:41:39.585767 kernel: .... node #0, CPUs: #1 #2 #3 Jan 23 18:41:39.585774 kernel: smp: Brought up 1 node, 4 CPUs Jan 23 18:41:39.585782 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 23 18:41:39.585792 kernel: Memory: 2439052K/2565800K available (14336K kernel code, 2445K rwdata, 31636K rodata, 15532K init, 2508K bss, 120812K reserved, 0K cma-reserved) Jan 23 18:41:39.585799 kernel: devtmpfs: initialized Jan 23 18:41:39.585807 kernel: x86/mm: Memory block size: 128MB Jan 23 18:41:39.585815 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 23 18:41:39.585822 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 23 18:41:39.585829 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jan 23 18:41:39.585839 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 23 18:41:39.585846 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Jan 23 18:41:39.585854 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 23 18:41:39.585861 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 18:41:39.585869 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 23 18:41:39.585877 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 18:41:39.585884 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 18:41:39.585894 kernel: audit: initializing netlink subsys (disabled) Jan 23 18:41:39.585901 kernel: audit: type=2000 audit(1769193685.875:1): state=initialized audit_enabled=0 res=1 Jan 23 18:41:39.585909 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 18:41:39.586090 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 23 18:41:39.586099 kernel: cpuidle: using governor menu Jan 23 18:41:39.586106 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 18:41:39.586114 kernel: dca service started, version 1.12.1 Jan 23 18:41:39.586122 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jan 23 18:41:39.586132 kernel: PCI: Using configuration type 1 for base access Jan 23 18:41:39.586140 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 23 18:41:39.586147 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 18:41:39.586155 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 18:41:39.586162 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 18:41:39.586170 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 18:41:39.586177 kernel: ACPI: Added _OSI(Module Device) Jan 23 18:41:39.586187 kernel: ACPI: Added _OSI(Processor Device) Jan 23 18:41:39.586194 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 18:41:39.586202 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 18:41:39.586209 kernel: ACPI: Interpreter enabled Jan 23 18:41:39.586217 kernel: ACPI: PM: (supports S0 S3 S5) Jan 23 18:41:39.586224 kernel: ACPI: Using IOAPIC for interrupt routing Jan 23 18:41:39.586232 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 23 18:41:39.586345 kernel: PCI: Using E820 reservations for host bridge windows Jan 23 18:41:39.586353 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 23 18:41:39.586360 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 23 18:41:39.586719 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 23 18:41:39.586911 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 23 18:41:39.587631 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 23 18:41:39.587646 kernel: PCI host bridge to bus 0000:00 Jan 23 18:41:39.588523 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 23 18:41:39.588692 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 23 18:41:39.588850 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 23 18:41:39.589689 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jan 23 18:41:39.589858 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jan 23 18:41:39.590409 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jan 23 18:41:39.590569 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 23 18:41:39.590762 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 23 18:41:39.591136 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 23 18:41:39.591429 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Jan 23 18:41:39.591605 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Jan 23 18:41:39.591772 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jan 23 18:41:39.592444 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 23 18:41:39.592615 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 14648 usecs Jan 23 18:41:39.592792 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 23 18:41:39.593232 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Jan 23 18:41:39.593518 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Jan 23 18:41:39.593688 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Jan 23 18:41:39.593867 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 23 18:41:39.594229 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Jan 23 18:41:39.594515 kernel: pci 0000:00:03.0: BAR 1 [mem 
0xc1042000-0xc1042fff] Jan 23 18:41:39.594688 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Jan 23 18:41:39.594864 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 23 18:41:39.595223 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Jan 23 18:41:39.595511 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Jan 23 18:41:39.595682 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Jan 23 18:41:39.595850 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Jan 23 18:41:39.596222 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 23 18:41:39.596503 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 23 18:41:39.596674 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 14648 usecs Jan 23 18:41:39.596850 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 23 18:41:39.597210 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Jan 23 18:41:39.597492 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Jan 23 18:41:39.597671 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 23 18:41:39.597838 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Jan 23 18:41:39.597850 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 23 18:41:39.597859 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 23 18:41:39.597866 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 23 18:41:39.597878 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 23 18:41:39.597885 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 23 18:41:39.597893 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 23 18:41:39.597901 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 23 18:41:39.597908 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 23 18:41:39.598097 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 23 18:41:39.598107 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 23 18:41:39.598115 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 23 18:41:39.598126 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 23 18:41:39.598133 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 23 18:41:39.598141 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 23 18:41:39.598148 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 23 18:41:39.598156 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 23 18:41:39.598163 kernel: iommu: Default domain type: Translated Jan 23 18:41:39.598171 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 23 18:41:39.598180 kernel: efivars: Registered efivars operations Jan 23 18:41:39.598188 kernel: PCI: Using ACPI for IRQ routing Jan 23 18:41:39.598196 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 23 18:41:39.598204 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 23 18:41:39.598212 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jan 23 18:41:39.598219 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Jan 23 18:41:39.598227 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Jan 23 18:41:39.598332 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Jan 23 18:41:39.598341 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Jan 23 
18:41:39.598348 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Jan 23 18:41:39.598356 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Jan 23 18:41:39.598533 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 23 18:41:39.598699 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 23 18:41:39.598872 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 23 18:41:39.598882 kernel: vgaarb: loaded Jan 23 18:41:39.598890 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 23 18:41:39.598898 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 23 18:41:39.598906 kernel: clocksource: Switched to clocksource kvm-clock Jan 23 18:41:39.598913 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 18:41:39.599108 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 18:41:39.599116 kernel: pnp: PnP ACPI init Jan 23 18:41:39.599421 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jan 23 18:41:39.599435 kernel: pnp: PnP ACPI: found 6 devices Jan 23 18:41:39.599444 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 23 18:41:39.599451 kernel: NET: Registered PF_INET protocol family Jan 23 18:41:39.599459 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 18:41:39.599467 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 18:41:39.599490 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 18:41:39.599499 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 18:41:39.599507 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 18:41:39.599515 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 18:41:39.599523 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 18:41:39.599531 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 18:41:39.599539 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 18:41:39.599549 kernel: NET: Registered PF_XDP protocol family Jan 23 18:41:39.599719 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Jan 23 18:41:39.599887 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Jan 23 18:41:39.600338 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 23 18:41:39.600503 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 23 18:41:39.600659 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 23 18:41:39.600820 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Jan 23 18:41:39.601338 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 23 18:41:39.601502 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Jan 23 18:41:39.601513 kernel: PCI: CLS 0 bytes, default 64 Jan 23 18:41:39.601521 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 23 18:41:39.601529 kernel: Initialise system trusted keyrings Jan 23 18:41:39.601537 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 18:41:39.601549 kernel: Key type asymmetric registered Jan 23 18:41:39.601557 kernel: Asymmetric key parser 'x509' registered Jan 23 18:41:39.601565 kernel: Block layer SCSI generic (bsg) driver version 
0.4 loaded (major 250) Jan 23 18:41:39.601574 kernel: io scheduler mq-deadline registered Jan 23 18:41:39.601582 kernel: io scheduler kyber registered Jan 23 18:41:39.601590 kernel: io scheduler bfq registered Jan 23 18:41:39.601598 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 23 18:41:39.601608 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 23 18:41:39.601618 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 23 18:41:39.601626 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 23 18:41:39.601634 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 18:41:39.601642 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 18:41:39.601652 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 23 18:41:39.601660 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 23 18:41:39.601667 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 23 18:41:39.601841 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 23 18:41:39.601852 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 23 18:41:39.602211 kernel: rtc_cmos 00:04: registered as rtc0 Jan 23 18:41:39.602490 kernel: rtc_cmos 00:04: setting system clock to 2026-01-23T18:41:34 UTC (1769193694) Jan 23 18:41:39.602659 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 23 18:41:39.602669 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 23 18:41:39.602678 kernel: efifb: probing for efifb Jan 23 18:41:39.602685 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jan 23 18:41:39.602693 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 23 18:41:39.602701 kernel: efifb: scrolling: redraw Jan 23 18:41:39.602712 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 23 18:41:39.602720 kernel: Console: switching to colour frame buffer device 160x50 Jan 23 18:41:39.602727 kernel: fb0: EFI VGA frame buffer device Jan 23 18:41:39.602735 kernel: pstore: Using crash dump compression: deflate Jan 23 18:41:39.602743 kernel: pstore: Registered efi_pstore as persistent store backend Jan 23 18:41:39.602751 kernel: NET: Registered PF_INET6 protocol family Jan 23 18:41:39.602758 kernel: Segment Routing with IPv6 Jan 23 18:41:39.602768 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 18:41:39.602776 kernel: NET: Registered PF_PACKET protocol family Jan 23 18:41:39.602783 kernel: Key type dns_resolver registered Jan 23 18:41:39.602791 kernel: IPI shorthand broadcast: enabled Jan 23 18:41:39.602799 kernel: sched_clock: Marking stable (5861080177, 3127608437)->(10242950506, -1254261892) Jan 23 18:41:39.602807 kernel: registered taskstats version 1 Jan 23 18:41:39.602815 kernel: Loading compiled-in X.509 certificates Jan 23 18:41:39.602823 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed4528912f8413ae803010e63385bcf7ed197cf1' Jan 23 18:41:39.602833 kernel: Demotion targets for Node 0: null Jan 23 18:41:39.602840 kernel: Key type .fscrypt registered Jan 23 18:41:39.602849 kernel: Key type fscrypt-provisioning registered Jan 23 18:41:39.602857 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 23 18:41:39.602864 kernel: ima: Allocated hash algorithm: sha1 Jan 23 18:41:39.602872 kernel: ima: No architecture policies found Jan 23 18:41:39.602882 kernel: clk: Disabling unused clocks Jan 23 18:41:39.602889 kernel: Freeing unused kernel image (initmem) memory: 15532K Jan 23 18:41:39.602897 kernel: Write protecting the kernel read-only data: 47104k Jan 23 18:41:39.602905 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Jan 23 18:41:39.602913 kernel: Run /init as init process Jan 23 18:41:39.603115 kernel: with arguments: Jan 23 18:41:39.603124 kernel: /init Jan 23 18:41:39.603132 kernel: with environment: Jan 23 18:41:39.603142 kernel: HOME=/ Jan 23 18:41:39.603150 kernel: TERM=linux Jan 23 18:41:39.603158 kernel: SCSI subsystem initialized Jan 23 18:41:39.603166 kernel: libata version 3.00 loaded. Jan 23 18:41:39.603446 kernel: ahci 0000:00:1f.2: version 3.0 Jan 23 18:41:39.603459 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 23 18:41:39.603626 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 23 18:41:39.603800 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 23 18:41:39.604159 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 23 18:41:39.604471 kernel: scsi host0: ahci Jan 23 18:41:39.604657 kernel: scsi host1: ahci Jan 23 18:41:39.604838 kernel: scsi host2: ahci Jan 23 18:41:39.605220 kernel: scsi host3: ahci Jan 23 18:41:39.605515 kernel: scsi host4: ahci Jan 23 18:41:39.605697 kernel: scsi host5: ahci Jan 23 18:41:39.605709 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Jan 23 18:41:39.605721 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Jan 23 18:41:39.605729 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Jan 23 18:41:39.605740 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Jan 23 18:41:39.605748 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Jan 23 18:41:39.605755 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Jan 23 18:41:39.605763 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 23 18:41:39.605771 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 23 18:41:39.605779 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 23 18:41:39.605787 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 23 18:41:39.605797 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 23 18:41:39.605805 kernel: ata3.00: LPM support broken, forcing max_power Jan 23 18:41:39.605813 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 23 18:41:39.605821 kernel: ata3.00: applying bridge limits Jan 23 18:41:39.605830 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 23 18:41:39.605838 kernel: ata3.00: LPM support broken, forcing max_power Jan 23 18:41:39.605845 kernel: ata3.00: configured for UDMA/100 Jan 23 18:41:39.606397 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 23 18:41:39.606595 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 23 18:41:39.606784 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 23 18:41:39.607148 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 23 18:41:39.607161 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 18:41:39.607169 kernel: GPT:Primary header thinks Alt. 
header is not at the end of the disk. Jan 23 18:41:39.607181 kernel: GPT:16515071 != 27000831 Jan 23 18:41:39.607189 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 18:41:39.607197 kernel: GPT:16515071 != 27000831 Jan 23 18:41:39.607205 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 18:41:39.607213 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 18:41:39.607513 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 23 18:41:39.607525 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 18:41:39.607538 kernel: device-mapper: uevent: version 1.0.3 Jan 23 18:41:39.607546 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 18:41:39.607554 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 23 18:41:39.607563 kernel: raid6: avx2x4 gen() 18165 MB/s Jan 23 18:41:39.607570 kernel: raid6: avx2x2 gen() 30597 MB/s Jan 23 18:41:39.607578 kernel: raid6: avx2x1 gen() 22045 MB/s Jan 23 18:41:39.607586 kernel: raid6: using algorithm avx2x2 gen() 30597 MB/s Jan 23 18:41:39.607596 kernel: raid6: .... xor() 22251 MB/s, rmw enabled Jan 23 18:41:39.607604 kernel: raid6: using avx2x2 recovery algorithm Jan 23 18:41:39.607612 kernel: xor: automatically using best checksumming function avx Jan 23 18:41:39.607620 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 18:41:39.607628 kernel: BTRFS: device fsid ae5f9861-c401-42b4-99c9-2e3fe0b343c2 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (181) Jan 23 18:41:39.607636 kernel: BTRFS info (device dm-0): first mount of filesystem ae5f9861-c401-42b4-99c9-2e3fe0b343c2 Jan 23 18:41:39.607645 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:41:39.607655 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 18:41:39.607662 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 18:41:39.607670 kernel: loop: module loaded Jan 23 18:41:39.607678 kernel: loop0: detected capacity change from 0 to 100560 Jan 23 18:41:39.607686 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 18:41:39.607695 systemd[1]: Successfully made /usr/ read-only. Jan 23 18:41:39.607706 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 18:41:39.607717 systemd[1]: Detected virtualization kvm. Jan 23 18:41:39.607725 systemd[1]: Detected architecture x86-64. Jan 23 18:41:39.607733 systemd[1]: Running in initrd. Jan 23 18:41:39.607741 systemd[1]: No hostname configured, using default hostname. Jan 23 18:41:39.607750 systemd[1]: Hostname set to . Jan 23 18:41:39.607758 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 23 18:41:39.607768 systemd[1]: Queued start job for default target initrd.target. Jan 23 18:41:39.607777 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 23 18:41:39.607785 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:41:39.607795 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 23 18:41:39.607804 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 18:41:39.607813 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:41:39.607824 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 18:41:39.607833 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 18:41:39.607841 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:41:39.607849 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:41:39.607858 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:41:39.607866 systemd[1]: Reached target paths.target - Path Units. Jan 23 18:41:39.607877 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:41:39.607885 systemd[1]: Reached target swap.target - Swaps. Jan 23 18:41:39.607893 systemd[1]: Reached target timers.target - Timer Units. Jan 23 18:41:39.607902 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:41:39.607910 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:41:39.608110 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 23 18:41:39.608120 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 18:41:39.608131 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 18:41:39.608140 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:41:39.608148 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:41:39.608156 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:41:39.608164 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 18:41:39.608173 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 18:41:39.608183 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 18:41:39.608191 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:41:39.608200 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 18:41:39.608209 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 18:41:39.608217 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 18:41:39.608225 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 18:41:39.608233 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 18:41:39.608345 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:41:39.608354 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 18:41:39.608362 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:41:39.608399 systemd-journald[320]: Collecting audit messages is enabled. Jan 23 18:41:39.608427 kernel: audit: type=1130 audit(1769193699.586:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:41:39.608436 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 18:41:39.608446 systemd-journald[320]: Journal started Jan 23 18:41:39.608464 systemd-journald[320]: Runtime Journal (/run/log/journal/1a33a009fe36441b9af836476b3262d7) is 6M, max 48M, 42M free. Jan 23 18:41:39.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:39.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:39.658141 kernel: audit: type=1130 audit(1769193699.625:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:39.658179 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 18:41:39.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:39.804345 kernel: audit: type=1130 audit(1769193699.774:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:39.867756 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:41:39.905853 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 18:41:39.905877 kernel: audit: type=1130 audit(1769193699.905:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:39.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:39.915585 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 18:41:39.968812 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 18:41:40.003617 kernel: Bridge firewalling registered Jan 23 18:41:39.984812 systemd-modules-load[322]: Inserted module 'br_netfilter' Jan 23 18:41:40.022612 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 18:41:40.041790 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:41:40.098205 kernel: audit: type=1130 audit(1769193700.054:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:40.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:40.088902 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:41:40.159834 systemd-tmpfiles[335]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Jan 23 18:41:40.176631 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 18:41:40.229902 kernel: audit: type=1130 audit(1769193700.177:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:40.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:40.177682 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 18:41:40.283858 kernel: audit: type=1130 audit(1769193700.243:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:40.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:40.284695 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:41:40.339561 kernel: audit: type=1130 audit(1769193700.285:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:40.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:40.325428 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 18:41:40.376536 kernel: audit: type=1334 audit(1769193700.341:10): prog-id=6 op=LOAD Jan 23 18:41:40.341000 audit: BPF prog-id=6 op=LOAD Jan 23 18:41:40.347342 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 18:41:40.379207 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 18:41:40.440557 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:41:40.498587 kernel: audit: type=1130 audit(1769193700.456:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:40.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:40.500462 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:41:40.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:41:40.532748 dracut-cmdline[352]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ee2a61adbfdca0d8850a6d1564f6a5daa8e67e4645be01ed76a79270fe7c1051 Jan 23 18:41:40.676826 systemd-resolved[353]: Positive Trust Anchors: Jan 23 18:41:40.677173 systemd-resolved[353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 18:41:40.677178 systemd-resolved[353]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 23 18:41:40.677205 systemd-resolved[353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 18:41:40.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:40.709555 systemd-resolved[353]: Defaulting to hostname 'linux'. Jan 23 18:41:40.711373 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 18:41:40.725455 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:41:41.060409 kernel: Loading iSCSI transport class v2.0-870. Jan 23 18:41:41.104153 kernel: iscsi: registered transport (tcp) Jan 23 18:41:41.194443 kernel: iscsi: registered transport (qla4xxx) Jan 23 18:41:41.194628 kernel: QLogic iSCSI HBA Driver Jan 23 18:41:41.291614 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 18:41:41.373863 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:41:41.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:41.407367 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:41:41.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:41.554901 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 18:41:41.571638 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 18:41:41.612852 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 18:41:41.742672 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 18:41:41.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:41:41.769000 audit: BPF prog-id=7 op=LOAD Jan 23 18:41:41.769000 audit: BPF prog-id=8 op=LOAD Jan 23 18:41:41.771206 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:41:41.875381 systemd-udevd[597]: Using default interface naming scheme 'v257'. Jan 23 18:41:41.910412 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:41:41.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:41.945770 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 18:41:42.043426 dracut-pre-trigger[643]: rd.md=0: removing MD RAID activation Jan 23 18:41:42.151631 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:41:42.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:42.152000 audit: BPF prog-id=9 op=LOAD Jan 23 18:41:42.155878 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:41:42.174690 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:41:42.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:42.197214 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:41:42.398427 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:41:42.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:42.441243 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 18:41:42.777764 systemd-networkd[727]: lo: Link UP Jan 23 18:41:42.777778 systemd-networkd[727]: lo: Gained carrier Jan 23 18:41:42.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:42.781405 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:41:42.787514 systemd[1]: Reached target network.target - Network. Jan 23 18:41:42.941353 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 23 18:41:43.001659 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 23 18:41:43.111110 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 18:41:43.112484 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 18:41:43.182595 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 23 18:41:43.256222 systemd-networkd[727]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:41:43.256235 systemd-networkd[727]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
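Editor's note: every entry in this transcript follows the same shape, a timestamp with microsecond resolution, a source that is either the kernel, a component with its PID in brackets (systemd[1], systemd-networkd[727], dracut-cmdline[352]), or an audit record, and then a free-form message. Below is a minimal parsing sketch for entries of that shape, assuming one entry per input line; the field names and the sample string are my own, not part of the log.

import re
from typing import Optional

# Timestamp, source (optionally with a [pid]), and message, e.g.
#   "Jan 23 18:41:43.259848 systemd-networkd[727]: eth0: Link UP"
ENTRY = re.compile(
    r"^(?P<ts>\w{3} +\d{1,2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<source>[^\s:\[]+)(?:\[(?P<pid>\d+)\])?: "
    r"(?P<message>.*)$"
)

def parse_entry(line: str) -> Optional[dict]:
    """Split one console-log entry into timestamp, source, pid, and message."""
    m = ENTRY.match(line.strip())
    return m.groupdict() if m else None

if __name__ == "__main__":
    sample = "Jan 23 18:41:43.260806 systemd-networkd[727]: eth0: Gained carrier"
    print(parse_entry(sample))

Splitting entries this way makes it easier to follow a single unit or PID across a long boot like the one recorded here.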
Jan 23 18:41:43.259848 systemd-networkd[727]: eth0: Link UP Jan 23 18:41:43.260806 systemd-networkd[727]: eth0: Gained carrier Jan 23 18:41:43.260817 systemd-networkd[727]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:41:43.293219 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 18:41:43.401756 kernel: AES CTR mode by8 optimization enabled Jan 23 18:41:43.401788 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 23 18:41:43.402701 systemd-networkd[727]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 23 18:41:43.421830 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:41:43.432206 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:41:43.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:43.466200 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:41:43.481382 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:41:43.542638 disk-uuid[827]: Primary Header is updated. Jan 23 18:41:43.542638 disk-uuid[827]: Secondary Entries is updated. Jan 23 18:41:43.542638 disk-uuid[827]: Secondary Header is updated. Jan 23 18:41:43.618222 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:41:43.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:43.709798 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 18:41:43.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:43.732794 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:41:43.769748 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:41:43.770389 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:41:43.825453 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 18:41:43.912502 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:41:43.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:44.689869 disk-uuid[834]: Warning: The kernel is still using the old partition table. Jan 23 18:41:44.689869 disk-uuid[834]: The new table will be used at the next reboot or after you Jan 23 18:41:44.689869 disk-uuid[834]: run partprobe(8) or kpartx(8) Jan 23 18:41:44.689869 disk-uuid[834]: The operation has completed successfully. Jan 23 18:41:44.763244 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 18:41:44.763559 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
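The disk-uuid[827] messages above report that the primary GPT header and the secondary (backup) entries and header were rewritten, and that the kernel keeps using its cached partition table until the next reboot or a partprobe(8)/kpartx(8) run. For orientation, on a disk with 512-byte logical sectors the primary GPT header occupies LBA 1 and begins with the ASCII signature "EFI PART", while the backup header sits in the last LBA. Below is a minimal sketch that checks for that signature; the 512-byte sector size and the image path are assumptions.

SECTOR_SIZE = 512               # assumption: 512-byte logical sectors
GPT_SIGNATURE = b"EFI PART"     # first 8 bytes of a GPT header

def has_gpt_header(path: str) -> bool:
    """Return True if LBA 1 of the given disk or image starts with the GPT signature."""
    with open(path, "rb") as disk:
        disk.seek(1 * SECTOR_SIZE)      # primary GPT header lives in LBA 1
        return disk.read(8) == GPT_SIGNATURE

if __name__ == "__main__":
    # Hypothetical image path; on a live system this would be e.g. /dev/vda (root required).
    print(has_gpt_header("disk.img"))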
Jan 23 18:41:44.861487 kernel: kauditd_printk_skb: 17 callbacks suppressed Jan 23 18:41:44.861518 kernel: audit: type=1130 audit(1769193704.793:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:44.861541 kernel: audit: type=1131 audit(1769193704.793:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:44.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:44.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:44.865225 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 18:41:44.983639 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (867) Jan 23 18:41:45.006189 kernel: BTRFS info (device vda6): first mount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:41:45.006239 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:41:45.045500 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:41:45.045553 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:41:45.087769 kernel: BTRFS info (device vda6): last unmount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:41:45.102894 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 18:41:45.156654 kernel: audit: type=1130 audit(1769193705.117:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:45.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:45.124764 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 23 18:41:45.156560 systemd-networkd[727]: eth0: Gained IPv6LL Jan 23 18:41:45.854731 kernel: hrtimer: interrupt took 9079280 ns Jan 23 18:41:47.857804 ignition[886]: Ignition 2.24.0 Jan 23 18:41:47.857911 ignition[886]: Stage: fetch-offline Jan 23 18:41:47.859406 ignition[886]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:41:47.859423 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:41:47.861416 ignition[886]: parsed url from cmdline: "" Jan 23 18:41:47.861424 ignition[886]: no config URL provided Jan 23 18:41:47.864670 ignition[886]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 18:41:47.864688 ignition[886]: no config at "/usr/lib/ignition/user.ign" Jan 23 18:41:47.865248 ignition[886]: op(1): [started] loading QEMU firmware config module Jan 23 18:41:47.865254 ignition[886]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 23 18:41:47.991226 ignition[886]: op(1): [finished] loading QEMU firmware config module Jan 23 18:41:48.396531 ignition[886]: parsing config with SHA512: acfb1e698602bffb894c52609cc41aa7b1ba8a7a05068308b3bf114e5d6ee3e1bb5e03bd13e1e437b19b69901b0c024cf9aeeb25f31966830e4435dd9db741a9 Jan 23 18:41:48.508463 unknown[886]: fetched base config from "system" Jan 23 18:41:48.508555 unknown[886]: fetched user config from "qemu" Jan 23 18:41:48.575536 kernel: audit: type=1130 audit(1769193708.537:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:48.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:48.515473 ignition[886]: fetch-offline: fetch-offline passed Jan 23 18:41:48.520884 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:41:48.515776 ignition[886]: Ignition finished successfully Jan 23 18:41:48.539417 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 23 18:41:48.542860 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 18:41:49.542735 ignition[897]: Ignition 2.24.0 Jan 23 18:41:49.544267 ignition[897]: Stage: kargs Jan 23 18:41:49.546608 ignition[897]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:41:49.546709 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:41:49.585495 ignition[897]: kargs: kargs passed Jan 23 18:41:49.585673 ignition[897]: Ignition finished successfully Jan 23 18:41:49.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:49.609855 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 18:41:49.623751 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 18:41:49.680426 kernel: audit: type=1130 audit(1769193709.621:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:41:49.979434 ignition[905]: Ignition 2.24.0 Jan 23 18:41:49.979535 ignition[905]: Stage: disks Jan 23 18:41:49.979672 ignition[905]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:41:49.979682 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:41:49.997600 ignition[905]: disks: disks passed Jan 23 18:41:49.997686 ignition[905]: Ignition finished successfully Jan 23 18:41:50.040167 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 18:41:50.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:50.077166 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 18:41:50.114489 kernel: audit: type=1130 audit(1769193710.072:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:50.142511 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 18:41:50.143191 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:41:50.201728 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:41:50.202449 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:41:50.246828 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 18:41:50.401874 systemd-fsck[915]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 23 18:41:50.419398 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 18:41:50.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:50.456885 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 18:41:50.479624 kernel: audit: type=1130 audit(1769193710.449:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:51.039651 kernel: EXT4-fs (vda9): mounted filesystem eebf2bdd-2461-4b18-9f37-721daf86511d r/w with ordered data mode. Quota mode: none. Jan 23 18:41:51.058658 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 18:41:51.069597 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 18:41:51.103469 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:41:51.120206 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 18:41:51.157796 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 18:41:51.158232 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 18:41:51.158636 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:41:51.256192 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
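In the fetch-offline stage earlier in this transcript, Ignition found no base configs under /usr/lib/ignition/base.d or /usr/lib/ignition/base.platform.d/qemu and no /usr/lib/ignition/user.ign, loaded the qemu_fw_cfg module to fetch the user config from the hypervisor, and logged a SHA512 digest while parsing it. Below is a minimal sketch of computing the same kind of digest over a locally saved copy of the config, for comparison against the value in the journal; the file name is hypothetical, and any canonicalization Ignition applies before hashing is not visible in this log.

import hashlib
import pathlib

def config_sha512(path: str) -> str:
    """Hex SHA512 of the raw bytes of a saved Ignition config."""
    return hashlib.sha512(pathlib.Path(path).read_bytes()).hexdigest()

if __name__ == "__main__":
    # Hypothetical local copy of the config served to the VM.
    digest = config_sha512("user-config.ign")
    print(digest)
    # Compare with the value the journal printed after "parsing config with SHA512: ...".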
Jan 23 18:41:51.303632 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (924) Jan 23 18:41:51.303667 kernel: BTRFS info (device vda6): first mount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:41:51.303679 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:41:51.329854 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:41:51.330228 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:41:51.334914 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 18:41:51.366499 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 18:41:52.005726 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 18:41:52.053261 kernel: audit: type=1130 audit(1769193712.006:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:52.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:52.008821 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 18:41:52.077751 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 18:41:52.145285 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 18:41:52.168637 kernel: BTRFS info (device vda6): last unmount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:41:52.244224 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 18:41:52.300736 kernel: audit: type=1130 audit(1769193712.261:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:52.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:52.509464 ignition[1024]: INFO : Ignition 2.24.0 Jan 23 18:41:52.509464 ignition[1024]: INFO : Stage: mount Jan 23 18:41:52.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:52.545879 ignition[1024]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:41:52.545879 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:41:52.545879 ignition[1024]: INFO : mount: mount passed Jan 23 18:41:52.545879 ignition[1024]: INFO : Ignition finished successfully Jan 23 18:41:52.650775 kernel: audit: type=1130 audit(1769193712.545:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:41:52.516521 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 18:41:52.548450 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 18:41:52.697238 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
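The BTRFS messages just above ("turning on async discard", "enabling free space tree") describe how the OEM filesystem on /dev/vda6 was mounted inside the initramfs. In mount-option terms these features usually correspond to discard=async and space_cache=v2, although recent kernels may enable both by default. Below is a minimal sketch of an equivalent mount from user space; the mountpoint is an assumption and the command requires root.

import subprocess

def mount_btrfs(device: str, mountpoint: str) -> None:
    """Mount a btrfs filesystem with async discard and the free space tree enabled."""
    subprocess.run(
        ["mount", "-t", "btrfs",
         "-o", "discard=async,space_cache=v2",
         device, mountpoint],
        check=True,
    )

if __name__ == "__main__":
    # Hypothetical mountpoint; in the log above the device is /dev/vda6
    # and the initramfs mounts it at /sysroot/oem.
    mount_btrfs("/dev/vda6", "/mnt/oem")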
Jan 23 18:41:53.112478 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1035) Jan 23 18:41:53.134286 kernel: BTRFS info (device vda6): first mount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:41:53.134445 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:41:53.171562 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:41:53.171612 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:41:53.177634 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 18:41:53.592280 ignition[1051]: INFO : Ignition 2.24.0 Jan 23 18:41:53.592280 ignition[1051]: INFO : Stage: files Jan 23 18:41:53.611468 ignition[1051]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:41:53.611468 ignition[1051]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:41:53.611468 ignition[1051]: DEBUG : files: compiled without relabeling support, skipping Jan 23 18:41:53.611468 ignition[1051]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 18:41:53.611468 ignition[1051]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 18:41:53.708461 ignition[1051]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 18:41:53.708461 ignition[1051]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 18:41:53.708461 ignition[1051]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 18:41:53.708461 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 18:41:53.708461 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 23 18:41:53.631153 unknown[1051]: wrote ssh authorized keys file for user: core Jan 23 18:41:54.035132 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 18:41:54.392743 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 18:41:54.424639 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 18:41:54.424639 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 18:41:54.424639 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:41:54.424639 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:41:54.424639 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:41:54.424639 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:41:54.424639 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:41:54.424639 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:41:54.424639 ignition[1051]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:41:54.424639 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:41:54.424639 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:41:54.717793 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:41:54.717793 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:41:54.717793 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 23 18:41:54.992757 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 18:42:04.648628 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:42:04.648628 ignition[1051]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 18:42:04.697456 ignition[1051]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:42:04.733269 ignition[1051]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:42:04.733269 ignition[1051]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 18:42:04.733269 ignition[1051]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 23 18:42:04.733269 ignition[1051]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 18:42:04.808799 ignition[1051]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 18:42:04.808799 ignition[1051]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 23 18:42:04.808799 ignition[1051]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 23 18:42:04.887589 ignition[1051]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 18:42:04.929271 ignition[1051]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 18:42:04.949914 ignition[1051]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 23 18:42:04.949914 ignition[1051]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 23 18:42:04.949914 ignition[1051]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 18:42:05.038663 ignition[1051]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:42:05.038663 ignition[1051]: INFO : files: createResultFile: createFiles: op(12): [finished] 
writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:42:05.038663 ignition[1051]: INFO : files: files passed Jan 23 18:42:05.038663 ignition[1051]: INFO : Ignition finished successfully Jan 23 18:42:05.152633 kernel: audit: type=1130 audit(1769193725.038:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:05.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:05.000264 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 18:42:05.046289 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 18:42:05.169142 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 18:42:05.226526 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 18:42:05.226867 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 18:42:05.242323 initrd-setup-root-after-ignition[1083]: grep: /sysroot/oem/oem-release: No such file or directory Jan 23 18:42:05.254894 initrd-setup-root-after-ignition[1089]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:42:05.266234 initrd-setup-root-after-ignition[1085]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:42:05.266234 initrd-setup-root-after-ignition[1085]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:42:05.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:05.332522 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:42:05.460170 kernel: audit: type=1130 audit(1769193725.331:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:05.460205 kernel: audit: type=1131 audit(1769193725.331:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:05.460217 kernel: audit: type=1130 audit(1769193725.394:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:05.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:05.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:05.460874 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 18:42:05.487235 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 18:42:05.695596 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Jan 23 18:42:05.695906 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 18:42:05.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:05.745168 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 18:42:05.830752 kernel: audit: type=1130 audit(1769193725.744:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:05.830790 kernel: audit: type=1131 audit(1769193725.744:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:05.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:05.806861 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 18:42:05.855615 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 18:42:05.864155 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 18:42:05.990291 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:42:06.051912 kernel: audit: type=1130 audit(1769193725.990:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:05.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:05.995707 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 18:42:06.117656 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 23 18:42:06.118229 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:42:06.151462 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:42:06.167595 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 18:42:06.220524 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 18:42:06.220913 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:42:06.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:06.264909 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 18:42:06.326860 kernel: audit: type=1131 audit(1769193726.264:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:06.306688 systemd[1]: Stopped target basic.target - Basic System. Jan 23 18:42:06.307107 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 18:42:06.327584 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 23 18:42:06.386700 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 18:42:06.401647 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:42:06.437322 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 18:42:06.454831 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:42:06.501156 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 18:42:06.516735 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 18:42:06.523676 systemd[1]: Stopped target swap.target - Swaps. Jan 23 18:42:06.612137 kernel: audit: type=1131 audit(1769193726.562:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:06.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:06.550800 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 18:42:06.551591 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:42:06.638651 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:42:06.649516 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:42:06.678632 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 18:42:06.702264 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:42:06.717335 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 18:42:06.783905 kernel: audit: type=1131 audit(1769193726.733:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:06.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:06.717714 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 18:42:06.784532 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 18:42:06.784869 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:42:06.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:06.821768 systemd[1]: Stopped target paths.target - Path Units. Jan 23 18:42:06.844632 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 18:42:06.866875 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:42:06.898760 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 18:42:06.899551 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 18:42:06.917889 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 18:42:06.918539 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:42:06.960867 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 18:42:06.961625 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 23 18:42:06.992779 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 23 18:42:06.992894 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 23 18:42:07.004304 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 18:42:07.004611 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:42:07.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:07.058892 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 18:42:07.059574 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 18:42:07.099597 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 18:42:07.110887 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 18:42:07.111326 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:42:07.127214 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 18:42:07.151326 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 18:42:07.151687 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:42:07.169482 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 18:42:07.169694 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:42:07.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:07.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:07.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:07.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:07.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:07.187312 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 18:42:07.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:07.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:07.187649 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:42:07.228752 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 18:42:07.256889 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 18:42:07.361581 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 23 18:42:07.382500 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 18:42:07.382806 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 18:42:07.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:08.045717 ignition[1110]: INFO : Ignition 2.24.0 Jan 23 18:42:08.045717 ignition[1110]: INFO : Stage: umount Jan 23 18:42:08.063514 ignition[1110]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:42:08.063514 ignition[1110]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:42:08.063514 ignition[1110]: INFO : umount: umount passed Jan 23 18:42:08.063514 ignition[1110]: INFO : Ignition finished successfully Jan 23 18:42:08.098329 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 18:42:08.099291 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 18:42:08.108770 systemd[1]: Stopped target network.target - Network. Jan 23 18:42:08.133111 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 18:42:08.133182 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 18:42:08.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:08.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:08.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:08.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:08.158837 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 18:42:08.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:08.159130 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 18:42:08.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:08.174182 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 18:42:08.174261 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 18:42:08.199241 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 18:42:08.199325 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 18:42:08.220742 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 18:42:08.220815 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 18:42:08.231761 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 18:42:08.255525 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 18:42:08.360649 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jan 23 18:42:08.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:08.360890 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 18:42:08.410690 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 18:42:08.422716 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 18:42:08.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:08.449857 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 18:42:08.450000 audit: BPF prog-id=9 op=UNLOAD Jan 23 18:42:08.474855 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 18:42:08.475261 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:42:08.477740 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 18:42:08.500796 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 18:42:08.500863 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:42:08.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:08.559836 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 18:42:08.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:08.559910 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:42:08.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:08.576241 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 18:42:08.576299 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 18:42:08.601782 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:42:08.661000 audit: BPF prog-id=6 op=UNLOAD Jan 23 18:42:08.673257 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 18:42:08.673904 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:42:08.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:08.720803 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 18:42:08.720899 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 18:42:08.721475 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 18:42:08.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:08.721534 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 23 18:42:08.744671 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 18:42:08.744747 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 18:42:08.823329 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 18:42:08.823661 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 18:42:08.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:08.857346 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 18:42:08.857560 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 18:42:08.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:08.898695 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 18:42:08.906890 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 18:42:08.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:08.907174 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:42:08.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:08.934797 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 18:42:08.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:08.934865 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:42:08.959548 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:42:08.959626 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:42:09.062222 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 18:42:09.080830 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 18:42:09.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:09.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:09.113577 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 18:42:09.114150 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 18:42:09.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:09.159693 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Jan 23 18:42:09.189311 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 18:42:09.242606 systemd[1]: Switching root. Jan 23 18:42:09.317215 systemd-journald[320]: Received SIGTERM from PID 1 (systemd). Jan 23 18:42:09.317274 systemd-journald[320]: Journal stopped Jan 23 18:42:15.238777 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 18:42:15.239310 kernel: SELinux: policy capability open_perms=1 Jan 23 18:42:15.239332 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 18:42:15.239345 kernel: SELinux: policy capability always_check_network=0 Jan 23 18:42:15.239356 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 18:42:15.239367 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 18:42:15.239479 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 18:42:15.239499 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 18:42:15.239510 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 18:42:15.239523 systemd[1]: Successfully loaded SELinux policy in 167.312ms. Jan 23 18:42:15.239546 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.118ms. Jan 23 18:42:15.239559 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 18:42:15.239572 systemd[1]: Detected virtualization kvm. Jan 23 18:42:15.239583 systemd[1]: Detected architecture x86-64. Jan 23 18:42:15.239597 systemd[1]: Detected first boot. Jan 23 18:42:15.239609 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 23 18:42:15.239621 zram_generator::config[1156]: No configuration found. Jan 23 18:42:15.239638 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1007086068 wd_nsec: 1007085961 Jan 23 18:42:15.239650 kernel: Guest personality initialized and is inactive Jan 23 18:42:15.239661 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 18:42:15.239675 kernel: Initialized host personality Jan 23 18:42:15.239686 kernel: NET: Registered PF_VSOCK protocol family Jan 23 18:42:15.239698 systemd[1]: Populated /etc with preset unit settings. Jan 23 18:42:15.239714 kernel: kauditd_printk_skb: 39 callbacks suppressed Jan 23 18:42:15.239729 kernel: audit: type=1334 audit(1769193732.156:88): prog-id=12 op=LOAD Jan 23 18:42:15.239740 kernel: audit: type=1334 audit(1769193732.156:89): prog-id=3 op=UNLOAD Jan 23 18:42:15.239751 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 18:42:15.239773 kernel: audit: type=1334 audit(1769193732.156:90): prog-id=13 op=LOAD Jan 23 18:42:15.239794 kernel: audit: type=1334 audit(1769193732.156:91): prog-id=14 op=LOAD Jan 23 18:42:15.239813 kernel: audit: type=1334 audit(1769193732.156:92): prog-id=4 op=UNLOAD Jan 23 18:42:15.239825 kernel: audit: type=1334 audit(1769193732.156:93): prog-id=5 op=UNLOAD Jan 23 18:42:15.239842 kernel: audit: type=1131 audit(1769193732.160:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:42:15.239852 kernel: audit: type=1334 audit(1769193732.210:95): prog-id=12 op=UNLOAD Jan 23 18:42:15.239864 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 18:42:15.239880 kernel: audit: type=1130 audit(1769193732.308:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.239891 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 18:42:15.239903 kernel: audit: type=1131 audit(1769193732.308:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.240072 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 18:42:15.240101 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 18:42:15.240116 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 18:42:15.240128 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 18:42:15.240140 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 18:42:15.240152 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 18:42:15.240166 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 18:42:15.240178 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 18:42:15.240191 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:42:15.240203 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:42:15.240215 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 18:42:15.240226 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 18:42:15.240238 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 18:42:15.240252 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:42:15.240264 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 18:42:15.240275 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:42:15.240287 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:42:15.240299 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 18:42:15.240310 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 18:42:15.240322 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 18:42:15.240336 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 18:42:15.240348 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:42:15.240359 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:42:15.240373 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 23 18:42:15.240515 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:42:15.240537 systemd[1]: Reached target swap.target - Swaps. 
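The post-switch-root block above includes "Detected first boot" and "Initializing machine ID from SMBIOS/DMI UUID": on a first boot in a KVM guest, systemd can seed /etc/machine-id from the firmware-provided product UUID. Below is a minimal sketch of reading that UUID from sysfs and normalizing it to the 32-character lowercase-hex form used in /etc/machine-id; the exact conditions and transformation systemd applies are not shown in this log.

from pathlib import Path

# The SMBIOS product UUID exposed by the firmware via sysfs (reading it typically requires root).
DMI_UUID = Path("/sys/class/dmi/id/product_uuid")

def machine_id_candidate() -> str:
    """Read the SMBIOS product UUID and normalize it like an /etc/machine-id entry."""
    uuid = DMI_UUID.read_text().strip()      # e.g. "01234567-89ab-cdef-0123-456789abcdef"
    return uuid.replace("-", "").lower()     # machine-id format: 32 lowercase hex characters

if __name__ == "__main__":
    print(machine_id_candidate())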
Jan 23 18:42:15.240548 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 18:42:15.240563 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 18:42:15.240575 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 18:42:15.240587 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 23 18:42:15.240599 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 23 18:42:15.240610 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:42:15.240622 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 23 18:42:15.240633 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 23 18:42:15.240647 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:42:15.240659 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:42:15.240670 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 18:42:15.240682 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 18:42:15.240693 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 18:42:15.240705 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 18:42:15.240716 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:42:15.240730 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 18:42:15.240741 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 18:42:15.240754 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 18:42:15.240766 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 18:42:15.240778 systemd[1]: Reached target machines.target - Containers. Jan 23 18:42:15.240789 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 18:42:15.240801 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:42:15.240815 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:42:15.240827 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 18:42:15.240838 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:42:15.240850 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:42:15.240861 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:42:15.240873 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 18:42:15.240885 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:42:15.240899 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 18:42:15.240910 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 18:42:15.241310 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 18:42:15.241328 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Jan 23 18:42:15.241340 kernel: ACPI: bus type drm_connector registered Jan 23 18:42:15.241351 kernel: fuse: init (API version 7.41) Jan 23 18:42:15.241365 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 18:42:15.241377 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:42:15.241599 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 18:42:15.241613 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 18:42:15.241628 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 18:42:15.241641 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 18:42:15.241652 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 18:42:15.241664 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:42:15.241677 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:42:15.241688 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 18:42:15.241700 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 18:42:15.241715 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 18:42:15.241726 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 18:42:15.241738 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 18:42:15.241750 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 18:42:15.241762 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 18:42:15.241774 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:42:15.241786 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 18:42:15.241800 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 18:42:15.241811 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:42:15.241825 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:42:15.241839 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:42:15.241851 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:42:15.241863 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:42:15.241874 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:42:15.241886 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 18:42:15.241900 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 18:42:15.241912 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:42:15.242114 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:42:15.242127 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:42:15.242140 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 18:42:15.242176 systemd-journald[1242]: Collecting audit messages is enabled. Jan 23 18:42:15.242204 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 23 18:42:15.242217 systemd-journald[1242]: Journal started Jan 23 18:42:15.242236 systemd-journald[1242]: Runtime Journal (/run/log/journal/1a33a009fe36441b9af836476b3262d7) is 6M, max 48M, 42M free. Jan 23 18:42:13.292000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 23 18:42:14.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:14.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:14.415000 audit: BPF prog-id=14 op=UNLOAD Jan 23 18:42:14.415000 audit: BPF prog-id=13 op=UNLOAD Jan 23 18:42:14.418000 audit: BPF prog-id=15 op=LOAD Jan 23 18:42:14.419000 audit: BPF prog-id=16 op=LOAD Jan 23 18:42:14.419000 audit: BPF prog-id=17 op=LOAD Jan 23 18:42:14.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:14.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:14.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:14.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:42:15.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.234000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 23 18:42:15.234000 audit[1242]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdf132c660 a2=4000 a3=0 items=0 ppid=1 pid=1242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:15.234000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 23 18:42:12.136897 systemd[1]: Queued start job for default target multi-user.target. Jan 23 18:42:12.158364 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 23 18:42:12.160290 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 18:42:12.161702 systemd[1]: systemd-journald.service: Consumed 3.556s CPU time. Jan 23 18:42:15.280181 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:42:15.308670 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:42:15.317153 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 18:42:15.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.331245 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:42:15.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.347098 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 18:42:15.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:42:15.358544 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 18:42:15.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.369829 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 18:42:15.379566 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 18:42:15.395240 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:42:15.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.405896 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:42:15.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.429502 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:42:15.440154 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 23 18:42:15.451230 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 18:42:15.451355 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:42:15.462641 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 18:42:15.473333 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:42:15.473646 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 18:42:15.476714 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 18:42:15.486348 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 18:42:15.496489 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:42:15.516773 systemd-journald[1242]: Time spent on flushing to /var/log/journal/1a33a009fe36441b9af836476b3262d7 is 30.976ms for 1205 entries. Jan 23 18:42:15.516773 systemd-journald[1242]: System Journal (/var/log/journal/1a33a009fe36441b9af836476b3262d7) is 8M, max 163.5M, 155.5M free. Jan 23 18:42:15.586706 systemd-journald[1242]: Received client request to flush runtime journal. Jan 23 18:42:15.586794 kernel: loop1: detected capacity change from 0 to 50784 Jan 23 18:42:15.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.505634 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 18:42:15.526786 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jan 23 18:42:15.553561 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 18:42:15.565599 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 18:42:15.578100 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 18:42:15.602217 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 18:42:15.614070 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 18:42:15.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.669500 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 18:42:15.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.684529 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 18:42:15.691147 kernel: loop2: detected capacity change from 0 to 111560 Jan 23 18:42:15.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.708000 audit: BPF prog-id=18 op=LOAD Jan 23 18:42:15.708000 audit: BPF prog-id=19 op=LOAD Jan 23 18:42:15.708000 audit: BPF prog-id=20 op=LOAD Jan 23 18:42:15.710887 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 23 18:42:15.721000 audit: BPF prog-id=21 op=LOAD Jan 23 18:42:15.726231 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 18:42:15.738702 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 18:42:15.747642 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 18:42:15.767182 kernel: loop3: detected capacity change from 0 to 229808 Jan 23 18:42:15.767000 audit: BPF prog-id=22 op=LOAD Jan 23 18:42:15.767000 audit: BPF prog-id=23 op=LOAD Jan 23 18:42:15.768000 audit: BPF prog-id=24 op=LOAD Jan 23 18:42:15.772669 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 23 18:42:15.780000 audit: BPF prog-id=25 op=LOAD Jan 23 18:42:15.781000 audit: BPF prog-id=26 op=LOAD Jan 23 18:42:15.781000 audit: BPF prog-id=27 op=LOAD Jan 23 18:42:15.784781 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 18:42:15.822894 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Jan 23 18:42:15.823115 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Jan 23 18:42:15.834562 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:42:15.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.854165 kernel: loop4: detected capacity change from 0 to 50784 Jan 23 18:42:15.881367 systemd-nsresourced[1298]: Not setting up BPF subsystem, as functionality has been disabled at compile time. 
Jan 23 18:42:15.899126 kernel: loop5: detected capacity change from 0 to 111560 Jan 23 18:42:15.884142 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 23 18:42:15.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.900816 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 18:42:15.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.927188 kernel: loop6: detected capacity change from 0 to 229808 Jan 23 18:42:15.950820 (sd-merge)[1302]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 23 18:42:15.959288 (sd-merge)[1302]: Merged extensions into '/usr'. Jan 23 18:42:15.967467 systemd[1]: Reload requested from client PID 1282 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 18:42:15.967494 systemd[1]: Reloading... Jan 23 18:42:16.037489 systemd-oomd[1293]: No swap; memory pressure usage will be degraded Jan 23 18:42:16.046106 systemd-resolved[1295]: Positive Trust Anchors: Jan 23 18:42:16.048330 systemd-resolved[1295]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 18:42:16.048475 systemd-resolved[1295]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 23 18:42:16.048554 systemd-resolved[1295]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 18:42:16.061789 systemd-resolved[1295]: Defaulting to hostname 'linux'. Jan 23 18:42:16.080347 zram_generator::config[1347]: No configuration found. Jan 23 18:42:16.351758 systemd[1]: Reloading finished in 383 ms. Jan 23 18:42:16.385726 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 23 18:42:16.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.397169 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 18:42:16.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.407650 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 18:42:16.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:42:16.417554 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 18:42:16.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.437773 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:42:16.467846 systemd[1]: Starting ensure-sysext.service... Jan 23 18:42:16.475703 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 18:42:16.484000 audit: BPF prog-id=8 op=UNLOAD Jan 23 18:42:16.484000 audit: BPF prog-id=7 op=UNLOAD Jan 23 18:42:16.484000 audit: BPF prog-id=28 op=LOAD Jan 23 18:42:16.484000 audit: BPF prog-id=29 op=LOAD Jan 23 18:42:16.486718 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:42:16.494000 audit: BPF prog-id=30 op=LOAD Jan 23 18:42:16.494000 audit: BPF prog-id=22 op=UNLOAD Jan 23 18:42:16.494000 audit: BPF prog-id=31 op=LOAD Jan 23 18:42:16.494000 audit: BPF prog-id=32 op=LOAD Jan 23 18:42:16.494000 audit: BPF prog-id=23 op=UNLOAD Jan 23 18:42:16.494000 audit: BPF prog-id=24 op=UNLOAD Jan 23 18:42:16.495000 audit: BPF prog-id=33 op=LOAD Jan 23 18:42:16.495000 audit: BPF prog-id=15 op=UNLOAD Jan 23 18:42:16.496000 audit: BPF prog-id=34 op=LOAD Jan 23 18:42:16.496000 audit: BPF prog-id=35 op=LOAD Jan 23 18:42:16.496000 audit: BPF prog-id=16 op=UNLOAD Jan 23 18:42:16.496000 audit: BPF prog-id=17 op=UNLOAD Jan 23 18:42:16.497000 audit: BPF prog-id=36 op=LOAD Jan 23 18:42:16.515000 audit: BPF prog-id=18 op=UNLOAD Jan 23 18:42:16.515000 audit: BPF prog-id=37 op=LOAD Jan 23 18:42:16.515000 audit: BPF prog-id=38 op=LOAD Jan 23 18:42:16.516000 audit: BPF prog-id=19 op=UNLOAD Jan 23 18:42:16.516000 audit: BPF prog-id=20 op=UNLOAD Jan 23 18:42:16.516000 audit: BPF prog-id=39 op=LOAD Jan 23 18:42:16.516000 audit: BPF prog-id=21 op=UNLOAD Jan 23 18:42:16.519000 audit: BPF prog-id=40 op=LOAD Jan 23 18:42:16.519000 audit: BPF prog-id=25 op=UNLOAD Jan 23 18:42:16.519000 audit: BPF prog-id=41 op=LOAD Jan 23 18:42:16.519000 audit: BPF prog-id=42 op=LOAD Jan 23 18:42:16.519000 audit: BPF prog-id=26 op=UNLOAD Jan 23 18:42:16.519000 audit: BPF prog-id=27 op=UNLOAD Jan 23 18:42:16.527597 systemd[1]: Reload requested from client PID 1384 ('systemctl') (unit ensure-sysext.service)... Jan 23 18:42:16.527672 systemd[1]: Reloading... Jan 23 18:42:16.540805 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 18:42:16.541053 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 18:42:16.541563 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 18:42:16.543914 systemd-tmpfiles[1385]: ACLs are not supported, ignoring. Jan 23 18:42:16.544207 systemd-tmpfiles[1385]: ACLs are not supported, ignoring. Jan 23 18:42:16.556619 systemd-tmpfiles[1385]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 18:42:16.556704 systemd-tmpfiles[1385]: Skipping /boot Jan 23 18:42:16.561689 systemd-udevd[1386]: Using default interface naming scheme 'v257'. Jan 23 18:42:16.582696 systemd-tmpfiles[1385]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 23 18:42:16.582778 systemd-tmpfiles[1385]: Skipping /boot Jan 23 18:42:16.627167 zram_generator::config[1421]: No configuration found. Jan 23 18:42:16.817615 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 18:42:16.817682 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 23 18:42:16.832122 kernel: ACPI: button: Power Button [PWRF] Jan 23 18:42:16.854902 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 23 18:42:16.855489 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 18:42:16.864769 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 18:42:16.931372 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 18:42:16.940554 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 18:42:16.941206 systemd[1]: Reloading finished in 412 ms. Jan 23 18:42:16.957238 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:42:16.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.967000 audit: BPF prog-id=43 op=LOAD Jan 23 18:42:16.969000 audit: BPF prog-id=30 op=UNLOAD Jan 23 18:42:16.969000 audit: BPF prog-id=44 op=LOAD Jan 23 18:42:16.969000 audit: BPF prog-id=45 op=LOAD Jan 23 18:42:16.969000 audit: BPF prog-id=31 op=UNLOAD Jan 23 18:42:16.969000 audit: BPF prog-id=32 op=UNLOAD Jan 23 18:42:16.970000 audit: BPF prog-id=46 op=LOAD Jan 23 18:42:16.970000 audit: BPF prog-id=47 op=LOAD Jan 23 18:42:16.970000 audit: BPF prog-id=28 op=UNLOAD Jan 23 18:42:16.970000 audit: BPF prog-id=29 op=UNLOAD Jan 23 18:42:16.971000 audit: BPF prog-id=48 op=LOAD Jan 23 18:42:16.971000 audit: BPF prog-id=40 op=UNLOAD Jan 23 18:42:16.971000 audit: BPF prog-id=49 op=LOAD Jan 23 18:42:16.971000 audit: BPF prog-id=50 op=LOAD Jan 23 18:42:16.971000 audit: BPF prog-id=41 op=UNLOAD Jan 23 18:42:16.971000 audit: BPF prog-id=42 op=UNLOAD Jan 23 18:42:16.974000 audit: BPF prog-id=51 op=LOAD Jan 23 18:42:16.974000 audit: BPF prog-id=33 op=UNLOAD Jan 23 18:42:16.974000 audit: BPF prog-id=52 op=LOAD Jan 23 18:42:16.974000 audit: BPF prog-id=53 op=LOAD Jan 23 18:42:16.974000 audit: BPF prog-id=34 op=UNLOAD Jan 23 18:42:16.974000 audit: BPF prog-id=35 op=UNLOAD Jan 23 18:42:16.976000 audit: BPF prog-id=54 op=LOAD Jan 23 18:42:16.976000 audit: BPF prog-id=39 op=UNLOAD Jan 23 18:42:16.977000 audit: BPF prog-id=55 op=LOAD Jan 23 18:42:16.981000 audit: BPF prog-id=36 op=UNLOAD Jan 23 18:42:16.981000 audit: BPF prog-id=56 op=LOAD Jan 23 18:42:16.981000 audit: BPF prog-id=57 op=LOAD Jan 23 18:42:16.981000 audit: BPF prog-id=37 op=UNLOAD Jan 23 18:42:16.981000 audit: BPF prog-id=38 op=UNLOAD Jan 23 18:42:16.991538 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:42:17.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:17.063568 systemd[1]: Finished ensure-sysext.service. Jan 23 18:42:17.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:42:17.294854 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:42:17.298178 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:42:17.313500 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 18:42:17.320775 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:42:17.356679 kernel: kvm_amd: TSC scaling supported Jan 23 18:42:17.356771 kernel: kvm_amd: Nested Virtualization enabled Jan 23 18:42:17.361790 kernel: kvm_amd: Nested Paging enabled Jan 23 18:42:17.366166 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 23 18:42:17.366201 kernel: kvm_amd: PMU virtualization is disabled Jan 23 18:42:17.381552 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:42:17.401636 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:42:17.413822 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:42:17.432725 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:42:17.439474 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:42:17.440080 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 18:42:17.510485 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 18:42:17.524517 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 18:42:17.532820 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:42:17.551350 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 18:42:17.579582 kernel: kauditd_printk_skb: 116 callbacks suppressed Jan 23 18:42:17.579648 kernel: audit: type=1334 audit(1769193737.565:212): prog-id=58 op=LOAD Jan 23 18:42:17.565000 audit: BPF prog-id=58 op=LOAD Jan 23 18:42:17.580519 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:42:17.587000 audit: BPF prog-id=59 op=LOAD Jan 23 18:42:17.596273 kernel: audit: type=1334 audit(1769193737.587:213): prog-id=59 op=LOAD Jan 23 18:42:17.596488 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 23 18:42:17.607603 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 18:42:17.629592 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:42:17.638912 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:42:17.646837 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 18:42:17.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:42:17.657882 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:42:17.658576 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:42:17.677358 kernel: audit: type=1130 audit(1769193737.657:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:17.677490 kernel: audit: type=1305 audit(1769193737.676:215): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 23 18:42:17.676000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 23 18:42:17.677560 augenrules[1533]: No rules Jan 23 18:42:17.719664 kernel: audit: type=1300 audit(1769193737.676:215): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd09341010 a2=420 a3=0 items=0 ppid=1500 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:17.719758 kernel: audit: type=1327 audit(1769193737.676:215): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 18:42:17.676000 audit[1533]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd09341010 a2=420 a3=0 items=0 ppid=1500 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:17.676000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 18:42:17.730186 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:42:17.731735 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 18:42:17.740118 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:42:17.740609 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:42:17.749667 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:42:17.755250 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:42:17.766259 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:42:17.767247 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:42:17.776789 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 18:42:17.792604 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 18:42:17.828743 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:42:17.828868 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:42:17.829129 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 18:42:17.836337 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 23 18:42:17.919075 kernel: EDAC MC: Ver: 3.0.0 Jan 23 18:42:17.947259 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:42:17.950613 systemd-networkd[1525]: lo: Link UP Jan 23 18:42:17.951527 systemd-networkd[1525]: lo: Gained carrier Jan 23 18:42:17.955339 systemd-networkd[1525]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:42:17.955548 systemd-networkd[1525]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:42:17.956698 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:42:17.963129 systemd-networkd[1525]: eth0: Link UP Jan 23 18:42:17.963601 systemd[1]: Reached target network.target - Network. Jan 23 18:42:17.965846 systemd-networkd[1525]: eth0: Gained carrier Jan 23 18:42:17.966226 systemd-networkd[1525]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:42:17.972078 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 18:42:17.983689 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 18:42:17.994317 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 23 18:42:18.005507 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 18:42:18.007293 systemd-networkd[1525]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 23 18:42:18.008536 systemd-timesyncd[1528]: Network configuration changed, trying to establish connection. Jan 23 18:42:18.722192 systemd-resolved[1295]: Clock change detected. Flushing caches. Jan 23 18:42:18.722446 systemd-timesyncd[1528]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 23 18:42:18.722754 systemd-timesyncd[1528]: Initial clock synchronization to Fri 2026-01-23 18:42:18.722157 UTC. Jan 23 18:42:18.756152 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 18:42:19.129294 ldconfig[1512]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 18:42:19.144031 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 18:42:19.160480 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 18:42:19.255298 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 18:42:19.266192 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:42:19.275697 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 18:42:19.286895 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 18:42:19.297698 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 18:42:19.307889 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 18:42:19.317533 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 18:42:19.328689 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 23 18:42:19.339760 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. 
Jan 23 18:42:19.351006 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 18:42:19.362773 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 18:42:19.362903 systemd[1]: Reached target paths.target - Path Units. Jan 23 18:42:19.370545 systemd[1]: Reached target timers.target - Timer Units. Jan 23 18:42:19.380731 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 18:42:19.391940 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 18:42:19.403048 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 18:42:19.412688 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 18:42:19.421751 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 18:42:19.434129 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 18:42:19.442764 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 18:42:19.452728 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 18:42:19.462002 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 18:42:19.469868 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:42:19.478058 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:42:19.478173 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:42:19.479960 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 18:42:19.489894 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 18:42:19.511972 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 18:42:19.523127 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 18:42:19.540992 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 18:42:19.548110 jq[1570]: false Jan 23 18:42:19.548060 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 18:42:19.550681 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 18:42:19.560685 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 18:42:19.569122 extend-filesystems[1571]: Found /dev/vda6 Jan 23 18:42:19.570610 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 18:42:19.574951 oslogin_cache_refresh[1572]: Refreshing passwd entry cache Jan 23 18:42:19.576678 google_oslogin_nss_cache[1572]: oslogin_cache_refresh[1572]: Refreshing passwd entry cache Jan 23 18:42:19.580564 extend-filesystems[1571]: Found /dev/vda9 Jan 23 18:42:19.591042 extend-filesystems[1571]: Checking size of /dev/vda9 Jan 23 18:42:19.589040 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 23 18:42:19.651258 oslogin_cache_refresh[1572]: Failure getting users, quitting Jan 23 18:42:19.656583 google_oslogin_nss_cache[1572]: oslogin_cache_refresh[1572]: Failure getting users, quitting Jan 23 18:42:19.656583 google_oslogin_nss_cache[1572]: oslogin_cache_refresh[1572]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 18:42:19.656583 google_oslogin_nss_cache[1572]: oslogin_cache_refresh[1572]: Refreshing group entry cache Jan 23 18:42:19.592961 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 18:42:19.651286 oslogin_cache_refresh[1572]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 18:42:19.607718 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 18:42:19.651520 oslogin_cache_refresh[1572]: Refreshing group entry cache Jan 23 18:42:19.607989 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 18:42:19.608742 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 18:42:19.612579 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 18:42:19.614774 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 18:42:19.625057 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 18:42:19.625901 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 18:42:19.628511 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 18:42:19.631156 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 18:42:19.631602 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 18:42:19.663571 jq[1587]: true Jan 23 18:42:19.673495 extend-filesystems[1571]: Resized partition /dev/vda9 Jan 23 18:42:19.683029 extend-filesystems[1612]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 18:42:19.698595 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 23 18:42:19.682473 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 18:42:19.698697 google_oslogin_nss_cache[1572]: oslogin_cache_refresh[1572]: Failure getting groups, quitting Jan 23 18:42:19.698697 google_oslogin_nss_cache[1572]: oslogin_cache_refresh[1572]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:42:19.680613 oslogin_cache_refresh[1572]: Failure getting groups, quitting Jan 23 18:42:19.680628 oslogin_cache_refresh[1572]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:42:19.700502 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 18:42:19.709771 update_engine[1585]: I20260123 18:42:19.707656 1585 main.cc:92] Flatcar Update Engine starting Jan 23 18:42:19.711099 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 18:42:19.712941 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 23 18:42:19.720656 jq[1609]: true Jan 23 18:42:19.735675 tar[1590]: linux-amd64/LICENSE Jan 23 18:42:19.738700 tar[1590]: linux-amd64/helm Jan 23 18:42:19.775190 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 23 18:42:19.799286 extend-filesystems[1612]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 23 18:42:19.799286 extend-filesystems[1612]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 23 18:42:19.799286 extend-filesystems[1612]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 23 18:42:19.848991 extend-filesystems[1571]: Resized filesystem in /dev/vda9 Jan 23 18:42:19.805896 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 18:42:19.806229 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 18:42:19.851036 systemd-logind[1584]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 18:42:19.851066 systemd-logind[1584]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 18:42:19.860609 systemd-logind[1584]: New seat seat0. Jan 23 18:42:19.873637 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 18:42:19.898616 dbus-daemon[1568]: [system] SELinux support is enabled Jan 23 18:42:19.898995 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 18:42:19.911011 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 18:42:19.911045 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 18:42:19.911665 dbus-daemon[1568]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 18:42:19.916141 update_engine[1585]: I20260123 18:42:19.916003 1585 update_check_scheduler.cc:74] Next update check in 5m47s Jan 23 18:42:19.920014 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 18:42:19.920120 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 18:42:19.934014 systemd[1]: Started update-engine.service - Update Engine. Jan 23 18:42:19.949281 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 18:42:19.954896 bash[1640]: Updated "/home/core/.ssh/authorized_keys" Jan 23 18:42:19.959789 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 18:42:19.973504 systemd-networkd[1525]: eth0: Gained IPv6LL Jan 23 18:42:19.973563 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 18:42:19.978701 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 18:42:19.990581 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 18:42:20.000704 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 23 18:42:20.015776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:42:20.032237 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 18:42:20.123012 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 23 18:42:20.123564 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Jan 23 18:42:20.124534 locksmithd[1642]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 18:42:20.136474 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 18:42:20.169899 containerd[1599]: time="2026-01-23T18:42:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 18:42:20.172095 containerd[1599]: time="2026-01-23T18:42:20.172065809Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 23 18:42:20.187542 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 18:42:20.197911 containerd[1599]: time="2026-01-23T18:42:20.197281525Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.414µs" Jan 23 18:42:20.197911 containerd[1599]: time="2026-01-23T18:42:20.197725935Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 18:42:20.197911 containerd[1599]: time="2026-01-23T18:42:20.197764687Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 18:42:20.197911 containerd[1599]: time="2026-01-23T18:42:20.197776208Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 18:42:20.199706 containerd[1599]: time="2026-01-23T18:42:20.198025624Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 18:42:20.199706 containerd[1599]: time="2026-01-23T18:42:20.198042215Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:42:20.199706 containerd[1599]: time="2026-01-23T18:42:20.198106114Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:42:20.199706 containerd[1599]: time="2026-01-23T18:42:20.198116504Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:42:20.199706 containerd[1599]: time="2026-01-23T18:42:20.198532180Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:42:20.199706 containerd[1599]: time="2026-01-23T18:42:20.198547919Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:42:20.199706 containerd[1599]: time="2026-01-23T18:42:20.198557747Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:42:20.199706 containerd[1599]: time="2026-01-23T18:42:20.198565312Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 23 18:42:20.199706 containerd[1599]: time="2026-01-23T18:42:20.198730330Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 23 18:42:20.199706 
containerd[1599]: time="2026-01-23T18:42:20.198742312Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 18:42:20.199706 containerd[1599]: time="2026-01-23T18:42:20.198945351Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 18:42:20.199706 containerd[1599]: time="2026-01-23T18:42:20.199173697Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 18:42:20.200113 containerd[1599]: time="2026-01-23T18:42:20.199203112Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 18:42:20.200113 containerd[1599]: time="2026-01-23T18:42:20.199211478Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 18:42:20.200113 containerd[1599]: time="2026-01-23T18:42:20.199522709Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 18:42:20.201023 containerd[1599]: time="2026-01-23T18:42:20.201000829Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 18:42:20.201145 containerd[1599]: time="2026-01-23T18:42:20.201126513Z" level=info msg="metadata content store policy set" policy=shared Jan 23 18:42:20.213041 containerd[1599]: time="2026-01-23T18:42:20.212053234Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 18:42:20.213041 containerd[1599]: time="2026-01-23T18:42:20.212104740Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 23 18:42:20.213041 containerd[1599]: time="2026-01-23T18:42:20.212197243Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 23 18:42:20.213041 containerd[1599]: time="2026-01-23T18:42:20.212214375Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 18:42:20.213041 containerd[1599]: time="2026-01-23T18:42:20.212226918Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 18:42:20.213041 containerd[1599]: time="2026-01-23T18:42:20.212237639Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 18:42:20.213041 containerd[1599]: time="2026-01-23T18:42:20.212247467Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 18:42:20.213041 containerd[1599]: time="2026-01-23T18:42:20.212257716Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 18:42:20.213041 containerd[1599]: time="2026-01-23T18:42:20.212275550Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 18:42:20.213041 containerd[1599]: time="2026-01-23T18:42:20.212297480Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 18:42:20.213041 containerd[1599]: time="2026-01-23T18:42:20.212483287Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 18:42:20.213041 containerd[1599]: time="2026-01-23T18:42:20.212503004Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 18:42:20.213041 containerd[1599]: time="2026-01-23T18:42:20.212514906Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 18:42:20.213041 containerd[1599]: time="2026-01-23T18:42:20.212530515Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 18:42:20.213654 containerd[1599]: time="2026-01-23T18:42:20.213637281Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 18:42:20.213712 containerd[1599]: time="2026-01-23T18:42:20.213700479Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 18:42:20.213758 containerd[1599]: time="2026-01-23T18:42:20.213747437Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 18:42:20.214125 containerd[1599]: time="2026-01-23T18:42:20.213905432Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 18:42:20.214613 containerd[1599]: time="2026-01-23T18:42:20.214452494Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 18:42:20.214613 containerd[1599]: time="2026-01-23T18:42:20.214473743Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 18:42:20.214613 containerd[1599]: time="2026-01-23T18:42:20.214484703Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 18:42:20.214613 containerd[1599]: time="2026-01-23T18:42:20.214493440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 18:42:20.214613 containerd[1599]: time="2026-01-23T18:42:20.214502487Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 18:42:20.214613 containerd[1599]: time="2026-01-23T18:42:20.214511473Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 18:42:20.214613 containerd[1599]: time="2026-01-23T18:42:20.214520570Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 18:42:20.214613 containerd[1599]: time="2026-01-23T18:42:20.214541329Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 18:42:20.214613 containerd[1599]: time="2026-01-23T18:42:20.214581484Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 18:42:20.214613 containerd[1599]: time="2026-01-23T18:42:20.214593006Z" level=info msg="Start snapshots syncer" Jan 23 18:42:20.215256 containerd[1599]: time="2026-01-23T18:42:20.215234243Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 18:42:20.216244 containerd[1599]: time="2026-01-23T18:42:20.216146176Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 18:42:20.216244 containerd[1599]: time="2026-01-23T18:42:20.216194556Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 18:42:20.217467 containerd[1599]: time="2026-01-23T18:42:20.217123260Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 18:42:20.217467 containerd[1599]: time="2026-01-23T18:42:20.217233886Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 18:42:20.217467 containerd[1599]: time="2026-01-23T18:42:20.217251869Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 18:42:20.217467 containerd[1599]: time="2026-01-23T18:42:20.217261527Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 18:42:20.217467 containerd[1599]: time="2026-01-23T18:42:20.217271546Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 18:42:20.217467 containerd[1599]: time="2026-01-23T18:42:20.217292425Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 18:42:20.217467 containerd[1599]: time="2026-01-23T18:42:20.217302665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 18:42:20.217654 containerd[1599]: time="2026-01-23T18:42:20.217637349Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 18:42:20.217714 containerd[1599]: time="2026-01-23T18:42:20.217700818Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 
18:42:20.217758 containerd[1599]: time="2026-01-23T18:42:20.217747996Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 18:42:20.218564 containerd[1599]: time="2026-01-23T18:42:20.218156198Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:42:20.218564 containerd[1599]: time="2026-01-23T18:42:20.218176296Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:42:20.218564 containerd[1599]: time="2026-01-23T18:42:20.218185202Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:42:20.218564 containerd[1599]: time="2026-01-23T18:42:20.218193518Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:42:20.219019 containerd[1599]: time="2026-01-23T18:42:20.218200651Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 18:42:20.219106 containerd[1599]: time="2026-01-23T18:42:20.219086455Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 18:42:20.219182 containerd[1599]: time="2026-01-23T18:42:20.219161846Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 18:42:20.219260 containerd[1599]: time="2026-01-23T18:42:20.219243257Z" level=info msg="runtime interface created" Jan 23 18:42:20.219534 containerd[1599]: time="2026-01-23T18:42:20.219517790Z" level=info msg="created NRI interface" Jan 23 18:42:20.219610 containerd[1599]: time="2026-01-23T18:42:20.219594233Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 18:42:20.220182 containerd[1599]: time="2026-01-23T18:42:20.219656810Z" level=info msg="Connect containerd service" Jan 23 18:42:20.220182 containerd[1599]: time="2026-01-23T18:42:20.219687537Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 18:42:20.223090 containerd[1599]: time="2026-01-23T18:42:20.223068790Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 18:42:20.273535 sshd_keygen[1605]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 18:42:20.325174 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 18:42:20.342964 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 18:42:20.388740 tar[1590]: linux-amd64/README.md Jan 23 18:42:20.392166 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 18:42:20.394136 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 18:42:20.407243 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 18:42:20.407779 containerd[1599]: time="2026-01-23T18:42:20.407287233Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 18:42:20.407779 containerd[1599]: time="2026-01-23T18:42:20.407571873Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 23 18:42:20.407779 containerd[1599]: time="2026-01-23T18:42:20.407600858Z" level=info msg="Start subscribing containerd event" Jan 23 18:42:20.407779 containerd[1599]: time="2026-01-23T18:42:20.407622258Z" level=info msg="Start recovering state" Jan 23 18:42:20.407779 containerd[1599]: time="2026-01-23T18:42:20.407705984Z" level=info msg="Start event monitor" Jan 23 18:42:20.407779 containerd[1599]: time="2026-01-23T18:42:20.407718007Z" level=info msg="Start cni network conf syncer for default" Jan 23 18:42:20.407779 containerd[1599]: time="2026-01-23T18:42:20.407724919Z" level=info msg="Start streaming server" Jan 23 18:42:20.407779 containerd[1599]: time="2026-01-23T18:42:20.407732353Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 18:42:20.407779 containerd[1599]: time="2026-01-23T18:42:20.407739016Z" level=info msg="runtime interface starting up..." Jan 23 18:42:20.407779 containerd[1599]: time="2026-01-23T18:42:20.407744366Z" level=info msg="starting plugins..." Jan 23 18:42:20.407779 containerd[1599]: time="2026-01-23T18:42:20.407756177Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 18:42:20.408217 containerd[1599]: time="2026-01-23T18:42:20.408021854Z" level=info msg="containerd successfully booted in 0.239342s" Jan 23 18:42:20.416196 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 18:42:20.440651 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 18:42:20.453005 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 18:42:20.468686 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 18:42:20.480500 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 18:42:20.490228 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 18:42:21.514714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:42:21.524698 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 18:42:21.533135 systemd[1]: Startup finished in 8.950s (kernel) + 31.586s (initrd) + 11.304s (userspace) = 51.841s. Jan 23 18:42:21.537074 (kubelet)[1708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:42:24.393226 kubelet[1708]: E0123 18:42:24.389112 1708 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:42:24.420931 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:42:24.422646 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:42:24.427001 systemd[1]: kubelet.service: Consumed 3.122s CPU time, 273.1M memory peak. Jan 23 18:42:28.761662 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 18:42:28.764566 systemd[1]: Started sshd@0-10.0.0.80:22-10.0.0.1:34128.service - OpenSSH per-connection server daemon (10.0.0.1:34128). 
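The kubelet above exits because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is normally written by `kubeadm init`/`kubeadm join`, so the crash loop is expected until bootstrap completes. Below is a minimal sketch (standard-library Go) that checks for the file and drops a bare placeholder if it is missing; the apiVersion/kind header and the systemd cgroup driver match well-known KubeletConfiguration fields, but the skeleton is purely illustrative, not the file kubeadm would generate.

```go
// kubeletcfg_check.go - illustrative only: verify the config file the kubelet
// unit keeps failing on, and drop a bare skeleton if it is missing.
// On a real kubeadm node, `kubeadm init`/`kubeadm join` writes this file.
package main

import (
	"fmt"
	"os"
)

const kubeletConfigPath = "/var/lib/kubelet/config.yaml"

// skeleton is a placeholder, not what kubeadm generates; cgroupDriver matches
// the "cgroupDriver=systemd" setting reported later in this log.
const skeleton = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
`

func main() {
	if _, err := os.Stat(kubeletConfigPath); err == nil {
		fmt.Println("kubelet config already present:", kubeletConfigPath)
		return
	} else if !os.IsNotExist(err) {
		fmt.Fprintln(os.Stderr, "stat failed:", err)
		os.Exit(1)
	}
	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := os.WriteFile(kubeletConfigPath, []byte(skeleton), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("wrote placeholder", kubeletConfigPath)
}
```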
Jan 23 18:42:28.935456 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 34128 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:42:28.939971 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:42:28.962157 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 18:42:28.964616 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 18:42:28.975499 systemd-logind[1584]: New session 1 of user core. Jan 23 18:42:29.021568 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 18:42:29.026655 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 18:42:29.068754 (systemd)[1728]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:42:29.077222 systemd-logind[1584]: New session 2 of user core. Jan 23 18:42:29.745981 systemd[1728]: Queued start job for default target default.target. Jan 23 18:42:29.771707 systemd[1728]: Created slice app.slice - User Application Slice. Jan 23 18:42:29.771746 systemd[1728]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 23 18:42:29.771769 systemd[1728]: Reached target paths.target - Paths. Jan 23 18:42:29.771942 systemd[1728]: Reached target timers.target - Timers. Jan 23 18:42:29.775143 systemd[1728]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 18:42:29.777144 systemd[1728]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 23 18:42:29.798578 systemd[1728]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 18:42:29.798704 systemd[1728]: Reached target sockets.target - Sockets. Jan 23 18:42:29.805192 systemd[1728]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 23 18:42:29.805593 systemd[1728]: Reached target basic.target - Basic System. Jan 23 18:42:29.805723 systemd[1728]: Reached target default.target - Main User Target. Jan 23 18:42:29.805758 systemd[1728]: Startup finished in 705ms. Jan 23 18:42:29.806576 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 18:42:29.816953 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 18:42:29.875228 systemd[1]: Started sshd@1-10.0.0.80:22-10.0.0.1:34132.service - OpenSSH per-connection server daemon (10.0.0.1:34132). Jan 23 18:42:29.983245 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 34132 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:42:29.986633 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:42:29.997652 systemd-logind[1584]: New session 3 of user core. Jan 23 18:42:30.012924 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 18:42:30.048837 sshd[1746]: Connection closed by 10.0.0.1 port 34132 Jan 23 18:42:30.049615 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Jan 23 18:42:30.066481 systemd[1]: sshd@1-10.0.0.80:22-10.0.0.1:34132.service: Deactivated successfully. Jan 23 18:42:30.069600 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 18:42:30.071462 systemd-logind[1584]: Session 3 logged out. Waiting for processes to exit. Jan 23 18:42:30.078615 systemd[1]: Started sshd@2-10.0.0.80:22-10.0.0.1:34134.service - OpenSSH per-connection server daemon (10.0.0.1:34134). Jan 23 18:42:30.079994 systemd-logind[1584]: Removed session 3. 
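Each accepted login above identifies the client key by an OpenSSH-style fingerprint (`SHA256:0X6B…`). As a hedged aside, the sketch below reproduces that fingerprint format with golang.org/x/crypto/ssh; it generates a throwaway ed25519 key purely for demonstration rather than touching the host's real keys.

```go
// fingerprint.go - print an OpenSSH SHA256 fingerprint in the same format
// sshd logs on "Accepted publickey for core ...". The key is a throwaway
// generated on the spot, not one of this host's keys.
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate a disposable key just to have something to fingerprint.
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	sshPub, err := ssh.NewPublicKey(pub)
	if err != nil {
		log.Fatal(err)
	}
	// Prints "SHA256:<unpadded base64 of the key's SHA-256 digest>".
	fmt.Println(ssh.FingerprintSHA256(sshPub))
}
```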
Jan 23 18:42:30.181704 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 34134 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:42:30.184092 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:42:30.194220 systemd-logind[1584]: New session 4 of user core. Jan 23 18:42:30.223076 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 18:42:30.262611 sshd[1756]: Connection closed by 10.0.0.1 port 34134 Jan 23 18:42:30.262723 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Jan 23 18:42:30.275172 systemd[1]: sshd@2-10.0.0.80:22-10.0.0.1:34134.service: Deactivated successfully. Jan 23 18:42:30.278082 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 18:42:30.280782 systemd-logind[1584]: Session 4 logged out. Waiting for processes to exit. Jan 23 18:42:30.285606 systemd[1]: Started sshd@3-10.0.0.80:22-10.0.0.1:34136.service - OpenSSH per-connection server daemon (10.0.0.1:34136). Jan 23 18:42:30.286810 systemd-logind[1584]: Removed session 4. Jan 23 18:42:30.392661 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 34136 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:42:30.395799 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:42:30.408712 systemd-logind[1584]: New session 5 of user core. Jan 23 18:42:30.437241 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 18:42:30.483019 sshd[1766]: Connection closed by 10.0.0.1 port 34136 Jan 23 18:42:30.483579 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Jan 23 18:42:30.499508 systemd[1]: sshd@3-10.0.0.80:22-10.0.0.1:34136.service: Deactivated successfully. Jan 23 18:42:30.503037 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 18:42:30.505051 systemd-logind[1584]: Session 5 logged out. Waiting for processes to exit. Jan 23 18:42:30.510674 systemd[1]: Started sshd@4-10.0.0.80:22-10.0.0.1:34140.service - OpenSSH per-connection server daemon (10.0.0.1:34140). Jan 23 18:42:30.511928 systemd-logind[1584]: Removed session 5. Jan 23 18:42:30.637810 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 34140 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:42:30.641178 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:42:30.662260 systemd-logind[1584]: New session 6 of user core. Jan 23 18:42:30.690207 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 18:42:30.749072 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 18:42:30.750087 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:42:34.638085 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 18:42:34.642682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:42:35.664259 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 18:42:35.686008 (kubelet)[1808]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:42:36.253255 kubelet[1808]: E0123 18:42:36.252650 1808 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:42:36.264278 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:42:36.264735 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:42:36.265612 systemd[1]: kubelet.service: Consumed 1.267s CPU time, 110.4M memory peak. Jan 23 18:42:36.343811 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 18:42:36.403628 (dockerd)[1818]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 18:42:40.844849 dockerd[1818]: time="2026-01-23T18:42:40.843544555Z" level=info msg="Starting up" Jan 23 18:42:40.851642 dockerd[1818]: time="2026-01-23T18:42:40.851198192Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 18:42:41.090888 dockerd[1818]: time="2026-01-23T18:42:41.089639542Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 18:42:42.906300 dockerd[1818]: time="2026-01-23T18:42:42.905556463Z" level=info msg="Loading containers: start." Jan 23 18:42:42.945479 kernel: Initializing XFRM netlink socket Jan 23 18:42:44.105008 systemd-networkd[1525]: docker0: Link UP Jan 23 18:42:44.118820 dockerd[1818]: time="2026-01-23T18:42:44.118560171Z" level=info msg="Loading containers: done." Jan 23 18:42:44.226177 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1659279135-merged.mount: Deactivated successfully. Jan 23 18:42:44.232746 dockerd[1818]: time="2026-01-23T18:42:44.232607606Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 18:42:44.234189 dockerd[1818]: time="2026-01-23T18:42:44.234061731Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 18:42:44.235151 dockerd[1818]: time="2026-01-23T18:42:44.234807082Z" level=info msg="Initializing buildkit" Jan 23 18:42:44.340303 dockerd[1818]: time="2026-01-23T18:42:44.339865828Z" level=info msg="Completed buildkit initialization" Jan 23 18:42:44.360060 dockerd[1818]: time="2026-01-23T18:42:44.359703256Z" level=info msg="Daemon has completed initialization" Jan 23 18:42:44.361202 dockerd[1818]: time="2026-01-23T18:42:44.360193052Z" level=info msg="API listen on /run/docker.sock" Jan 23 18:42:44.361752 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 18:42:46.384767 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 18:42:46.392005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:42:47.619843 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
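dockerd above finishes initialization with "API listen on /run/docker.sock". The sketch below talks to that Unix socket with only the Go standard library, hitting the Engine API's documented /_ping and /version endpoints; the "http://docker" host name is a dummy, since the custom transport dials the socket directly.

```go
// dockerping.go - poke the Docker Engine API over the Unix socket the daemon
// just announced ("API listen on /run/docker.sock"). Standard library only.
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"net"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Ignore the host in the URL and dial the Unix socket instead.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", "/run/docker.sock")
			},
		},
	}

	for _, path := range []string{"/_ping", "/version"} {
		resp, err := client.Get("http://docker" + path)
		if err != nil {
			log.Fatalf("GET %s: %v", path, err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %s\n%s\n", path, resp.Status, body)
	}
}
```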
Jan 23 18:42:47.637782 (kubelet)[2043]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:42:48.055763 kubelet[2043]: E0123 18:42:48.054181 2043 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:42:48.059682 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:42:48.060032 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:42:48.060894 systemd[1]: kubelet.service: Consumed 1.352s CPU time, 112.8M memory peak. Jan 23 18:42:49.293569 containerd[1599]: time="2026-01-23T18:42:49.292823040Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 23 18:42:50.055629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3449064927.mount: Deactivated successfully. Jan 23 18:42:54.373651 containerd[1599]: time="2026-01-23T18:42:54.373094627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:42:54.375653 containerd[1599]: time="2026-01-23T18:42:54.375608369Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28446203" Jan 23 18:42:54.378716 containerd[1599]: time="2026-01-23T18:42:54.378200650Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:42:54.385687 containerd[1599]: time="2026-01-23T18:42:54.385530218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:42:54.387280 containerd[1599]: time="2026-01-23T18:42:54.387049644Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 5.094189044s" Jan 23 18:42:54.387280 containerd[1599]: time="2026-01-23T18:42:54.387193968Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 23 18:42:54.397286 containerd[1599]: time="2026-01-23T18:42:54.396943063Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 23 18:42:58.162031 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 18:42:58.191556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:42:59.800132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
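The PullImage/ImageCreate entries above come from containerd's CRI plugin; the same pull can be reproduced with containerd's Go client against the socket announced earlier in the log. This is a sketch under two assumptions: it uses the long-standing 1.x client import path (containerd 2.x, as running here, ships the client as github.com/containerd/containerd/v2/client), and it targets the "k8s.io" namespace registered with NRI above.

```go
// pullimage.go - illustrative pull of one of the control-plane images seen in
// the log, via containerd's Go client. Import path assumes the 1.x client;
// containerd 2.x moved it to github.com/containerd/containerd/v2/client.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	// The CRI plugin registered the "k8s.io" namespace earlier in this log.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.33.7", containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull: %v", err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}
```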
Jan 23 18:42:59.865724 (kubelet)[2126]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:43:00.827812 kubelet[2126]: E0123 18:43:00.827083 2126 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:43:00.836017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:43:00.836901 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:43:00.838170 systemd[1]: kubelet.service: Consumed 2.405s CPU time, 111.2M memory peak. Jan 23 18:43:03.685156 containerd[1599]: time="2026-01-23T18:43:03.683976998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:03.685156 containerd[1599]: time="2026-01-23T18:43:03.685280502Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26011378" Jan 23 18:43:03.689069 containerd[1599]: time="2026-01-23T18:43:03.688926972Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:03.698100 containerd[1599]: time="2026-01-23T18:43:03.697260837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:03.703420 containerd[1599]: time="2026-01-23T18:43:03.702994287Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 9.306008587s" Jan 23 18:43:03.703608 containerd[1599]: time="2026-01-23T18:43:03.703475856Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 23 18:43:03.731467 containerd[1599]: time="2026-01-23T18:43:03.730220551Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 23 18:43:04.717198 update_engine[1585]: I20260123 18:43:04.712993 1585 update_attempter.cc:509] Updating boot flags... 
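For the kube-controller-manager pull above, the log records both the reported image size (27,673,815 bytes) and the wall-clock pull time (9.306 s), which works out to roughly 2.97 MB/s effective throughput. A two-line check of that arithmetic:

```go
// throughput.go - back-of-the-envelope rate for the kube-controller-manager
// pull logged above: 27,673,815 bytes in 9.306008587s ≈ 2.97 MB/s.
package main

import "fmt"

func main() {
	const bytesPulled = 27673815.0 // "size" reported by containerd
	const seconds = 9.306008587    // pull duration reported by containerd
	fmt.Printf("%.2f MB/s (%.2f MiB/s)\n",
		bytesPulled/seconds/1e6,
		bytesPulled/seconds/(1<<20))
}
```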
Jan 23 18:43:09.184689 containerd[1599]: time="2026-01-23T18:43:09.183128146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:09.190178 containerd[1599]: time="2026-01-23T18:43:09.187536567Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Jan 23 18:43:09.193287 containerd[1599]: time="2026-01-23T18:43:09.193108463Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:09.201286 containerd[1599]: time="2026-01-23T18:43:09.201239821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:09.202994 containerd[1599]: time="2026-01-23T18:43:09.202582889Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 5.471737165s" Jan 23 18:43:09.202994 containerd[1599]: time="2026-01-23T18:43:09.202803526Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 23 18:43:09.206225 containerd[1599]: time="2026-01-23T18:43:09.206010622Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 23 18:43:11.466245 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 18:43:11.473218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:43:12.798960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:43:12.920865 (kubelet)[2167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:43:13.328723 kubelet[2167]: E0123 18:43:13.328033 2167 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:43:13.400040 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:43:13.400779 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:43:13.402863 systemd[1]: kubelet.service: Consumed 962ms CPU time, 110.8M memory peak. Jan 23 18:43:13.683013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3644804284.mount: Deactivated successfully. 
Jan 23 18:43:15.606654 containerd[1599]: time="2026-01-23T18:43:15.606116465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:15.609724 containerd[1599]: time="2026-01-23T18:43:15.609067692Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374" Jan 23 18:43:15.613164 containerd[1599]: time="2026-01-23T18:43:15.613129909Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:15.618619 containerd[1599]: time="2026-01-23T18:43:15.618145634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:15.619619 containerd[1599]: time="2026-01-23T18:43:15.619103306Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 6.41298773s" Jan 23 18:43:15.619619 containerd[1599]: time="2026-01-23T18:43:15.619231835Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 23 18:43:15.622167 containerd[1599]: time="2026-01-23T18:43:15.622140602Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 23 18:43:16.395201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1285890768.mount: Deactivated successfully. 
Jan 23 18:43:21.073293 containerd[1599]: time="2026-01-23T18:43:21.073140016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:21.078819 containerd[1599]: time="2026-01-23T18:43:21.078284622Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20581532" Jan 23 18:43:21.088273 containerd[1599]: time="2026-01-23T18:43:21.088202252Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:21.105107 containerd[1599]: time="2026-01-23T18:43:21.104778625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:21.113113 containerd[1599]: time="2026-01-23T18:43:21.110274594Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 5.487894305s" Jan 23 18:43:21.113113 containerd[1599]: time="2026-01-23T18:43:21.112715753Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 23 18:43:21.120106 containerd[1599]: time="2026-01-23T18:43:21.119099491Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 18:43:21.924058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3920768894.mount: Deactivated successfully. 
Jan 23 18:43:21.985015 containerd[1599]: time="2026-01-23T18:43:21.984854533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:43:21.992005 containerd[1599]: time="2026-01-23T18:43:21.991955736Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=501" Jan 23 18:43:21.997803 containerd[1599]: time="2026-01-23T18:43:21.997767494Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:43:22.012748 containerd[1599]: time="2026-01-23T18:43:22.012685030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:43:22.018671 containerd[1599]: time="2026-01-23T18:43:22.012818595Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 893.664794ms" Jan 23 18:43:22.018671 containerd[1599]: time="2026-01-23T18:43:22.013915811Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 23 18:43:22.026697 containerd[1599]: time="2026-01-23T18:43:22.026149259Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 23 18:43:22.997175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1296179054.mount: Deactivated successfully. Jan 23 18:43:23.620243 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 23 18:43:23.626893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:43:24.070881 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:43:24.099084 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:43:24.267749 kubelet[2281]: E0123 18:43:24.266194 2281 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:43:24.275913 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:43:24.276244 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:43:24.279290 systemd[1]: kubelet.service: Consumed 468ms CPU time, 110.6M memory peak. 
Jan 23 18:43:29.188133 containerd[1599]: time="2026-01-23T18:43:29.187159917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:29.192937 containerd[1599]: time="2026-01-23T18:43:29.192171580Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=56977083" Jan 23 18:43:29.196107 containerd[1599]: time="2026-01-23T18:43:29.195963541Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:29.206659 containerd[1599]: time="2026-01-23T18:43:29.206158651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:29.208823 containerd[1599]: time="2026-01-23T18:43:29.208134138Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 7.181931611s" Jan 23 18:43:29.208823 containerd[1599]: time="2026-01-23T18:43:29.208284970Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 23 18:43:34.376294 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 23 18:43:34.380704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:43:35.087205 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:43:35.114660 (kubelet)[2341]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:43:36.139694 kubelet[2341]: E0123 18:43:36.138923 2341 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:43:36.146195 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:43:36.147112 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:43:36.150781 systemd[1]: kubelet.service: Consumed 1.617s CPU time, 108.4M memory peak. Jan 23 18:43:37.159231 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:43:37.159919 systemd[1]: kubelet.service: Consumed 1.617s CPU time, 108.4M memory peak. Jan 23 18:43:37.167753 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:43:37.245294 systemd[1]: Reload requested from client PID 2358 ('systemctl') (unit session-6.scope)... Jan 23 18:43:37.245754 systemd[1]: Reloading... Jan 23 18:43:37.527985 zram_generator::config[2406]: No configuration found. Jan 23 18:43:38.035791 systemd[1]: Reloading finished in 789 ms. Jan 23 18:43:38.303254 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 18:43:38.304038 systemd[1]: kubelet.service: Failed with result 'signal'. 
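The "Reload requested from client PID 2358 ('systemctl')" entry is a daemon-reload issued from the SSH session, after which kubelet.service's control process is stopped with SIGTERM and the unit is started fresh. The sketch below drives the same restart programmatically over systemd's D-Bus API using github.com/coreos/go-systemd/v22/dbus; treat it as an illustration of that library rather than a definitive recipe.

```go
// restartkubelet.go - the log shows `systemctl` from session-6 reloading the
// manager and kubelet.service being restarted; this does the restart via
// systemd's D-Bus API (github.com/coreos/go-systemd/v22/dbus).
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()

	conn, err := dbus.NewSystemConnectionContext(ctx)
	if err != nil {
		log.Fatalf("connect to systemd: %v", err)
	}
	defer conn.Close()

	// Equivalent of `systemctl restart kubelet.service`; the channel receives
	// the job result ("done", "failed", ...).
	result := make(chan string, 1)
	if _, err := conn.RestartUnitContext(ctx, "kubelet.service", "replace", result); err != nil {
		log.Fatalf("restart: %v", err)
	}
	fmt.Println("restart job finished:", <-result)
}
```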
Jan 23 18:43:38.305118 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:43:38.305180 systemd[1]: kubelet.service: Consumed 327ms CPU time, 98.7M memory peak. Jan 23 18:43:38.311269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:43:38.822191 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:43:38.845030 (kubelet)[2451]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:43:39.219305 kubelet[2451]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:43:39.219305 kubelet[2451]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:43:39.219305 kubelet[2451]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:43:39.219305 kubelet[2451]: I0123 18:43:39.218779 2451 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:43:39.714747 kubelet[2451]: I0123 18:43:39.714196 2451 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 18:43:39.714747 kubelet[2451]: I0123 18:43:39.714747 2451 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:43:39.715907 kubelet[2451]: I0123 18:43:39.715662 2451 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 18:43:39.845734 kubelet[2451]: E0123 18:43:39.841702 2451 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 18:43:39.879178 kubelet[2451]: I0123 18:43:39.878714 2451 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:43:40.020728 kubelet[2451]: I0123 18:43:40.020204 2451 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:43:40.037763 kubelet[2451]: I0123 18:43:40.037089 2451 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 18:43:40.039306 kubelet[2451]: I0123 18:43:40.039054 2451 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:43:40.040264 kubelet[2451]: I0123 18:43:40.039186 2451 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:43:40.040264 kubelet[2451]: I0123 18:43:40.040246 2451 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 18:43:40.043284 kubelet[2451]: I0123 18:43:40.040269 2451 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 18:43:40.043284 kubelet[2451]: I0123 18:43:40.041198 2451 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:43:40.048244 kubelet[2451]: I0123 18:43:40.048074 2451 kubelet.go:480] "Attempting to sync node with API server" Jan 23 18:43:40.049131 kubelet[2451]: I0123 18:43:40.048649 2451 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:43:40.049131 kubelet[2451]: I0123 18:43:40.048998 2451 kubelet.go:386] "Adding apiserver pod source" Jan 23 18:43:40.049131 kubelet[2451]: I0123 18:43:40.049024 2451 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:43:40.075008 kubelet[2451]: I0123 18:43:40.074730 2451 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 18:43:40.078704 kubelet[2451]: I0123 18:43:40.078154 2451 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 18:43:40.082163 kubelet[2451]: W0123 18:43:40.081819 2451 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
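The NodeConfig dump above carries the hard-eviction thresholds the kubelet will enforce (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%). The short sketch below shows how a percentage-style threshold translates into an absolute cutoff against a capacity figure; it illustrates the arithmetic only and is not kubelet's eviction_manager code, and the 40 GiB capacity is an assumed example.

```go
// evictioncutoff.go - turn the percentage-style hard-eviction thresholds from
// the container-manager config dump into absolute cutoffs. Illustrative
// arithmetic only, not kubelet's eviction implementation.
package main

import "fmt"

type signal struct {
	name       string
	percentage float64 // fraction of capacity, as in the config dump
}

func main() {
	// Example capacity; a real node would read this from filesystem stats.
	const nodefsCapacityBytes = 40 * 1 << 30 // assume a 40 GiB root filesystem

	signals := []signal{
		{"nodefs.available", 0.10},
		{"imagefs.available", 0.15},
		{"nodefs.inodesFree", 0.05}, // really applied to inode counts; shown against bytes for simplicity
	}
	for _, s := range signals {
		cutoff := s.percentage * nodefsCapacityBytes
		fmt.Printf("%-18s eviction below %.1f GiB free\n", s.name, cutoff/(1<<30))
	}
}
```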
Jan 23 18:43:40.111683 kubelet[2451]: E0123 18:43:40.111048 2451 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 18:43:40.112972 kubelet[2451]: E0123 18:43:40.112113 2451 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 18:43:40.121871 kubelet[2451]: I0123 18:43:40.121284 2451 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 18:43:40.129197 kubelet[2451]: I0123 18:43:40.129005 2451 server.go:1289] "Started kubelet" Jan 23 18:43:40.130284 kubelet[2451]: I0123 18:43:40.130233 2451 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:43:40.136714 kubelet[2451]: I0123 18:43:40.135827 2451 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:43:40.136779 kubelet[2451]: I0123 18:43:40.136766 2451 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:43:40.148222 kubelet[2451]: I0123 18:43:40.148202 2451 server.go:317] "Adding debug handlers to kubelet server" Jan 23 18:43:40.149737 kubelet[2451]: I0123 18:43:40.149173 2451 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:43:40.160881 kubelet[2451]: E0123 18:43:40.153074 2451 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.80:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.80:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d7067e32e1a4c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 18:43:40.12186478 +0000 UTC m=+1.258114422,LastTimestamp:2026-01-23 18:43:40.12186478 +0000 UTC m=+1.258114422,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 18:43:40.166743 kubelet[2451]: E0123 18:43:40.166120 2451 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:43:40.166743 kubelet[2451]: I0123 18:43:40.166280 2451 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 18:43:40.172774 kubelet[2451]: E0123 18:43:40.172128 2451 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="200ms" Jan 23 18:43:40.176991 kubelet[2451]: I0123 18:43:40.175599 2451 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:43:40.176991 kubelet[2451]: I0123 18:43:40.177666 2451 desired_state_of_world_populator.go:150] 
"Desired state populator starts to run" Jan 23 18:43:40.178775 kubelet[2451]: E0123 18:43:40.178259 2451 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 18:43:40.179667 kubelet[2451]: I0123 18:43:40.178880 2451 reconciler.go:26] "Reconciler: start to sync state" Jan 23 18:43:40.183132 kubelet[2451]: E0123 18:43:40.182877 2451 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 18:43:40.189086 kubelet[2451]: I0123 18:43:40.188913 2451 factory.go:223] Registration of the systemd container factory successfully Jan 23 18:43:40.189155 kubelet[2451]: I0123 18:43:40.189118 2451 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:43:40.207719 kubelet[2451]: I0123 18:43:40.205041 2451 factory.go:223] Registration of the containerd container factory successfully Jan 23 18:43:40.309725 kubelet[2451]: E0123 18:43:40.307968 2451 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:43:40.374651 kubelet[2451]: I0123 18:43:40.373866 2451 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:43:40.374651 kubelet[2451]: I0123 18:43:40.374010 2451 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:43:40.374651 kubelet[2451]: I0123 18:43:40.374032 2451 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:43:40.375691 kubelet[2451]: E0123 18:43:40.374952 2451 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="400ms" Jan 23 18:43:40.385144 kubelet[2451]: I0123 18:43:40.384962 2451 policy_none.go:49] "None policy: Start" Jan 23 18:43:40.385144 kubelet[2451]: I0123 18:43:40.385105 2451 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 18:43:40.385144 kubelet[2451]: I0123 18:43:40.385121 2451 state_mem.go:35] "Initializing new in-memory state store" Jan 23 18:43:40.409978 kubelet[2451]: E0123 18:43:40.409949 2451 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:43:40.415097 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 18:43:40.427706 kubelet[2451]: I0123 18:43:40.427224 2451 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 18:43:40.437232 kubelet[2451]: I0123 18:43:40.437095 2451 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 18:43:40.439157 kubelet[2451]: I0123 18:43:40.438819 2451 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 18:43:40.439157 kubelet[2451]: I0123 18:43:40.439068 2451 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 18:43:40.439157 kubelet[2451]: I0123 18:43:40.439081 2451 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 18:43:40.440129 kubelet[2451]: E0123 18:43:40.439240 2451 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:43:40.441236 kubelet[2451]: E0123 18:43:40.441012 2451 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 18:43:40.464701 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 18:43:40.478018 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 18:43:40.499032 kubelet[2451]: E0123 18:43:40.497861 2451 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 18:43:40.499032 kubelet[2451]: I0123 18:43:40.498180 2451 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:43:40.499032 kubelet[2451]: I0123 18:43:40.498197 2451 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:43:40.500701 kubelet[2451]: I0123 18:43:40.499961 2451 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:43:40.505806 kubelet[2451]: E0123 18:43:40.505184 2451 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 18:43:40.506045 kubelet[2451]: E0123 18:43:40.505899 2451 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 18:43:40.603164 systemd[1]: Created slice kubepods-burstable-pod9d23cdbff3ded24092d20a87eecf02e8.slice - libcontainer container kubepods-burstable-pod9d23cdbff3ded24092d20a87eecf02e8.slice. 
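The kubepods.slice, kubepods-burstable.slice, and kubepods-besteffort.slice units created above are the cgroup parents for the pod QoS classes (Guaranteed pods sit directly under kubepods.slice). With the cgroup v2 unified hierarchy implied by "CgroupVersion":2 in the config dump, their controller files live under /sys/fs/cgroup; the sketch below reads a couple of them, with that mount point assumed.

```go
// qosslices.go - peek at the QoS cgroup slices the kubelet just created.
// Assumes the cgroup v2 unified hierarchy mounted at /sys/fs/cgroup, which
// matches "CgroupVersion":2 in the container-manager config above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	slices := []string{
		"kubepods.slice",
		"kubepods.slice/kubepods-burstable.slice",
		"kubepods.slice/kubepods-besteffort.slice",
	}
	for _, s := range slices {
		for _, file := range []string{"memory.max", "cpu.weight"} {
			path := filepath.Join("/sys/fs/cgroup", s, file)
			data, err := os.ReadFile(path)
			if err != nil {
				fmt.Printf("%-60s <%v>\n", path, err)
				continue
			}
			fmt.Printf("%-60s %s\n", path, strings.TrimSpace(string(data)))
		}
	}
}
```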
Jan 23 18:43:40.611283 kubelet[2451]: I0123 18:43:40.606205 2451 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:43:40.611283 kubelet[2451]: E0123 18:43:40.607911 2451 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Jan 23 18:43:40.611283 kubelet[2451]: I0123 18:43:40.610711 2451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d23cdbff3ded24092d20a87eecf02e8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d23cdbff3ded24092d20a87eecf02e8\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:43:40.618860 kubelet[2451]: I0123 18:43:40.611949 2451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d23cdbff3ded24092d20a87eecf02e8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d23cdbff3ded24092d20a87eecf02e8\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:43:40.618860 kubelet[2451]: I0123 18:43:40.612112 2451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d23cdbff3ded24092d20a87eecf02e8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9d23cdbff3ded24092d20a87eecf02e8\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:43:40.635115 kubelet[2451]: E0123 18:43:40.634870 2451 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:40.640101 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 23 18:43:40.653261 kubelet[2451]: E0123 18:43:40.652909 2451 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:40.673193 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
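Note: the VerifyControllerAttachedVolume entries above list the hostPath volumes of the static kube-apiserver pod (ca-certs, k8s-certs, usr-share-ca-certificates). A rough sketch of how such volumes look when built with the Kubernetes API types is below; the volume names come from the log, while the host paths shown are the usual kubeadm defaults and are an assumption here, not something the log states.

```go
// volumes.go - a sketch (assumed paths) of the hostPath volumes whose mount
// checks are logged above for kube-apiserver-localhost. Requires k8s.io/api.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func hostPathVol(name, path string) corev1.Volume {
	t := corev1.HostPathDirectoryOrCreate
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: path, Type: &t},
		},
	}
}

func main() {
	// Volume names are from the log; the host paths are typical kubeadm
	// defaults and are only an illustrative assumption.
	vols := []corev1.Volume{
		hostPathVol("ca-certs", "/etc/ssl/certs"),
		hostPathVol("k8s-certs", "/etc/kubernetes/pki"),
		hostPathVol("usr-share-ca-certificates", "/usr/share/ca-certificates"),
	}
	for _, v := range vols {
		fmt.Printf("%s -> %s\n", v.Name, v.HostPath.Path)
	}
}
```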
Jan 23 18:43:40.699811 kubelet[2451]: E0123 18:43:40.699779 2451 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:40.713162 kubelet[2451]: I0123 18:43:40.713134 2451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:43:40.713807 kubelet[2451]: I0123 18:43:40.713786 2451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:43:40.713923 kubelet[2451]: I0123 18:43:40.713902 2451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:43:40.714023 kubelet[2451]: I0123 18:43:40.714006 2451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:43:40.714118 kubelet[2451]: I0123 18:43:40.714101 2451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:43:40.714209 kubelet[2451]: I0123 18:43:40.714192 2451 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 23 18:43:40.778164 kubelet[2451]: E0123 18:43:40.777969 2451 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="800ms" Jan 23 18:43:40.813851 kubelet[2451]: I0123 18:43:40.813693 2451 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:43:40.815808 kubelet[2451]: E0123 18:43:40.815257 2451 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Jan 23 18:43:40.933048 kubelet[2451]: E0123 18:43:40.932973 2451 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 18:43:40.935282 kubelet[2451]: E0123 18:43:40.934800 2451 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 18:43:40.936796 kubelet[2451]: E0123 18:43:40.936275 2451 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:40.939546 containerd[1599]: time="2026-01-23T18:43:40.939253186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9d23cdbff3ded24092d20a87eecf02e8,Namespace:kube-system,Attempt:0,}" Jan 23 18:43:40.956209 kubelet[2451]: E0123 18:43:40.955302 2451 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:40.958661 containerd[1599]: time="2026-01-23T18:43:40.957881106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 23 18:43:41.001716 kubelet[2451]: E0123 18:43:41.001140 2451 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:41.004110 containerd[1599]: time="2026-01-23T18:43:41.003304617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 23 18:43:41.097140 containerd[1599]: time="2026-01-23T18:43:41.096155298Z" level=info msg="connecting to shim 5f4a23ae83f341db273b7effa3d7cabf03f65abe42fe67129bf445cc6a43d4d4" address="unix:///run/containerd/s/837154dd883d510072e33dab0cd406c7b3ff05142584ab38478e3629fa7c5706" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:43:41.123835 containerd[1599]: time="2026-01-23T18:43:41.123074664Z" level=info msg="connecting to shim 95823c672a95e7620a74c646b75989000f8686863131af9d993e02841c125de5" address="unix:///run/containerd/s/e0659a12de9ac8e4d0a76159f97f625795a24534da4e74ee122baeb04a960f55" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:43:41.192984 containerd[1599]: time="2026-01-23T18:43:41.192208665Z" level=info msg="connecting to shim d0be59c2aaa32934f0abc553f29a4e5bf4e31da06e4199de7791c4a9d4ec4bfa" address="unix:///run/containerd/s/d3616da9ae0ab616b003d04927e144bf4e41eb8115b75e8272986477e7b8a923" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:43:41.226067 kubelet[2451]: I0123 18:43:41.225995 2451 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:43:41.227265 kubelet[2451]: E0123 18:43:41.226845 2451 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Jan 23 18:43:41.243209 systemd[1]: Started 
cri-containerd-5f4a23ae83f341db273b7effa3d7cabf03f65abe42fe67129bf445cc6a43d4d4.scope - libcontainer container 5f4a23ae83f341db273b7effa3d7cabf03f65abe42fe67129bf445cc6a43d4d4. Jan 23 18:43:41.277745 systemd[1]: Started cri-containerd-95823c672a95e7620a74c646b75989000f8686863131af9d993e02841c125de5.scope - libcontainer container 95823c672a95e7620a74c646b75989000f8686863131af9d993e02841c125de5. Jan 23 18:43:41.347056 systemd[1]: Started cri-containerd-d0be59c2aaa32934f0abc553f29a4e5bf4e31da06e4199de7791c4a9d4ec4bfa.scope - libcontainer container d0be59c2aaa32934f0abc553f29a4e5bf4e31da06e4199de7791c4a9d4ec4bfa. Jan 23 18:43:41.449985 kubelet[2451]: E0123 18:43:41.447777 2451 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 18:43:41.479245 containerd[1599]: time="2026-01-23T18:43:41.479158938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f4a23ae83f341db273b7effa3d7cabf03f65abe42fe67129bf445cc6a43d4d4\"" Jan 23 18:43:41.493058 kubelet[2451]: E0123 18:43:41.492904 2451 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:41.518833 containerd[1599]: time="2026-01-23T18:43:41.518221768Z" level=info msg="CreateContainer within sandbox \"5f4a23ae83f341db273b7effa3d7cabf03f65abe42fe67129bf445cc6a43d4d4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 18:43:41.545955 kubelet[2451]: E0123 18:43:41.545780 2451 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 18:43:41.557767 containerd[1599]: time="2026-01-23T18:43:41.555908799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9d23cdbff3ded24092d20a87eecf02e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"95823c672a95e7620a74c646b75989000f8686863131af9d993e02841c125de5\"" Jan 23 18:43:41.560821 kubelet[2451]: E0123 18:43:41.560755 2451 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:41.576279 containerd[1599]: time="2026-01-23T18:43:41.575992188Z" level=info msg="Container 422f0b745d75122a8ae1d1f10ec9e4ab0d0134c88d646362475894846a497dd3: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:43:41.580755 containerd[1599]: time="2026-01-23T18:43:41.580110789Z" level=info msg="CreateContainer within sandbox \"95823c672a95e7620a74c646b75989000f8686863131af9d993e02841c125de5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 18:43:41.581871 kubelet[2451]: E0123 18:43:41.581279 2451 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: 
connection refused" interval="1.6s" Jan 23 18:43:41.623791 containerd[1599]: time="2026-01-23T18:43:41.623173083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0be59c2aaa32934f0abc553f29a4e5bf4e31da06e4199de7791c4a9d4ec4bfa\"" Jan 23 18:43:41.626712 containerd[1599]: time="2026-01-23T18:43:41.626106889Z" level=info msg="CreateContainer within sandbox \"5f4a23ae83f341db273b7effa3d7cabf03f65abe42fe67129bf445cc6a43d4d4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"422f0b745d75122a8ae1d1f10ec9e4ab0d0134c88d646362475894846a497dd3\"" Jan 23 18:43:41.634040 kubelet[2451]: E0123 18:43:41.633874 2451 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:41.634130 containerd[1599]: time="2026-01-23T18:43:41.633882425Z" level=info msg="StartContainer for \"422f0b745d75122a8ae1d1f10ec9e4ab0d0134c88d646362475894846a497dd3\"" Jan 23 18:43:41.635083 containerd[1599]: time="2026-01-23T18:43:41.634942541Z" level=info msg="Container 910a0926d63c816429dfe16bf5fefc7abff00457f136df3846d4295838a63ac1: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:43:41.638767 containerd[1599]: time="2026-01-23T18:43:41.637913779Z" level=info msg="connecting to shim 422f0b745d75122a8ae1d1f10ec9e4ab0d0134c88d646362475894846a497dd3" address="unix:///run/containerd/s/837154dd883d510072e33dab0cd406c7b3ff05142584ab38478e3629fa7c5706" protocol=ttrpc version=3 Jan 23 18:43:41.654113 containerd[1599]: time="2026-01-23T18:43:41.653932920Z" level=info msg="CreateContainer within sandbox \"d0be59c2aaa32934f0abc553f29a4e5bf4e31da06e4199de7791c4a9d4ec4bfa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 18:43:41.673130 containerd[1599]: time="2026-01-23T18:43:41.672956852Z" level=info msg="CreateContainer within sandbox \"95823c672a95e7620a74c646b75989000f8686863131af9d993e02841c125de5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"910a0926d63c816429dfe16bf5fefc7abff00457f136df3846d4295838a63ac1\"" Jan 23 18:43:41.675115 containerd[1599]: time="2026-01-23T18:43:41.674844643Z" level=info msg="StartContainer for \"910a0926d63c816429dfe16bf5fefc7abff00457f136df3846d4295838a63ac1\"" Jan 23 18:43:41.677907 containerd[1599]: time="2026-01-23T18:43:41.676929010Z" level=info msg="connecting to shim 910a0926d63c816429dfe16bf5fefc7abff00457f136df3846d4295838a63ac1" address="unix:///run/containerd/s/e0659a12de9ac8e4d0a76159f97f625795a24534da4e74ee122baeb04a960f55" protocol=ttrpc version=3 Jan 23 18:43:41.707275 containerd[1599]: time="2026-01-23T18:43:41.706070377Z" level=info msg="Container 4d9b618ad80f87dc45e2e08de8aa970372aa27886595cd2f7489df7f291c0fdc: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:43:41.725844 systemd[1]: Started cri-containerd-422f0b745d75122a8ae1d1f10ec9e4ab0d0134c88d646362475894846a497dd3.scope - libcontainer container 422f0b745d75122a8ae1d1f10ec9e4ab0d0134c88d646362475894846a497dd3. 
Jan 23 18:43:41.769228 containerd[1599]: time="2026-01-23T18:43:41.769179181Z" level=info msg="CreateContainer within sandbox \"d0be59c2aaa32934f0abc553f29a4e5bf4e31da06e4199de7791c4a9d4ec4bfa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4d9b618ad80f87dc45e2e08de8aa970372aa27886595cd2f7489df7f291c0fdc\"" Jan 23 18:43:41.776906 systemd[1]: Started cri-containerd-910a0926d63c816429dfe16bf5fefc7abff00457f136df3846d4295838a63ac1.scope - libcontainer container 910a0926d63c816429dfe16bf5fefc7abff00457f136df3846d4295838a63ac1. Jan 23 18:43:41.783734 containerd[1599]: time="2026-01-23T18:43:41.782762928Z" level=info msg="StartContainer for \"4d9b618ad80f87dc45e2e08de8aa970372aa27886595cd2f7489df7f291c0fdc\"" Jan 23 18:43:41.790144 containerd[1599]: time="2026-01-23T18:43:41.789767600Z" level=info msg="connecting to shim 4d9b618ad80f87dc45e2e08de8aa970372aa27886595cd2f7489df7f291c0fdc" address="unix:///run/containerd/s/d3616da9ae0ab616b003d04927e144bf4e41eb8115b75e8272986477e7b8a923" protocol=ttrpc version=3 Jan 23 18:43:41.909860 systemd[1]: Started cri-containerd-4d9b618ad80f87dc45e2e08de8aa970372aa27886595cd2f7489df7f291c0fdc.scope - libcontainer container 4d9b618ad80f87dc45e2e08de8aa970372aa27886595cd2f7489df7f291c0fdc. Jan 23 18:43:41.971101 containerd[1599]: time="2026-01-23T18:43:41.969772468Z" level=info msg="StartContainer for \"422f0b745d75122a8ae1d1f10ec9e4ab0d0134c88d646362475894846a497dd3\" returns successfully" Jan 23 18:43:41.979276 kubelet[2451]: E0123 18:43:41.978644 2451 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 18:43:42.034222 containerd[1599]: time="2026-01-23T18:43:42.033901023Z" level=info msg="StartContainer for \"910a0926d63c816429dfe16bf5fefc7abff00457f136df3846d4295838a63ac1\" returns successfully" Jan 23 18:43:42.040839 kubelet[2451]: I0123 18:43:42.040813 2451 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:43:42.043831 kubelet[2451]: E0123 18:43:42.043796 2451 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Jan 23 18:43:42.160029 containerd[1599]: time="2026-01-23T18:43:42.158887683Z" level=info msg="StartContainer for \"4d9b618ad80f87dc45e2e08de8aa970372aa27886595cd2f7489df7f291c0fdc\" returns successfully" Jan 23 18:43:42.497287 kubelet[2451]: E0123 18:43:42.497006 2451 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:42.497287 kubelet[2451]: E0123 18:43:42.497241 2451 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:42.512704 kubelet[2451]: E0123 18:43:42.508838 2451 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:42.512817 kubelet[2451]: E0123 18:43:42.512777 2451 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:42.528856 kubelet[2451]: E0123 18:43:42.528191 2451 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:42.528979 kubelet[2451]: E0123 18:43:42.528896 2451 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:43.573922 kubelet[2451]: E0123 18:43:43.572937 2451 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:43.585932 kubelet[2451]: E0123 18:43:43.585219 2451 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:43.586037 kubelet[2451]: E0123 18:43:43.586026 2451 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:43.586822 kubelet[2451]: E0123 18:43:43.586293 2451 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:43.667158 kubelet[2451]: I0123 18:43:43.666204 2451 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:43:45.239299 kubelet[2451]: E0123 18:43:45.238067 2451 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:45.259678 kubelet[2451]: E0123 18:43:45.245967 2451 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:50.152142 kubelet[2451]: E0123 18:43:50.148032 2451 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:50.152142 kubelet[2451]: E0123 18:43:50.150060 2451 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:50.194278 kubelet[2451]: E0123 18:43:50.194053 2451 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:50.194888 kubelet[2451]: E0123 18:43:50.194791 2451 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:50.510006 kubelet[2451]: E0123 18:43:50.507927 2451 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 18:43:52.762894 kubelet[2451]: E0123 18:43:52.761878 2451 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 18:43:52.767231 kubelet[2451]: E0123 18:43:52.766789 2451 
event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.80:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188d7067e32e1a4c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 18:43:40.12186478 +0000 UTC m=+1.258114422,LastTimestamp:2026-01-23 18:43:40.12186478 +0000 UTC m=+1.258114422,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 18:43:53.197015 kubelet[2451]: E0123 18:43:53.193179 2451 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 23 18:43:53.299252 kubelet[2451]: E0123 18:43:53.298019 2451 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 18:43:53.681089 kubelet[2451]: E0123 18:43:53.680290 2451 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 23 18:43:53.828999 kubelet[2451]: E0123 18:43:53.827277 2451 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 18:43:53.989225 kubelet[2451]: E0123 18:43:53.837739 2451 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 18:43:56.330222 kubelet[2451]: E0123 18:43:56.329111 2451 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 18:43:56.897121 kubelet[2451]: I0123 18:43:56.896970 2451 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:43:58.363189 kubelet[2451]: E0123 18:43:58.361887 2451 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 23 18:43:58.455888 kubelet[2451]: I0123 18:43:58.455034 2451 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 18:43:58.455888 kubelet[2451]: E0123 18:43:58.455083 2451 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" 
Jan 23 18:43:58.539125 kubelet[2451]: E0123 18:43:58.538841 2451 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:43:58.644062 kubelet[2451]: E0123 18:43:58.641194 2451 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:43:58.764126 kubelet[2451]: E0123 18:43:58.762947 2451 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:43:58.875746 kubelet[2451]: E0123 18:43:58.874246 2451 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:43:58.970052 kubelet[2451]: E0123 18:43:58.945109 2451 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:58.972718 kubelet[2451]: E0123 18:43:58.972281 2451 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:58.976174 kubelet[2451]: E0123 18:43:58.975165 2451 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:43:59.078155 kubelet[2451]: E0123 18:43:59.077020 2451 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:43:59.178278 kubelet[2451]: E0123 18:43:59.178082 2451 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:43:59.280901 kubelet[2451]: E0123 18:43:59.278925 2451 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:43:59.380850 kubelet[2451]: E0123 18:43:59.379923 2451 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:43:59.480298 kubelet[2451]: E0123 18:43:59.480244 2451 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:43:59.582797 kubelet[2451]: E0123 18:43:59.581937 2451 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:43:59.683003 kubelet[2451]: E0123 18:43:59.682953 2451 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:43:59.794144 kubelet[2451]: E0123 18:43:59.791174 2451 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:43:59.896261 kubelet[2451]: E0123 18:43:59.896013 2451 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:44:00.006118 kubelet[2451]: E0123 18:43:59.999261 2451 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:44:00.080142 kubelet[2451]: I0123 18:44:00.078911 2451 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 18:44:00.132713 kubelet[2451]: I0123 18:44:00.132070 2451 apiserver.go:52] "Watching apiserver" Jan 23 18:44:00.187188 kubelet[2451]: I0123 18:44:00.181176 2451 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 18:44:00.192730 kubelet[2451]: I0123 18:44:00.191809 2451 kubelet.go:3309] 
"Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 18:44:01.297877 kubelet[2451]: E0123 18:44:01.297194 2451 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:01.305995 kubelet[2451]: I0123 18:44:01.305047 2451 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 18:44:01.309821 kubelet[2451]: E0123 18:44:01.309166 2451 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:01.324234 kubelet[2451]: E0123 18:44:01.322979 2451 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 23 18:44:01.324234 kubelet[2451]: I0123 18:44:01.323008 2451 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 18:44:01.335811 kubelet[2451]: E0123 18:44:01.334750 2451 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:02.037742 kubelet[2451]: E0123 18:44:02.037165 2451 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:04.103108 systemd[1]: Reload requested from client PID 2737 ('systemctl') (unit session-6.scope)... Jan 23 18:44:04.103263 systemd[1]: Reloading... Jan 23 18:44:04.438112 zram_generator::config[2786]: No configuration found. Jan 23 18:44:05.220950 systemd[1]: Reloading finished in 1115 ms. Jan 23 18:44:05.324963 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:44:05.475257 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 18:44:05.477092 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:44:05.477272 systemd[1]: kubelet.service: Consumed 6.842s CPU time, 132M memory peak. Jan 23 18:44:05.483875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:44:06.168196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:44:06.195979 (kubelet)[2828]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:44:06.598936 kubelet[2828]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:44:06.598936 kubelet[2828]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:44:06.598936 kubelet[2828]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 18:44:06.598936 kubelet[2828]: I0123 18:44:06.596691 2828 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:44:06.682296 kubelet[2828]: I0123 18:44:06.679214 2828 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 18:44:06.682296 kubelet[2828]: I0123 18:44:06.679854 2828 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:44:06.686264 kubelet[2828]: I0123 18:44:06.685186 2828 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 18:44:06.705783 kubelet[2828]: I0123 18:44:06.705204 2828 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 18:44:06.729948 kubelet[2828]: I0123 18:44:06.729129 2828 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:44:06.767134 kubelet[2828]: I0123 18:44:06.766151 2828 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:44:06.786949 kubelet[2828]: I0123 18:44:06.783955 2828 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 18:44:06.786949 kubelet[2828]: I0123 18:44:06.785100 2828 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:44:06.786949 kubelet[2828]: I0123 18:44:06.785135 2828 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:44:06.786949 kubelet[2828]: I0123 18:44:06.785925 2828 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 18:44:06.788201 kubelet[2828]: I0123 18:44:06.785935 2828 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 18:44:06.788201 kubelet[2828]: I0123 18:44:06.785993 2828 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:44:06.788201 kubelet[2828]: I0123 
18:44:06.786177 2828 kubelet.go:480] "Attempting to sync node with API server" Jan 23 18:44:06.788201 kubelet[2828]: I0123 18:44:06.786194 2828 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:44:06.788201 kubelet[2828]: I0123 18:44:06.786221 2828 kubelet.go:386] "Adding apiserver pod source" Jan 23 18:44:06.788201 kubelet[2828]: I0123 18:44:06.786241 2828 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:44:06.798258 kubelet[2828]: I0123 18:44:06.798163 2828 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 18:44:06.799869 kubelet[2828]: I0123 18:44:06.799183 2828 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 18:44:06.857853 kubelet[2828]: I0123 18:44:06.856921 2828 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 18:44:06.857853 kubelet[2828]: I0123 18:44:06.857126 2828 server.go:1289] "Started kubelet" Jan 23 18:44:06.859839 kubelet[2828]: I0123 18:44:06.857294 2828 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:44:06.873830 kubelet[2828]: I0123 18:44:06.873742 2828 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:44:06.873830 kubelet[2828]: I0123 18:44:06.862707 2828 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:44:06.878299 kubelet[2828]: I0123 18:44:06.878280 2828 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:44:06.884582 kubelet[2828]: I0123 18:44:06.883763 2828 server.go:317] "Adding debug handlers to kubelet server" Jan 23 18:44:06.887978 kubelet[2828]: I0123 18:44:06.887083 2828 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:44:06.900798 kubelet[2828]: I0123 18:44:06.899989 2828 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 18:44:06.902241 kubelet[2828]: I0123 18:44:06.902222 2828 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 18:44:06.904996 kubelet[2828]: I0123 18:44:06.904978 2828 reconciler.go:26] "Reconciler: start to sync state" Jan 23 18:44:06.908817 kubelet[2828]: I0123 18:44:06.907308 2828 factory.go:223] Registration of the systemd container factory successfully Jan 23 18:44:06.909076 kubelet[2828]: I0123 18:44:06.909050 2828 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:44:06.911271 kubelet[2828]: E0123 18:44:06.911248 2828 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 18:44:06.918987 kubelet[2828]: I0123 18:44:06.918194 2828 factory.go:223] Registration of the containerd container factory successfully Jan 23 18:44:07.177870 kubelet[2828]: I0123 18:44:07.162202 2828 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 18:44:07.201251 kubelet[2828]: I0123 18:44:07.198072 2828 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 23 18:44:07.225940 kubelet[2828]: I0123 18:44:07.221949 2828 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 18:44:07.241215 kubelet[2828]: I0123 18:44:07.226928 2828 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 18:44:07.241215 kubelet[2828]: I0123 18:44:07.226950 2828 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 18:44:07.241215 kubelet[2828]: E0123 18:44:07.227069 2828 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:44:07.337213 kubelet[2828]: E0123 18:44:07.328799 2828 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 18:44:07.530797 kubelet[2828]: E0123 18:44:07.529093 2828 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 18:44:07.612091 kubelet[2828]: I0123 18:44:07.611102 2828 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:44:07.612091 kubelet[2828]: I0123 18:44:07.611193 2828 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:44:07.612091 kubelet[2828]: I0123 18:44:07.611792 2828 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:44:07.612091 kubelet[2828]: I0123 18:44:07.612209 2828 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 18:44:07.625152 kubelet[2828]: I0123 18:44:07.612224 2828 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 18:44:07.625152 kubelet[2828]: I0123 18:44:07.612245 2828 policy_none.go:49] "None policy: Start" Jan 23 18:44:07.625152 kubelet[2828]: I0123 18:44:07.612258 2828 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 18:44:07.625152 kubelet[2828]: I0123 18:44:07.612276 2828 state_mem.go:35] "Initializing new in-memory state store" Jan 23 18:44:07.625152 kubelet[2828]: I0123 18:44:07.612856 2828 state_mem.go:75] "Updated machine memory state" Jan 23 18:44:07.653940 kubelet[2828]: E0123 18:44:07.653281 2828 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 18:44:07.672946 kubelet[2828]: I0123 18:44:07.670054 2828 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:44:07.675816 kubelet[2828]: I0123 18:44:07.671004 2828 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:44:07.675816 kubelet[2828]: I0123 18:44:07.674937 2828 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:44:07.699247 kubelet[2828]: E0123 18:44:07.698611 2828 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 18:44:07.796286 kubelet[2828]: I0123 18:44:07.795098 2828 apiserver.go:52] "Watching apiserver" Jan 23 18:44:07.871129 kubelet[2828]: I0123 18:44:07.870991 2828 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:44:07.961259 kubelet[2828]: I0123 18:44:07.960047 2828 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 23 18:44:07.963245 kubelet[2828]: I0123 18:44:07.963218 2828 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 18:44:08.009700 kubelet[2828]: I0123 18:44:08.008171 2828 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 18:44:08.036708 kubelet[2828]: I0123 18:44:08.035037 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 23 18:44:08.036708 kubelet[2828]: I0123 18:44:08.035088 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d23cdbff3ded24092d20a87eecf02e8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d23cdbff3ded24092d20a87eecf02e8\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:44:08.036708 kubelet[2828]: I0123 18:44:08.035116 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d23cdbff3ded24092d20a87eecf02e8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d23cdbff3ded24092d20a87eecf02e8\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:44:08.036708 kubelet[2828]: I0123 18:44:08.035139 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d23cdbff3ded24092d20a87eecf02e8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9d23cdbff3ded24092d20a87eecf02e8\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:44:08.036708 kubelet[2828]: I0123 18:44:08.035163 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:44:08.036971 kubelet[2828]: I0123 18:44:08.035183 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:44:08.036971 kubelet[2828]: I0123 18:44:08.035207 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:44:08.036971 kubelet[2828]: I0123 18:44:08.035230 
2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:44:08.036971 kubelet[2828]: I0123 18:44:08.035831 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:44:08.238844 kubelet[2828]: E0123 18:44:08.236018 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:08.242212 kubelet[2828]: E0123 18:44:08.241006 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:08.242939 kubelet[2828]: E0123 18:44:08.242863 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:08.444998 kubelet[2828]: I0123 18:44:08.439289 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.439123739 podStartE2EDuration="7.439123739s" podCreationTimestamp="2026-01-23 18:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:44:08.434884363 +0000 UTC m=+2.173858688" watchObservedRunningTime="2026-01-23 18:44:08.439123739 +0000 UTC m=+2.178098045" Jan 23 18:44:08.517012 kubelet[2828]: E0123 18:44:08.515880 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:08.523561 kubelet[2828]: E0123 18:44:08.518209 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:08.820057 kubelet[2828]: I0123 18:44:08.818038 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.817965087 podStartE2EDuration="7.817965087s" podCreationTimestamp="2026-01-23 18:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:44:08.712842176 +0000 UTC m=+2.451816522" watchObservedRunningTime="2026-01-23 18:44:08.817965087 +0000 UTC m=+2.556939402" Jan 23 18:44:08.820057 kubelet[2828]: I0123 18:44:08.818150 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=7.8181426179999995 podStartE2EDuration="7.818142618s" podCreationTimestamp="2026-01-23 18:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:44:08.818139337 +0000 UTC 
m=+2.557113642" watchObservedRunningTime="2026-01-23 18:44:08.818142618 +0000 UTC m=+2.557116963" Jan 23 18:44:09.232270 kubelet[2828]: E0123 18:44:09.230895 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:09.538028 kubelet[2828]: E0123 18:44:09.537161 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:09.538028 kubelet[2828]: E0123 18:44:09.537797 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:10.543824 kubelet[2828]: E0123 18:44:10.543161 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:11.136134 sudo[1778]: pam_unix(sudo:session): session closed for user root Jan 23 18:44:11.158146 sshd[1777]: Connection closed by 10.0.0.1 port 34140 Jan 23 18:44:11.166975 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Jan 23 18:44:11.185125 systemd[1]: sshd@4-10.0.0.80:22-10.0.0.1:34140.service: Deactivated successfully. Jan 23 18:44:11.193242 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 18:44:11.194124 systemd[1]: session-6.scope: Consumed 19.077s CPU time, 219.2M memory peak. Jan 23 18:44:11.204052 systemd-logind[1584]: Session 6 logged out. Waiting for processes to exit. Jan 23 18:44:11.210196 systemd-logind[1584]: Removed session 6. Jan 23 18:44:11.746796 kubelet[2828]: I0123 18:44:11.746205 2828 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 18:44:11.749183 containerd[1599]: time="2026-01-23T18:44:11.749098578Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 23 18:44:11.755753 kubelet[2828]: I0123 18:44:11.755076 2828 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 18:44:12.657264 kubelet[2828]: I0123 18:44:12.656284 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8706530b-93af-4669-ac0b-9fdb3eb03874-lib-modules\") pod \"kube-proxy-pjrpt\" (UID: \"8706530b-93af-4669-ac0b-9fdb3eb03874\") " pod="kube-system/kube-proxy-pjrpt" Jan 23 18:44:12.657264 kubelet[2828]: I0123 18:44:12.656861 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8706530b-93af-4669-ac0b-9fdb3eb03874-kube-proxy\") pod \"kube-proxy-pjrpt\" (UID: \"8706530b-93af-4669-ac0b-9fdb3eb03874\") " pod="kube-system/kube-proxy-pjrpt" Jan 23 18:44:12.657264 kubelet[2828]: I0123 18:44:12.656898 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxz25\" (UniqueName: \"kubernetes.io/projected/8706530b-93af-4669-ac0b-9fdb3eb03874-kube-api-access-qxz25\") pod \"kube-proxy-pjrpt\" (UID: \"8706530b-93af-4669-ac0b-9fdb3eb03874\") " pod="kube-system/kube-proxy-pjrpt" Jan 23 18:44:12.657264 kubelet[2828]: I0123 18:44:12.656934 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8706530b-93af-4669-ac0b-9fdb3eb03874-xtables-lock\") pod \"kube-proxy-pjrpt\" (UID: \"8706530b-93af-4669-ac0b-9fdb3eb03874\") " pod="kube-system/kube-proxy-pjrpt" Jan 23 18:44:12.743893 systemd[1]: Created slice kubepods-besteffort-pod8706530b_93af_4669_ac0b_9fdb3eb03874.slice - libcontainer container kubepods-besteffort-pod8706530b_93af_4669_ac0b_9fdb3eb03874.slice. Jan 23 18:44:12.803153 systemd[1]: Created slice kubepods-burstable-podb868d76d_cae1_41b2_b243_6f6b0e4de68e.slice - libcontainer container kubepods-burstable-podb868d76d_cae1_41b2_b243_6f6b0e4de68e.slice. 
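Note: the slice names systemd creates above encode both the pod's QoS class and its UID. BestEffort and Burstable pods land under kubepods-besteffort.slice and kubepods-burstable.slice, and the pod UID is embedded with its dashes replaced by underscores, which is why UID 8706530b-93af-4669-ac0b-9fdb3eb03874 (kube-proxy-pjrpt) shows up as kubepods-besteffort-pod8706530b_93af_4669_ac0b_9fdb3eb03874.slice. A sketch of that naming, not the kubelet's actual code:

```go
// slicename.go - a sketch of the cgroup slice naming visible in the
// "Created slice kubepods-..." lines above.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	parent := "kubepods"
	if qos != "guaranteed" { // burstable and besteffort get their own sub-slice
		parent = "kubepods-" + qos
	}
	// Dashes in the UID are replaced with underscores in the unit name.
	return fmt.Sprintf("%s-pod%s.slice", parent, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("besteffort", "8706530b-93af-4669-ac0b-9fdb3eb03874"))
	fmt.Println(podSlice("burstable", "b868d76d-cae1-41b2-b243-6f6b0e4de68e"))
}
```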
Jan 23 18:44:12.874265 kubelet[2828]: I0123 18:44:12.874082 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/b868d76d-cae1-41b2-b243-6f6b0e4de68e-flannel-cfg\") pod \"kube-flannel-ds-tq6tm\" (UID: \"b868d76d-cae1-41b2-b243-6f6b0e4de68e\") " pod="kube-flannel/kube-flannel-ds-tq6tm" Jan 23 18:44:12.875971 kubelet[2828]: I0123 18:44:12.874264 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9frrm\" (UniqueName: \"kubernetes.io/projected/b868d76d-cae1-41b2-b243-6f6b0e4de68e-kube-api-access-9frrm\") pod \"kube-flannel-ds-tq6tm\" (UID: \"b868d76d-cae1-41b2-b243-6f6b0e4de68e\") " pod="kube-flannel/kube-flannel-ds-tq6tm" Jan 23 18:44:12.877957 kubelet[2828]: I0123 18:44:12.877145 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/b868d76d-cae1-41b2-b243-6f6b0e4de68e-cni\") pod \"kube-flannel-ds-tq6tm\" (UID: \"b868d76d-cae1-41b2-b243-6f6b0e4de68e\") " pod="kube-flannel/kube-flannel-ds-tq6tm" Jan 23 18:44:12.877957 kubelet[2828]: I0123 18:44:12.877306 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b868d76d-cae1-41b2-b243-6f6b0e4de68e-xtables-lock\") pod \"kube-flannel-ds-tq6tm\" (UID: \"b868d76d-cae1-41b2-b243-6f6b0e4de68e\") " pod="kube-flannel/kube-flannel-ds-tq6tm" Jan 23 18:44:12.877957 kubelet[2828]: I0123 18:44:12.877817 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b868d76d-cae1-41b2-b243-6f6b0e4de68e-run\") pod \"kube-flannel-ds-tq6tm\" (UID: \"b868d76d-cae1-41b2-b243-6f6b0e4de68e\") " pod="kube-flannel/kube-flannel-ds-tq6tm" Jan 23 18:44:12.877957 kubelet[2828]: I0123 18:44:12.877848 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/b868d76d-cae1-41b2-b243-6f6b0e4de68e-cni-plugin\") pod \"kube-flannel-ds-tq6tm\" (UID: \"b868d76d-cae1-41b2-b243-6f6b0e4de68e\") " pod="kube-flannel/kube-flannel-ds-tq6tm" Jan 23 18:44:13.119148 kubelet[2828]: E0123 18:44:13.119004 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:13.127863 containerd[1599]: time="2026-01-23T18:44:13.124133911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pjrpt,Uid:8706530b-93af-4669-ac0b-9fdb3eb03874,Namespace:kube-system,Attempt:0,}" Jan 23 18:44:13.137791 kubelet[2828]: E0123 18:44:13.137044 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:13.142250 containerd[1599]: time="2026-01-23T18:44:13.142214457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-tq6tm,Uid:b868d76d-cae1-41b2-b243-6f6b0e4de68e,Namespace:kube-flannel,Attempt:0,}" Jan 23 18:44:13.446921 containerd[1599]: time="2026-01-23T18:44:13.445883178Z" level=info msg="connecting to shim e5967759d2591ba66feb4b234aa5d8af3c26da4697b6060f9c11bc3a800f7e53" address="unix:///run/containerd/s/a6ffa2c9bd1387f15d000e59b4602d4beffee6baec06eda209fe85464ffeea05" namespace=k8s.io 
protocol=ttrpc version=3 Jan 23 18:44:13.467011 containerd[1599]: time="2026-01-23T18:44:13.466954282Z" level=info msg="connecting to shim 4215e9af734a937e88b81a8fda9b11a53e16ac0211fc6d4c7e4e98e501e510a5" address="unix:///run/containerd/s/46cb7f60263bec529656eb96ce71c3aa5eed8a8d6526bbee608003ec2e665aae" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:44:13.760058 systemd[1]: Started cri-containerd-4215e9af734a937e88b81a8fda9b11a53e16ac0211fc6d4c7e4e98e501e510a5.scope - libcontainer container 4215e9af734a937e88b81a8fda9b11a53e16ac0211fc6d4c7e4e98e501e510a5. Jan 23 18:44:13.772915 systemd[1]: Started cri-containerd-e5967759d2591ba66feb4b234aa5d8af3c26da4697b6060f9c11bc3a800f7e53.scope - libcontainer container e5967759d2591ba66feb4b234aa5d8af3c26da4697b6060f9c11bc3a800f7e53. Jan 23 18:44:13.980044 containerd[1599]: time="2026-01-23T18:44:13.977821342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pjrpt,Uid:8706530b-93af-4669-ac0b-9fdb3eb03874,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5967759d2591ba66feb4b234aa5d8af3c26da4697b6060f9c11bc3a800f7e53\"" Jan 23 18:44:13.980986 kubelet[2828]: E0123 18:44:13.980225 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:13.996826 containerd[1599]: time="2026-01-23T18:44:13.994156774Z" level=info msg="CreateContainer within sandbox \"e5967759d2591ba66feb4b234aa5d8af3c26da4697b6060f9c11bc3a800f7e53\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 18:44:14.081821 containerd[1599]: time="2026-01-23T18:44:14.079994752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-tq6tm,Uid:b868d76d-cae1-41b2-b243-6f6b0e4de68e,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"4215e9af734a937e88b81a8fda9b11a53e16ac0211fc6d4c7e4e98e501e510a5\"" Jan 23 18:44:14.088845 containerd[1599]: time="2026-01-23T18:44:14.087893013Z" level=info msg="Container 3b656f76c82cd98aea1eeb687d7b9e006c4091c1e33eea75970ec8e3e45a7592: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:44:14.093802 kubelet[2828]: E0123 18:44:14.092850 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:14.095820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount767235999.mount: Deactivated successfully. Jan 23 18:44:14.103053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3078151143.mount: Deactivated successfully. 
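The repeated kubelet dns.go:153 "Nameserver limits exceeded" messages come from kubelet's cap of three nameservers per pod resolv.conf: the node evidently lists more than three, so the extras are dropped and only "1.1.1.1 1.0.0.1 8.8.8.8" is applied. A minimal Go sketch of that truncation (not kubelet's actual code; the three-server constant and the /etc/resolv.conf path are the conventional values, shown here only to illustrate the behavior):

// nameserver_limit.go - sketch of the 3-nameserver cap behind the
// recurring "Nameserver limits exceeded" messages in this log.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // kubelet applies at most three nameservers per pod

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded, omitting %d server(s)\n", len(servers)-maxNameservers)
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}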
Jan 23 18:44:14.115779 containerd[1599]: time="2026-01-23T18:44:14.114833359Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Jan 23 18:44:14.144148 containerd[1599]: time="2026-01-23T18:44:14.142762328Z" level=info msg="CreateContainer within sandbox \"e5967759d2591ba66feb4b234aa5d8af3c26da4697b6060f9c11bc3a800f7e53\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3b656f76c82cd98aea1eeb687d7b9e006c4091c1e33eea75970ec8e3e45a7592\"" Jan 23 18:44:14.151093 containerd[1599]: time="2026-01-23T18:44:14.150180100Z" level=info msg="StartContainer for \"3b656f76c82cd98aea1eeb687d7b9e006c4091c1e33eea75970ec8e3e45a7592\"" Jan 23 18:44:14.157613 containerd[1599]: time="2026-01-23T18:44:14.157000056Z" level=info msg="connecting to shim 3b656f76c82cd98aea1eeb687d7b9e006c4091c1e33eea75970ec8e3e45a7592" address="unix:///run/containerd/s/a6ffa2c9bd1387f15d000e59b4602d4beffee6baec06eda209fe85464ffeea05" protocol=ttrpc version=3 Jan 23 18:44:14.287775 systemd[1]: Started cri-containerd-3b656f76c82cd98aea1eeb687d7b9e006c4091c1e33eea75970ec8e3e45a7592.scope - libcontainer container 3b656f76c82cd98aea1eeb687d7b9e006c4091c1e33eea75970ec8e3e45a7592. Jan 23 18:44:14.896892 containerd[1599]: time="2026-01-23T18:44:14.896837814Z" level=info msg="StartContainer for \"3b656f76c82cd98aea1eeb687d7b9e006c4091c1e33eea75970ec8e3e45a7592\" returns successfully" Jan 23 18:44:15.632304 kubelet[2828]: E0123 18:44:15.632251 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:15.686818 kubelet[2828]: I0123 18:44:15.686092 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pjrpt" podStartSLOduration=3.6860734600000002 podStartE2EDuration="3.68607346s" podCreationTimestamp="2026-01-23 18:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:44:15.678982994 +0000 UTC m=+9.417957299" watchObservedRunningTime="2026-01-23 18:44:15.68607346 +0000 UTC m=+9.425047765" Jan 23 18:44:16.637750 kubelet[2828]: E0123 18:44:16.637584 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:16.990203 kubelet[2828]: E0123 18:44:16.989607 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:17.079767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount227404274.mount: Deactivated successfully. 
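The PullImage / CreateContainer / StartContainer / "connecting to shim" sequence above is kubelet driving containerd in the k8s.io namespace. A rough sketch of the same lifecycle using containerd's Go client directly; the import paths are the v1 client's, the image reference and container ID are illustrative, and kubelet itself goes through the CRI API rather than this client:

// run_container.go - hedged sketch of pull -> create -> start against
// containerd, mirroring the sequence logged above.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubelet-managed containers in this log live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Corresponds to the PullImage messages (image ref is illustrative).
	image, err := client.Pull(ctx, "ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer / StartContainer map to NewContainer / NewTask / Start here.
	container, err := client.NewContainer(ctx, "example",
		containerd.WithNewSnapshot("example-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("container started")
}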
Jan 23 18:44:17.217948 containerd[1599]: time="2026-01-23T18:44:17.217279682Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:44:17.221917 containerd[1599]: time="2026-01-23T18:44:17.221621643Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4850109" Jan 23 18:44:17.224835 containerd[1599]: time="2026-01-23T18:44:17.224709547Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:44:17.230266 containerd[1599]: time="2026-01-23T18:44:17.229976084Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:44:17.231234 containerd[1599]: time="2026-01-23T18:44:17.231037390Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 3.116157514s" Jan 23 18:44:17.231234 containerd[1599]: time="2026-01-23T18:44:17.231142616Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Jan 23 18:44:17.245658 containerd[1599]: time="2026-01-23T18:44:17.244676378Z" level=info msg="CreateContainer within sandbox \"4215e9af734a937e88b81a8fda9b11a53e16ac0211fc6d4c7e4e98e501e510a5\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 23 18:44:17.278690 containerd[1599]: time="2026-01-23T18:44:17.278121092Z" level=info msg="Container 593db74796223b9f7881c2c419077cb47facde5db9dc3502731a08b21234720b: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:44:17.291177 containerd[1599]: time="2026-01-23T18:44:17.290753070Z" level=info msg="CreateContainer within sandbox \"4215e9af734a937e88b81a8fda9b11a53e16ac0211fc6d4c7e4e98e501e510a5\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"593db74796223b9f7881c2c419077cb47facde5db9dc3502731a08b21234720b\"" Jan 23 18:44:17.292568 containerd[1599]: time="2026-01-23T18:44:17.292536012Z" level=info msg="StartContainer for \"593db74796223b9f7881c2c419077cb47facde5db9dc3502731a08b21234720b\"" Jan 23 18:44:17.294789 containerd[1599]: time="2026-01-23T18:44:17.294756390Z" level=info msg="connecting to shim 593db74796223b9f7881c2c419077cb47facde5db9dc3502731a08b21234720b" address="unix:///run/containerd/s/46cb7f60263bec529656eb96ce71c3aa5eed8a8d6526bbee608003ec2e665aae" protocol=ttrpc version=3 Jan 23 18:44:17.351725 systemd[1]: Started cri-containerd-593db74796223b9f7881c2c419077cb47facde5db9dc3502731a08b21234720b.scope - libcontainer container 593db74796223b9f7881c2c419077cb47facde5db9dc3502731a08b21234720b. Jan 23 18:44:17.525285 systemd[1]: cri-containerd-593db74796223b9f7881c2c419077cb47facde5db9dc3502731a08b21234720b.scope: Deactivated successfully. 
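The install-cni-plugin container that just ran for about 60ms and exited is flannel's first init step: it only copies the flannel CNI binary from the image into the host's /opt/cni/bin (mounted via the cni-plugin host-path volume attached earlier). A minimal Go equivalent; the source and destination paths are the conventional ones and are not shown in this log:

// copy_plugin.go - sketch of the install-cni-plugin init step.
package main

import (
	"io"
	"log"
	"os"
)

func main() {
	src, err := os.Open("/flannel") // binary shipped inside the flannel-cni-plugin image (assumed path)
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close()

	dst, err := os.OpenFile("/opt/cni/bin/flannel", os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		log.Fatal(err)
	}
	defer dst.Close()

	if _, err := io.Copy(dst, src); err != nil {
		log.Fatal(err)
	}
	log.Println("flannel CNI plugin installed")
}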
Jan 23 18:44:17.526195 systemd[1]: cri-containerd-593db74796223b9f7881c2c419077cb47facde5db9dc3502731a08b21234720b.scope: Consumed 60ms CPU time, 4.4M memory peak, 1M read from disk. Jan 23 18:44:17.537478 containerd[1599]: time="2026-01-23T18:44:17.537225708Z" level=info msg="received container exit event container_id:\"593db74796223b9f7881c2c419077cb47facde5db9dc3502731a08b21234720b\" id:\"593db74796223b9f7881c2c419077cb47facde5db9dc3502731a08b21234720b\" pid:3181 exited_at:{seconds:1769193857 nanos:530266150}" Jan 23 18:44:17.552587 containerd[1599]: time="2026-01-23T18:44:17.545252475Z" level=info msg="StartContainer for \"593db74796223b9f7881c2c419077cb47facde5db9dc3502731a08b21234720b\" returns successfully" Jan 23 18:44:17.657785 kubelet[2828]: E0123 18:44:17.657681 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:17.659784 kubelet[2828]: E0123 18:44:17.659576 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:17.660741 containerd[1599]: time="2026-01-23T18:44:17.660157918Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Jan 23 18:44:17.707232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-593db74796223b9f7881c2c419077cb47facde5db9dc3502731a08b21234720b-rootfs.mount: Deactivated successfully. Jan 23 18:44:19.277153 kubelet[2828]: E0123 18:44:19.276906 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:23.670710 containerd[1599]: time="2026-01-23T18:44:23.670038119Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:44:23.673870 containerd[1599]: time="2026-01-23T18:44:23.673741074Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=26948848" Jan 23 18:44:23.678519 containerd[1599]: time="2026-01-23T18:44:23.677915236Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:44:23.688166 containerd[1599]: time="2026-01-23T18:44:23.687981324Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:44:23.690818 containerd[1599]: time="2026-01-23T18:44:23.690706724Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 6.030438009s" Jan 23 18:44:23.690894 containerd[1599]: time="2026-01-23T18:44:23.690829202Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Jan 23 18:44:23.705710 containerd[1599]: time="2026-01-23T18:44:23.705030473Z" level=info msg="CreateContainer within sandbox 
\"4215e9af734a937e88b81a8fda9b11a53e16ac0211fc6d4c7e4e98e501e510a5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 18:44:23.732672 containerd[1599]: time="2026-01-23T18:44:23.732622475Z" level=info msg="Container 573535d3e7bd07cab30ba58652ed6444d5eda356e76031404ac34b7b8ccbcdd2: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:44:23.754069 containerd[1599]: time="2026-01-23T18:44:23.753940080Z" level=info msg="CreateContainer within sandbox \"4215e9af734a937e88b81a8fda9b11a53e16ac0211fc6d4c7e4e98e501e510a5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"573535d3e7bd07cab30ba58652ed6444d5eda356e76031404ac34b7b8ccbcdd2\"" Jan 23 18:44:23.755916 containerd[1599]: time="2026-01-23T18:44:23.755876497Z" level=info msg="StartContainer for \"573535d3e7bd07cab30ba58652ed6444d5eda356e76031404ac34b7b8ccbcdd2\"" Jan 23 18:44:23.761611 containerd[1599]: time="2026-01-23T18:44:23.761578665Z" level=info msg="connecting to shim 573535d3e7bd07cab30ba58652ed6444d5eda356e76031404ac34b7b8ccbcdd2" address="unix:///run/containerd/s/46cb7f60263bec529656eb96ce71c3aa5eed8a8d6526bbee608003ec2e665aae" protocol=ttrpc version=3 Jan 23 18:44:23.830772 systemd[1]: Started cri-containerd-573535d3e7bd07cab30ba58652ed6444d5eda356e76031404ac34b7b8ccbcdd2.scope - libcontainer container 573535d3e7bd07cab30ba58652ed6444d5eda356e76031404ac34b7b8ccbcdd2. Jan 23 18:44:23.950662 systemd[1]: cri-containerd-573535d3e7bd07cab30ba58652ed6444d5eda356e76031404ac34b7b8ccbcdd2.scope: Deactivated successfully. Jan 23 18:44:23.958975 containerd[1599]: time="2026-01-23T18:44:23.957243644Z" level=info msg="received container exit event container_id:\"573535d3e7bd07cab30ba58652ed6444d5eda356e76031404ac34b7b8ccbcdd2\" id:\"573535d3e7bd07cab30ba58652ed6444d5eda356e76031404ac34b7b8ccbcdd2\" pid:3257 exited_at:{seconds:1769193863 nanos:951978820}" Jan 23 18:44:23.964740 containerd[1599]: time="2026-01-23T18:44:23.964583355Z" level=info msg="StartContainer for \"573535d3e7bd07cab30ba58652ed6444d5eda356e76031404ac34b7b8ccbcdd2\" returns successfully" Jan 23 18:44:24.008776 kubelet[2828]: I0123 18:44:24.008243 2828 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 18:44:24.054725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-573535d3e7bd07cab30ba58652ed6444d5eda356e76031404ac34b7b8ccbcdd2-rootfs.mount: Deactivated successfully. Jan 23 18:44:24.196830 kubelet[2828]: I0123 18:44:24.196674 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae5012e5-a967-4430-82ac-dc7784837d1d-config-volume\") pod \"coredns-674b8bbfcf-p8dnp\" (UID: \"ae5012e5-a967-4430-82ac-dc7784837d1d\") " pod="kube-system/coredns-674b8bbfcf-p8dnp" Jan 23 18:44:24.196830 kubelet[2828]: I0123 18:44:24.196733 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cl7x\" (UniqueName: \"kubernetes.io/projected/ae5012e5-a967-4430-82ac-dc7784837d1d-kube-api-access-6cl7x\") pod \"coredns-674b8bbfcf-p8dnp\" (UID: \"ae5012e5-a967-4430-82ac-dc7784837d1d\") " pod="kube-system/coredns-674b8bbfcf-p8dnp" Jan 23 18:44:24.221048 systemd[1]: Created slice kubepods-burstable-podae5012e5_a967_4430_82ac_dc7784837d1d.slice - libcontainer container kubepods-burstable-podae5012e5_a967_4430_82ac_dc7784837d1d.slice. 
Jan 23 18:44:24.242006 systemd[1]: Created slice kubepods-burstable-pod8028d3c5_d584_42cd_9465_7a7c9d6e309a.slice - libcontainer container kubepods-burstable-pod8028d3c5_d584_42cd_9465_7a7c9d6e309a.slice. Jan 23 18:44:24.297579 kubelet[2828]: I0123 18:44:24.297130 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8028d3c5-d584-42cd-9465-7a7c9d6e309a-config-volume\") pod \"coredns-674b8bbfcf-4g2js\" (UID: \"8028d3c5-d584-42cd-9465-7a7c9d6e309a\") " pod="kube-system/coredns-674b8bbfcf-4g2js" Jan 23 18:44:24.297579 kubelet[2828]: I0123 18:44:24.297270 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9482v\" (UniqueName: \"kubernetes.io/projected/8028d3c5-d584-42cd-9465-7a7c9d6e309a-kube-api-access-9482v\") pod \"coredns-674b8bbfcf-4g2js\" (UID: \"8028d3c5-d584-42cd-9465-7a7c9d6e309a\") " pod="kube-system/coredns-674b8bbfcf-4g2js" Jan 23 18:44:24.529613 kubelet[2828]: E0123 18:44:24.528902 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:24.530273 containerd[1599]: time="2026-01-23T18:44:24.529880418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p8dnp,Uid:ae5012e5-a967-4430-82ac-dc7784837d1d,Namespace:kube-system,Attempt:0,}" Jan 23 18:44:24.559668 kubelet[2828]: E0123 18:44:24.559117 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:24.562852 containerd[1599]: time="2026-01-23T18:44:24.561163348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4g2js,Uid:8028d3c5-d584-42cd-9465-7a7c9d6e309a,Namespace:kube-system,Attempt:0,}" Jan 23 18:44:24.642715 containerd[1599]: time="2026-01-23T18:44:24.642516957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p8dnp,Uid:ae5012e5-a967-4430-82ac-dc7784837d1d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeb4ee4fe15eda20159619928b8cd592f35eed73a8ad3ef432ff4f35438d576c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 18:44:24.643071 kubelet[2828]: E0123 18:44:24.642924 2828 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeb4ee4fe15eda20159619928b8cd592f35eed73a8ad3ef432ff4f35438d576c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 18:44:24.643152 kubelet[2828]: E0123 18:44:24.643088 2828 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeb4ee4fe15eda20159619928b8cd592f35eed73a8ad3ef432ff4f35438d576c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-p8dnp" Jan 23 18:44:24.643152 kubelet[2828]: E0123 18:44:24.643115 2828 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"eeb4ee4fe15eda20159619928b8cd592f35eed73a8ad3ef432ff4f35438d576c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-p8dnp" Jan 23 18:44:24.643254 kubelet[2828]: E0123 18:44:24.643176 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p8dnp_kube-system(ae5012e5-a967-4430-82ac-dc7784837d1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p8dnp_kube-system(ae5012e5-a967-4430-82ac-dc7784837d1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eeb4ee4fe15eda20159619928b8cd592f35eed73a8ad3ef432ff4f35438d576c\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-p8dnp" podUID="ae5012e5-a967-4430-82ac-dc7784837d1d" Jan 23 18:44:24.664132 containerd[1599]: time="2026-01-23T18:44:24.663972113Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4g2js,Uid:8028d3c5-d584-42cd-9465-7a7c9d6e309a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9da77b196274b80c177b82caf709d4723dfc9a3b4e206b72418f94d80a0d1a0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 18:44:24.665038 kubelet[2828]: E0123 18:44:24.664810 2828 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9da77b196274b80c177b82caf709d4723dfc9a3b4e206b72418f94d80a0d1a0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 18:44:24.665038 kubelet[2828]: E0123 18:44:24.664931 2828 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9da77b196274b80c177b82caf709d4723dfc9a3b4e206b72418f94d80a0d1a0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-4g2js" Jan 23 18:44:24.665038 kubelet[2828]: E0123 18:44:24.664953 2828 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9da77b196274b80c177b82caf709d4723dfc9a3b4e206b72418f94d80a0d1a0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-4g2js" Jan 23 18:44:24.665195 kubelet[2828]: E0123 18:44:24.665080 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-4g2js_kube-system(8028d3c5-d584-42cd-9465-7a7c9d6e309a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-4g2js_kube-system(8028d3c5-d584-42cd-9465-7a7c9d6e309a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9da77b196274b80c177b82caf709d4723dfc9a3b4e206b72418f94d80a0d1a0\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-4g2js" podUID="8028d3c5-d584-42cd-9465-7a7c9d6e309a" Jan 23 18:44:24.712536 kubelet[2828]: 
E0123 18:44:24.711897 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:24.742988 containerd[1599]: time="2026-01-23T18:44:24.742769170Z" level=info msg="CreateContainer within sandbox \"4215e9af734a937e88b81a8fda9b11a53e16ac0211fc6d4c7e4e98e501e510a5\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 23 18:44:24.793731 containerd[1599]: time="2026-01-23T18:44:24.790592380Z" level=info msg="Container e6bc2b4cfcf05e8b8de2ea878b54f2e2ddf5f9d05d716c39857a5e21373531f8: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:44:24.813091 containerd[1599]: time="2026-01-23T18:44:24.812781378Z" level=info msg="CreateContainer within sandbox \"4215e9af734a937e88b81a8fda9b11a53e16ac0211fc6d4c7e4e98e501e510a5\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"e6bc2b4cfcf05e8b8de2ea878b54f2e2ddf5f9d05d716c39857a5e21373531f8\"" Jan 23 18:44:24.815687 containerd[1599]: time="2026-01-23T18:44:24.815626549Z" level=info msg="StartContainer for \"e6bc2b4cfcf05e8b8de2ea878b54f2e2ddf5f9d05d716c39857a5e21373531f8\"" Jan 23 18:44:24.817783 containerd[1599]: time="2026-01-23T18:44:24.817757442Z" level=info msg="connecting to shim e6bc2b4cfcf05e8b8de2ea878b54f2e2ddf5f9d05d716c39857a5e21373531f8" address="unix:///run/containerd/s/46cb7f60263bec529656eb96ce71c3aa5eed8a8d6526bbee608003ec2e665aae" protocol=ttrpc version=3 Jan 23 18:44:24.882151 systemd[1]: Started cri-containerd-e6bc2b4cfcf05e8b8de2ea878b54f2e2ddf5f9d05d716c39857a5e21373531f8.scope - libcontainer container e6bc2b4cfcf05e8b8de2ea878b54f2e2ddf5f9d05d716c39857a5e21373531f8. Jan 23 18:44:24.997704 containerd[1599]: time="2026-01-23T18:44:24.997189687Z" level=info msg="StartContainer for \"e6bc2b4cfcf05e8b8de2ea878b54f2e2ddf5f9d05d716c39857a5e21373531f8\" returns successfully" Jan 23 18:44:25.737676 kubelet[2828]: E0123 18:44:25.737242 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:25.817227 kubelet[2828]: I0123 18:44:25.816569 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-tq6tm" podStartSLOduration=4.23310576 podStartE2EDuration="13.816292681s" podCreationTimestamp="2026-01-23 18:44:12 +0000 UTC" firstStartedPulling="2026-01-23 18:44:14.108892982 +0000 UTC m=+7.847867287" lastFinishedPulling="2026-01-23 18:44:23.692079903 +0000 UTC m=+17.431054208" observedRunningTime="2026-01-23 18:44:25.816239851 +0000 UTC m=+19.555214176" watchObservedRunningTime="2026-01-23 18:44:25.816292681 +0000 UTC m=+19.555266986" Jan 23 18:44:26.368115 systemd-networkd[1525]: flannel.1: Link UP Jan 23 18:44:26.368203 systemd-networkd[1525]: flannel.1: Gained carrier Jan 23 18:44:26.743697 kubelet[2828]: E0123 18:44:26.742869 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:28.096292 systemd-networkd[1525]: flannel.1: Gained IPv6LL Jan 23 18:44:37.235136 kubelet[2828]: E0123 18:44:37.234948 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:37.238545 containerd[1599]: time="2026-01-23T18:44:37.237755344Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4g2js,Uid:8028d3c5-d584-42cd-9465-7a7c9d6e309a,Namespace:kube-system,Attempt:0,}" Jan 23 18:44:37.303180 systemd-networkd[1525]: cni0: Link UP Jan 23 18:44:37.303192 systemd-networkd[1525]: cni0: Gained carrier Jan 23 18:44:37.313838 systemd-networkd[1525]: cni0: Lost carrier Jan 23 18:44:37.331889 systemd-networkd[1525]: veth47320331: Link UP Jan 23 18:44:37.350437 kernel: cni0: port 1(veth47320331) entered blocking state Jan 23 18:44:37.350616 kernel: cni0: port 1(veth47320331) entered disabled state Jan 23 18:44:37.356719 kernel: veth47320331: entered allmulticast mode Jan 23 18:44:37.363589 kernel: veth47320331: entered promiscuous mode Jan 23 18:44:37.418247 kernel: cni0: port 1(veth47320331) entered blocking state Jan 23 18:44:37.418600 kernel: cni0: port 1(veth47320331) entered forwarding state Jan 23 18:44:37.418742 systemd-networkd[1525]: veth47320331: Gained carrier Jan 23 18:44:37.419853 systemd-networkd[1525]: cni0: Gained carrier Jan 23 18:44:37.430625 containerd[1599]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000129a0), "name":"cbr0", "type":"bridge"} Jan 23 18:44:37.430625 containerd[1599]: delegateAdd: netconf sent to delegate plugin: Jan 23 18:44:37.528662 containerd[1599]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-23T18:44:37.527731680Z" level=info msg="connecting to shim 14c609c7d65e4c78256ad390df457d0ec657a84091bce9c160913492d32000c9" address="unix:///run/containerd/s/026386279d9a4d685edd4b03299da9ea43ea277b3d41c74c448ef41ebff23438" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:44:37.691916 systemd[1]: Started cri-containerd-14c609c7d65e4c78256ad390df457d0ec657a84091bce9c160913492d32000c9.scope - libcontainer container 14c609c7d65e4c78256ad390df457d0ec657a84091bce9c160913492d32000c9. 
Jan 23 18:44:37.752064 systemd-resolved[1295]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:44:37.943974 containerd[1599]: time="2026-01-23T18:44:37.943743822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4g2js,Uid:8028d3c5-d584-42cd-9465-7a7c9d6e309a,Namespace:kube-system,Attempt:0,} returns sandbox id \"14c609c7d65e4c78256ad390df457d0ec657a84091bce9c160913492d32000c9\"" Jan 23 18:44:37.949816 kubelet[2828]: E0123 18:44:37.948884 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:37.964245 containerd[1599]: time="2026-01-23T18:44:37.963268569Z" level=info msg="CreateContainer within sandbox \"14c609c7d65e4c78256ad390df457d0ec657a84091bce9c160913492d32000c9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 18:44:37.996853 containerd[1599]: time="2026-01-23T18:44:37.996661150Z" level=info msg="Container 5d6a26fb3dbebee62114de33869435c1a4f1ecb7425336b25b248cd89b2f4866: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:44:38.021647 containerd[1599]: time="2026-01-23T18:44:38.021165532Z" level=info msg="CreateContainer within sandbox \"14c609c7d65e4c78256ad390df457d0ec657a84091bce9c160913492d32000c9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5d6a26fb3dbebee62114de33869435c1a4f1ecb7425336b25b248cd89b2f4866\"" Jan 23 18:44:38.027769 containerd[1599]: time="2026-01-23T18:44:38.026971153Z" level=info msg="StartContainer for \"5d6a26fb3dbebee62114de33869435c1a4f1ecb7425336b25b248cd89b2f4866\"" Jan 23 18:44:38.030194 containerd[1599]: time="2026-01-23T18:44:38.029798045Z" level=info msg="connecting to shim 5d6a26fb3dbebee62114de33869435c1a4f1ecb7425336b25b248cd89b2f4866" address="unix:///run/containerd/s/026386279d9a4d685edd4b03299da9ea43ea277b3d41c74c448ef41ebff23438" protocol=ttrpc version=3 Jan 23 18:44:38.105817 systemd[1]: Started cri-containerd-5d6a26fb3dbebee62114de33869435c1a4f1ecb7425336b25b248cd89b2f4866.scope - libcontainer container 5d6a26fb3dbebee62114de33869435c1a4f1ecb7425336b25b248cd89b2f4866. 
Jan 23 18:44:38.230620 kubelet[2828]: E0123 18:44:38.229275 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:38.232607 containerd[1599]: time="2026-01-23T18:44:38.231990400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p8dnp,Uid:ae5012e5-a967-4430-82ac-dc7784837d1d,Namespace:kube-system,Attempt:0,}" Jan 23 18:44:38.329813 containerd[1599]: time="2026-01-23T18:44:38.329773040Z" level=info msg="StartContainer for \"5d6a26fb3dbebee62114de33869435c1a4f1ecb7425336b25b248cd89b2f4866\" returns successfully" Jan 23 18:44:38.370057 systemd-networkd[1525]: vethb160e215: Link UP Jan 23 18:44:38.387282 kernel: cni0: port 2(vethb160e215) entered blocking state Jan 23 18:44:38.388115 kernel: cni0: port 2(vethb160e215) entered disabled state Jan 23 18:44:38.388164 kernel: vethb160e215: entered allmulticast mode Jan 23 18:44:38.399628 kernel: vethb160e215: entered promiscuous mode Jan 23 18:44:38.440914 kernel: cni0: port 2(vethb160e215) entered blocking state Jan 23 18:44:38.441003 kernel: cni0: port 2(vethb160e215) entered forwarding state Jan 23 18:44:38.441256 systemd-networkd[1525]: vethb160e215: Gained carrier Jan 23 18:44:38.460276 containerd[1599]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000104950), "name":"cbr0", "type":"bridge"} Jan 23 18:44:38.460276 containerd[1599]: delegateAdd: netconf sent to delegate plugin: Jan 23 18:44:38.611114 containerd[1599]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-23T18:44:38.610777714Z" level=info msg="connecting to shim 6eda84b8c92fa9e4dea9a927658b95b260e7a7b392e49cab653befc3e78daa34" address="unix:///run/containerd/s/3cec3c58f93a4d41c1ca027670fa67a0cc0eb8ebb49b9867be55f0686b353539" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:44:38.813015 systemd[1]: Started cri-containerd-6eda84b8c92fa9e4dea9a927658b95b260e7a7b392e49cab653befc3e78daa34.scope - libcontainer container 6eda84b8c92fa9e4dea9a927658b95b260e7a7b392e49cab653befc3e78daa34. 
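The delegate netconf JSON printed above (once per veth) is what the flannel CNI plugin hands to the bridge plugin. A short Go sketch that unmarshals that exact JSON, just to make its structure explicit: bridge "cbr0" with MTU 1450 and host-local IPAM allocating from this node's 192.168.0.0/24 inside the cluster's 192.168.0.0/17 route:

// netconf.go - unmarshal the delegate netconf quoted verbatim from this log.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

const netconf = `{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}`

type delegateConf struct {
	CNIVersion       string `json:"cniVersion"`
	Name             string `json:"name"`
	Type             string `json:"type"`
	MTU              int    `json:"mtu"`
	HairpinMode      bool   `json:"hairpinMode"`
	IPMasq           bool   `json:"ipMasq"`
	IsDefaultGateway bool   `json:"isDefaultGateway"`
	IsGateway        bool   `json:"isGateway"`
	IPAM             struct {
		Type   string                `json:"type"`
		Ranges [][]map[string]string `json:"ranges"`
		Routes []struct {
			Dst string `json:"dst"`
		} `json:"routes"`
	} `json:"ipam"`
}

func main() {
	var c delegateConf
	if err := json.Unmarshal([]byte(netconf), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s/%s mtu=%d subnet=%s route=%s\n",
		c.Name, c.Type, c.MTU, c.IPAM.Ranges[0][0]["subnet"], c.IPAM.Routes[0].Dst)
	// Output: cbr0/bridge mtu=1450 subnet=192.168.0.0/24 route=192.168.0.0/17
}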
Jan 23 18:44:38.819787 kubelet[2828]: E0123 18:44:38.819759 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:38.851242 systemd-networkd[1525]: cni0: Gained IPv6LL Jan 23 18:44:38.885817 kubelet[2828]: I0123 18:44:38.882606 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4g2js" podStartSLOduration=26.878297609 podStartE2EDuration="26.878297609s" podCreationTimestamp="2026-01-23 18:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:44:38.844732805 +0000 UTC m=+32.583707109" watchObservedRunningTime="2026-01-23 18:44:38.878297609 +0000 UTC m=+32.617271924" Jan 23 18:44:38.931873 systemd-resolved[1295]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:44:38.976015 systemd-networkd[1525]: veth47320331: Gained IPv6LL Jan 23 18:44:39.136957 containerd[1599]: time="2026-01-23T18:44:39.135915563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p8dnp,Uid:ae5012e5-a967-4430-82ac-dc7784837d1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6eda84b8c92fa9e4dea9a927658b95b260e7a7b392e49cab653befc3e78daa34\"" Jan 23 18:44:39.138660 kubelet[2828]: E0123 18:44:39.138012 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:39.150446 containerd[1599]: time="2026-01-23T18:44:39.149090756Z" level=info msg="CreateContainer within sandbox \"6eda84b8c92fa9e4dea9a927658b95b260e7a7b392e49cab653befc3e78daa34\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 18:44:39.182609 containerd[1599]: time="2026-01-23T18:44:39.181866755Z" level=info msg="Container d9cda50d12e54da59d0ac4ab97c709213bb515d47621e168f136a6dc17046bb4: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:44:39.205115 containerd[1599]: time="2026-01-23T18:44:39.203754468Z" level=info msg="CreateContainer within sandbox \"6eda84b8c92fa9e4dea9a927658b95b260e7a7b392e49cab653befc3e78daa34\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d9cda50d12e54da59d0ac4ab97c709213bb515d47621e168f136a6dc17046bb4\"" Jan 23 18:44:39.205115 containerd[1599]: time="2026-01-23T18:44:39.205055875Z" level=info msg="StartContainer for \"d9cda50d12e54da59d0ac4ab97c709213bb515d47621e168f136a6dc17046bb4\"" Jan 23 18:44:39.209072 containerd[1599]: time="2026-01-23T18:44:39.208907057Z" level=info msg="connecting to shim d9cda50d12e54da59d0ac4ab97c709213bb515d47621e168f136a6dc17046bb4" address="unix:///run/containerd/s/3cec3c58f93a4d41c1ca027670fa67a0cc0eb8ebb49b9867be55f0686b353539" protocol=ttrpc version=3 Jan 23 18:44:39.273119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4049684471.mount: Deactivated successfully. Jan 23 18:44:39.325105 systemd[1]: Started cri-containerd-d9cda50d12e54da59d0ac4ab97c709213bb515d47621e168f136a6dc17046bb4.scope - libcontainer container d9cda50d12e54da59d0ac4ab97c709213bb515d47621e168f136a6dc17046bb4. 
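The pod_startup_latency_tracker lines report podStartSLOduration as the gap between podCreationTimestamp and the watch-observed running time. A tiny Go sketch redoing that arithmetic with the values copied from the coredns-674b8bbfcf-4g2js entry above:

// startup_latency.go - reproduce the podStartSLOduration arithmetic.
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2026-01-23 18:44:12 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	observed, err := time.Parse(layout, "2026-01-23 18:44:38.878297609 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}

	// Prints 26.878297609s, matching podStartSLOduration in the log.
	fmt.Println("podStartSLOduration =", observed.Sub(created))
}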
Jan 23 18:44:39.533177 containerd[1599]: time="2026-01-23T18:44:39.532756666Z" level=info msg="StartContainer for \"d9cda50d12e54da59d0ac4ab97c709213bb515d47621e168f136a6dc17046bb4\" returns successfully" Jan 23 18:44:39.829642 kubelet[2828]: E0123 18:44:39.828994 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:39.829642 kubelet[2828]: E0123 18:44:39.829139 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:39.985813 kubelet[2828]: I0123 18:44:39.985222 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-p8dnp" podStartSLOduration=27.985198097 podStartE2EDuration="27.985198097s" podCreationTimestamp="2026-01-23 18:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:44:39.9850235 +0000 UTC m=+33.723997806" watchObservedRunningTime="2026-01-23 18:44:39.985198097 +0000 UTC m=+33.724172402" Jan 23 18:44:40.450819 systemd-networkd[1525]: vethb160e215: Gained IPv6LL Jan 23 18:44:40.845260 kubelet[2828]: E0123 18:44:40.843301 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:40.845260 kubelet[2828]: E0123 18:44:40.844030 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:41.848684 kubelet[2828]: E0123 18:44:41.848018 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:45:17.229189 kubelet[2828]: E0123 18:45:17.228862 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:45:20.232714 kubelet[2828]: E0123 18:45:20.231878 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:45:28.230039 kubelet[2828]: E0123 18:45:28.229104 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:45:38.228903 kubelet[2828]: E0123 18:45:38.228760 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:45:43.236026 kubelet[2828]: E0123 18:45:43.235632 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:45:49.230930 kubelet[2828]: E0123 18:45:49.230649 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:45:57.232700 kubelet[2828]: E0123 18:45:57.232189 2828 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:46:21.636606 systemd[1]: Started sshd@5-10.0.0.80:22-10.0.0.1:37744.service - OpenSSH per-connection server daemon (10.0.0.1:37744). Jan 23 18:46:21.737798 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 37744 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:46:21.741078 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:46:21.752576 systemd-logind[1584]: New session 7 of user core. Jan 23 18:46:21.765837 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 18:46:21.950035 sshd[4155]: Connection closed by 10.0.0.1 port 37744 Jan 23 18:46:21.950301 sshd-session[4151]: pam_unix(sshd:session): session closed for user core Jan 23 18:46:21.958007 systemd[1]: sshd@5-10.0.0.80:22-10.0.0.1:37744.service: Deactivated successfully. Jan 23 18:46:21.962931 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 18:46:21.965610 systemd-logind[1584]: Session 7 logged out. Waiting for processes to exit. Jan 23 18:46:21.969643 systemd-logind[1584]: Removed session 7. Jan 23 18:46:25.229488 kubelet[2828]: E0123 18:46:25.229304 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:46:26.986193 systemd[1]: Started sshd@6-10.0.0.80:22-10.0.0.1:56944.service - OpenSSH per-connection server daemon (10.0.0.1:56944). Jan 23 18:46:27.100804 sshd[4194]: Accepted publickey for core from 10.0.0.1 port 56944 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:46:27.104222 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:46:27.114613 systemd-logind[1584]: New session 8 of user core. Jan 23 18:46:27.132741 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 18:46:27.341192 sshd[4198]: Connection closed by 10.0.0.1 port 56944 Jan 23 18:46:27.341551 sshd-session[4194]: pam_unix(sshd:session): session closed for user core Jan 23 18:46:27.350096 systemd[1]: sshd@6-10.0.0.80:22-10.0.0.1:56944.service: Deactivated successfully. Jan 23 18:46:27.356923 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 18:46:27.359162 systemd-logind[1584]: Session 8 logged out. Waiting for processes to exit. Jan 23 18:46:27.361902 systemd-logind[1584]: Removed session 8. Jan 23 18:46:32.357558 systemd[1]: Started sshd@7-10.0.0.80:22-10.0.0.1:60928.service - OpenSSH per-connection server daemon (10.0.0.1:60928). Jan 23 18:46:32.453390 sshd[4239]: Accepted publickey for core from 10.0.0.1 port 60928 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:46:32.457271 sshd-session[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:46:32.468827 systemd-logind[1584]: New session 9 of user core. Jan 23 18:46:32.485800 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 18:46:32.657554 sshd[4243]: Connection closed by 10.0.0.1 port 60928 Jan 23 18:46:32.657997 sshd-session[4239]: pam_unix(sshd:session): session closed for user core Jan 23 18:46:32.665780 systemd[1]: sshd@7-10.0.0.80:22-10.0.0.1:60928.service: Deactivated successfully. Jan 23 18:46:32.669802 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 18:46:32.672077 systemd-logind[1584]: Session 9 logged out. 
Waiting for processes to exit. Jan 23 18:46:32.674711 systemd-logind[1584]: Removed session 9. Jan 23 18:46:37.681776 systemd[1]: Started sshd@8-10.0.0.80:22-10.0.0.1:60944.service - OpenSSH per-connection server daemon (10.0.0.1:60944). Jan 23 18:46:37.763132 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 60944 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:46:37.766778 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:46:37.776587 systemd-logind[1584]: New session 10 of user core. Jan 23 18:46:37.787785 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 18:46:37.958068 sshd[4281]: Connection closed by 10.0.0.1 port 60944 Jan 23 18:46:37.960764 sshd-session[4277]: pam_unix(sshd:session): session closed for user core Jan 23 18:46:37.973264 systemd[1]: sshd@8-10.0.0.80:22-10.0.0.1:60944.service: Deactivated successfully. Jan 23 18:46:37.976712 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 18:46:37.979726 systemd-logind[1584]: Session 10 logged out. Waiting for processes to exit. Jan 23 18:46:37.984117 systemd[1]: Started sshd@9-10.0.0.80:22-10.0.0.1:60958.service - OpenSSH per-connection server daemon (10.0.0.1:60958). Jan 23 18:46:37.987304 systemd-logind[1584]: Removed session 10. Jan 23 18:46:38.071701 sshd[4295]: Accepted publickey for core from 10.0.0.1 port 60958 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:46:38.076891 sshd-session[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:46:38.090555 systemd-logind[1584]: New session 11 of user core. Jan 23 18:46:38.100597 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 18:46:38.327762 sshd[4313]: Connection closed by 10.0.0.1 port 60958 Jan 23 18:46:38.328885 sshd-session[4295]: pam_unix(sshd:session): session closed for user core Jan 23 18:46:38.347105 systemd[1]: sshd@9-10.0.0.80:22-10.0.0.1:60958.service: Deactivated successfully. Jan 23 18:46:38.352246 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 18:46:38.357485 systemd-logind[1584]: Session 11 logged out. Waiting for processes to exit. Jan 23 18:46:38.362880 systemd[1]: Started sshd@10-10.0.0.80:22-10.0.0.1:60974.service - OpenSSH per-connection server daemon (10.0.0.1:60974). Jan 23 18:46:38.365954 systemd-logind[1584]: Removed session 11. Jan 23 18:46:38.465094 sshd[4324]: Accepted publickey for core from 10.0.0.1 port 60974 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:46:38.468214 sshd-session[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:46:38.483242 systemd-logind[1584]: New session 12 of user core. Jan 23 18:46:38.496882 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 18:46:38.686761 sshd[4328]: Connection closed by 10.0.0.1 port 60974 Jan 23 18:46:38.687083 sshd-session[4324]: pam_unix(sshd:session): session closed for user core Jan 23 18:46:38.701216 systemd-logind[1584]: Session 12 logged out. Waiting for processes to exit. Jan 23 18:46:38.702200 systemd[1]: sshd@10-10.0.0.80:22-10.0.0.1:60974.service: Deactivated successfully. Jan 23 18:46:38.707561 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 18:46:38.715925 systemd-logind[1584]: Removed session 12. Jan 23 18:46:43.704557 systemd[1]: Started sshd@11-10.0.0.80:22-10.0.0.1:60730.service - OpenSSH per-connection server daemon (10.0.0.1:60730). 
Jan 23 18:46:43.806025 sshd[4361]: Accepted publickey for core from 10.0.0.1 port 60730 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:46:43.810131 sshd-session[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:46:43.822722 systemd-logind[1584]: New session 13 of user core. Jan 23 18:46:43.831980 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 18:46:44.001231 sshd[4365]: Connection closed by 10.0.0.1 port 60730 Jan 23 18:46:44.001307 sshd-session[4361]: pam_unix(sshd:session): session closed for user core Jan 23 18:46:44.010902 systemd[1]: sshd@11-10.0.0.80:22-10.0.0.1:60730.service: Deactivated successfully. Jan 23 18:46:44.014184 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 18:46:44.016503 systemd-logind[1584]: Session 13 logged out. Waiting for processes to exit. Jan 23 18:46:44.019716 systemd-logind[1584]: Removed session 13. Jan 23 18:46:46.230548 kubelet[2828]: E0123 18:46:46.229531 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:46:49.021762 systemd[1]: Started sshd@12-10.0.0.80:22-10.0.0.1:60742.service - OpenSSH per-connection server daemon (10.0.0.1:60742). Jan 23 18:46:49.145927 sshd[4400]: Accepted publickey for core from 10.0.0.1 port 60742 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:46:49.150185 sshd-session[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:46:49.178202 systemd-logind[1584]: New session 14 of user core. Jan 23 18:46:49.191099 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 18:46:49.429110 sshd[4404]: Connection closed by 10.0.0.1 port 60742 Jan 23 18:46:49.429825 sshd-session[4400]: pam_unix(sshd:session): session closed for user core Jan 23 18:46:49.438907 systemd[1]: sshd@12-10.0.0.80:22-10.0.0.1:60742.service: Deactivated successfully. Jan 23 18:46:49.444735 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 18:46:49.448861 systemd-logind[1584]: Session 14 logged out. Waiting for processes to exit. Jan 23 18:46:49.454141 systemd-logind[1584]: Removed session 14. Jan 23 18:46:50.231759 kubelet[2828]: E0123 18:46:50.231132 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:46:54.449170 systemd[1]: Started sshd@13-10.0.0.80:22-10.0.0.1:44024.service - OpenSSH per-connection server daemon (10.0.0.1:44024). Jan 23 18:46:54.549875 sshd[4437]: Accepted publickey for core from 10.0.0.1 port 44024 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:46:54.553511 sshd-session[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:46:54.565726 systemd-logind[1584]: New session 15 of user core. Jan 23 18:46:54.585992 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 18:46:54.762095 sshd[4441]: Connection closed by 10.0.0.1 port 44024 Jan 23 18:46:54.762888 sshd-session[4437]: pam_unix(sshd:session): session closed for user core Jan 23 18:46:54.779554 systemd[1]: sshd@13-10.0.0.80:22-10.0.0.1:44024.service: Deactivated successfully. Jan 23 18:46:54.782835 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 18:46:54.785180 systemd-logind[1584]: Session 15 logged out. Waiting for processes to exit. 
Jan 23 18:46:54.791283 systemd[1]: Started sshd@14-10.0.0.80:22-10.0.0.1:44040.service - OpenSSH per-connection server daemon (10.0.0.1:44040). Jan 23 18:46:54.793014 systemd-logind[1584]: Removed session 15. Jan 23 18:46:54.907983 sshd[4454]: Accepted publickey for core from 10.0.0.1 port 44040 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:46:54.911852 sshd-session[4454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:46:54.924025 systemd-logind[1584]: New session 16 of user core. Jan 23 18:46:54.936795 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 18:46:55.352716 sshd[4459]: Connection closed by 10.0.0.1 port 44040 Jan 23 18:46:55.353187 sshd-session[4454]: pam_unix(sshd:session): session closed for user core Jan 23 18:46:55.366742 systemd[1]: sshd@14-10.0.0.80:22-10.0.0.1:44040.service: Deactivated successfully. Jan 23 18:46:55.371236 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 18:46:55.375991 systemd-logind[1584]: Session 16 logged out. Waiting for processes to exit. Jan 23 18:46:55.380108 systemd[1]: Started sshd@15-10.0.0.80:22-10.0.0.1:44050.service - OpenSSH per-connection server daemon (10.0.0.1:44050). Jan 23 18:46:55.381577 systemd-logind[1584]: Removed session 16. Jan 23 18:46:55.488107 sshd[4471]: Accepted publickey for core from 10.0.0.1 port 44050 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:46:55.492490 sshd-session[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:46:55.505097 systemd-logind[1584]: New session 17 of user core. Jan 23 18:46:55.525052 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 18:46:56.424699 sshd[4475]: Connection closed by 10.0.0.1 port 44050 Jan 23 18:46:56.427135 sshd-session[4471]: pam_unix(sshd:session): session closed for user core Jan 23 18:46:56.443170 systemd[1]: sshd@15-10.0.0.80:22-10.0.0.1:44050.service: Deactivated successfully. Jan 23 18:46:56.454722 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 18:46:56.457761 systemd-logind[1584]: Session 17 logged out. Waiting for processes to exit. Jan 23 18:46:56.468881 systemd[1]: Started sshd@16-10.0.0.80:22-10.0.0.1:44054.service - OpenSSH per-connection server daemon (10.0.0.1:44054). Jan 23 18:46:56.471972 systemd-logind[1584]: Removed session 17. Jan 23 18:46:56.551766 sshd[4494]: Accepted publickey for core from 10.0.0.1 port 44054 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:46:56.554749 sshd-session[4494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:46:56.565800 systemd-logind[1584]: New session 18 of user core. Jan 23 18:46:56.579882 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 18:46:56.986507 sshd[4498]: Connection closed by 10.0.0.1 port 44054 Jan 23 18:46:56.988807 sshd-session[4494]: pam_unix(sshd:session): session closed for user core Jan 23 18:46:57.001302 systemd[1]: sshd@16-10.0.0.80:22-10.0.0.1:44054.service: Deactivated successfully. Jan 23 18:46:57.008491 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 18:46:57.013680 systemd-logind[1584]: Session 18 logged out. Waiting for processes to exit. Jan 23 18:46:57.019036 systemd[1]: Started sshd@17-10.0.0.80:22-10.0.0.1:44056.service - OpenSSH per-connection server daemon (10.0.0.1:44056). Jan 23 18:46:57.022098 systemd-logind[1584]: Removed session 18. 
Jan 23 18:46:57.113262 sshd[4509]: Accepted publickey for core from 10.0.0.1 port 44056 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:46:57.116014 sshd-session[4509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:46:57.128932 systemd-logind[1584]: New session 19 of user core. Jan 23 18:46:57.148029 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 18:46:57.311305 sshd[4513]: Connection closed by 10.0.0.1 port 44056 Jan 23 18:46:57.312790 sshd-session[4509]: pam_unix(sshd:session): session closed for user core Jan 23 18:46:57.319847 systemd[1]: sshd@17-10.0.0.80:22-10.0.0.1:44056.service: Deactivated successfully. Jan 23 18:46:57.323143 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 18:46:57.326783 systemd-logind[1584]: Session 19 logged out. Waiting for processes to exit. Jan 23 18:46:57.330100 systemd-logind[1584]: Removed session 19. Jan 23 18:47:02.230156 kubelet[2828]: E0123 18:47:02.229924 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:47:02.330774 systemd[1]: Started sshd@18-10.0.0.80:22-10.0.0.1:57326.service - OpenSSH per-connection server daemon (10.0.0.1:57326). Jan 23 18:47:02.433194 sshd[4548]: Accepted publickey for core from 10.0.0.1 port 57326 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:47:02.437072 sshd-session[4548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:47:02.452773 systemd-logind[1584]: New session 20 of user core. Jan 23 18:47:02.464022 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 18:47:02.635057 sshd[4557]: Connection closed by 10.0.0.1 port 57326 Jan 23 18:47:02.635567 sshd-session[4548]: pam_unix(sshd:session): session closed for user core Jan 23 18:47:02.642056 systemd[1]: sshd@18-10.0.0.80:22-10.0.0.1:57326.service: Deactivated successfully. Jan 23 18:47:02.645166 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 18:47:02.649181 systemd-logind[1584]: Session 20 logged out. Waiting for processes to exit. Jan 23 18:47:02.651944 systemd-logind[1584]: Removed session 20. Jan 23 18:47:07.658673 systemd[1]: Started sshd@19-10.0.0.80:22-10.0.0.1:57342.service - OpenSSH per-connection server daemon (10.0.0.1:57342). Jan 23 18:47:07.760058 sshd[4594]: Accepted publickey for core from 10.0.0.1 port 57342 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:47:07.764955 sshd-session[4594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:47:07.779059 systemd-logind[1584]: New session 21 of user core. Jan 23 18:47:07.792106 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 18:47:08.001278 sshd[4598]: Connection closed by 10.0.0.1 port 57342 Jan 23 18:47:08.002056 sshd-session[4594]: pam_unix(sshd:session): session closed for user core Jan 23 18:47:08.010152 systemd[1]: sshd@19-10.0.0.80:22-10.0.0.1:57342.service: Deactivated successfully. Jan 23 18:47:08.014019 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 18:47:08.017065 systemd-logind[1584]: Session 21 logged out. Waiting for processes to exit. Jan 23 18:47:08.021472 systemd-logind[1584]: Removed session 21. 
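The blocks above repeat the same SSH session lifecycle: sshd accepts a public key, PAM opens the session, systemd-logind allocates session N, systemd runs it as session-N.scope, and on disconnect the scope is deactivated and the session removed. A hedged Go sketch that pairs the logind "New session"/"Removed session" messages to report session durations; it assumes a one-entry-per-line journalctl dump on stdin, and the year is supplied manually because the syslog-style timestamps omit it:

// ssh_sessions.go - pair session open/close logind entries and print durations.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	tsRe      = regexp.MustCompile(`^([A-Z][a-z]{2} \d+ \d{2}:\d{2}:\d{2})`)
	newRe     = regexp.MustCompile(`New session (\d+) of user`)
	removedRe = regexp.MustCompile(`Removed session (\d+)\.`)
)

func main() {
	opened := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		m := tsRe.FindStringSubmatch(line)
		if m == nil {
			continue
		}
		// The journal omits the year; any fixed year keeps the durations correct.
		ts, err := time.Parse("Jan 2 15:04:05 2006", m[1]+" 2026")
		if err != nil {
			continue
		}
		if n := newRe.FindStringSubmatch(line); n != nil {
			opened[n[1]] = ts
		}
		if r := removedRe.FindStringSubmatch(line); r != nil {
			if start, ok := opened[r[1]]; ok {
				fmt.Printf("session %s lasted %s\n", r[1], ts.Sub(start))
				delete(opened, r[1])
			}
		}
	}
}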
Jan 23 18:47:09.229498 kubelet[2828]: E0123 18:47:09.228828 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 18:47:09.229498 kubelet[2828]: E0123 18:47:09.228956 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 18:47:13.027793 systemd[1]: Started sshd@20-10.0.0.80:22-10.0.0.1:33670.service - OpenSSH per-connection server daemon (10.0.0.1:33670).
Jan 23 18:47:13.215295 sshd[4634]: Accepted publickey for core from 10.0.0.1 port 33670 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U
Jan 23 18:47:13.219881 sshd-session[4634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:47:13.238307 systemd-logind[1584]: New session 22 of user core.
Jan 23 18:47:13.242908 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 23 18:47:13.596185 sshd[4638]: Connection closed by 10.0.0.1 port 33670
Jan 23 18:47:13.598747 sshd-session[4634]: pam_unix(sshd:session): session closed for user core
Jan 23 18:47:13.611710 systemd[1]: sshd@20-10.0.0.80:22-10.0.0.1:33670.service: Deactivated successfully.
Jan 23 18:47:13.619969 systemd[1]: session-22.scope: Deactivated successfully.
Jan 23 18:47:13.626561 systemd-logind[1584]: Session 22 logged out. Waiting for processes to exit.
Jan 23 18:47:13.632928 systemd-logind[1584]: Removed session 22.
Jan 23 18:47:19.245186 systemd[1]: Started sshd@21-10.0.0.80:22-10.0.0.1:33676.service - OpenSSH per-connection server daemon (10.0.0.1:33676).
Jan 23 18:47:21.425872 kubelet[2828]: E0123 18:47:21.417928 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 18:47:23.128005 sshd[4672]: Accepted publickey for core from 10.0.0.1 port 33676 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U
Jan 23 18:47:23.283473 sshd-session[4672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:47:24.294484 systemd-logind[1584]: New session 23 of user core.
Jan 23 18:47:24.538845 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 23 18:47:24.723816 kubelet[2828]: E0123 18:47:24.716455 2828 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.477s"
Jan 23 18:47:26.107200 sshd[4698]: Connection closed by 10.0.0.1 port 33676
Jan 23 18:47:26.118164 sshd-session[4672]: pam_unix(sshd:session): session closed for user core
Jan 23 18:47:26.274269 systemd[1]: sshd@21-10.0.0.80:22-10.0.0.1:33676.service: Deactivated successfully.
Jan 23 18:47:26.303205 systemd[1]: sshd@21-10.0.0.80:22-10.0.0.1:33676.service: Consumed 1.185s CPU time, 4M memory peak.
Jan 23 18:47:26.318122 systemd[1]: session-23.scope: Deactivated successfully.
Jan 23 18:47:26.321244 systemd-logind[1584]: Session 23 logged out. Waiting for processes to exit.
Jan 23 18:47:26.325795 systemd-logind[1584]: Removed session 23.
Jan 23 18:47:29.130743 systemd[1728]: Created slice background.slice - User Background Tasks Slice.
Jan 23 18:47:29.136285 systemd[1728]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories...
Jan 23 18:47:29.204948 systemd[1728]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories.
Jan 23 18:47:31.136745 systemd[1]: Started sshd@22-10.0.0.80:22-10.0.0.1:44168.service - OpenSSH per-connection server daemon (10.0.0.1:44168).
Jan 23 18:47:31.258980 sshd[4742]: Accepted publickey for core from 10.0.0.1 port 44168 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U
Jan 23 18:47:31.263242 sshd-session[4742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:47:31.279772 systemd-logind[1584]: New session 24 of user core.
Jan 23 18:47:31.291093 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 23 18:47:31.570542 sshd[4746]: Connection closed by 10.0.0.1 port 44168
Jan 23 18:47:31.570901 sshd-session[4742]: pam_unix(sshd:session): session closed for user core
Jan 23 18:47:31.579907 systemd[1]: sshd@22-10.0.0.80:22-10.0.0.1:44168.service: Deactivated successfully.
Jan 23 18:47:31.583796 systemd[1]: session-24.scope: Deactivated successfully.
Jan 23 18:47:31.586809 systemd-logind[1584]: Session 24 logged out. Waiting for processes to exit.
Jan 23 18:47:31.589924 systemd-logind[1584]: Removed session 24.
Jan 23 18:47:32.233846 kubelet[2828]: E0123 18:47:32.233091 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"