Jan 20 02:24:09.668063 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:14:52 -00 2026
Jan 20 02:24:09.668104 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 02:24:09.668117 kernel: BIOS-provided physical RAM map:
Jan 20 02:24:09.668129 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 20 02:24:09.668137 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 20 02:24:09.668144 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 20 02:24:09.668153 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 20 02:24:09.668161 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 20 02:24:09.668170 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 20 02:24:09.668179 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 20 02:24:09.668188 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 20 02:24:09.668196 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 20 02:24:09.668209 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 20 02:24:09.668217 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 20 02:24:09.668228 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 20 02:24:09.668237 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 20 02:24:09.668248 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 20 02:24:09.668262 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 20 02:24:09.668270 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 20 02:24:09.668279 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 20 02:24:09.668287 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 20 02:24:09.668295 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 20 02:24:09.668304 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 20 02:24:09.668314 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 02:24:09.668325 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 20 02:24:09.668334 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 02:24:09.668343 kernel: NX (Execute Disable) protection: active
Jan 20 02:24:09.668351 kernel: APIC: Static calls initialized
Jan 20 02:24:09.668364 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jan 20 02:24:09.668373 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jan 20 02:24:09.668503 kernel: extended physical RAM map:
Jan 20 02:24:09.668516 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 20 02:24:09.668524 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 20 02:24:09.668532 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 20 02:24:09.668541 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 20 02:24:09.668552 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 20 02:24:09.669962 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 20 02:24:09.669977 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 20 02:24:09.669990 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jan 20 02:24:09.670008 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jan 20 02:24:09.670022 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jan 20 02:24:09.670033 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jan 20 02:24:09.670044 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jan 20 02:24:09.670055 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 20 02:24:09.670069 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 20 02:24:09.670082 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 20 02:24:09.670091 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 20 02:24:09.670101 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 20 02:24:09.670113 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 20 02:24:09.670125 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 20 02:24:09.670134 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 20 02:24:09.670145 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 20 02:24:09.670157 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 20 02:24:09.670167 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 20 02:24:09.670176 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 20 02:24:09.670190 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 02:24:09.670199 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 20 02:24:09.670210 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 02:24:09.670221 kernel: efi: EFI v2.7 by EDK II
Jan 20 02:24:09.670233 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jan 20 02:24:09.670242 kernel: random: crng init done
Jan 20 02:24:09.670251 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 20 02:24:09.670264 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 20 02:24:09.670274 kernel: secureboot: Secure boot disabled
Jan 20 02:24:09.670283 kernel: SMBIOS 2.8 present.
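The e820/EFI map above is the firmware's declaration of which physical address ranges the kernel may treat as RAM. As a worked illustration (not part of the boot flow), here is a minimal Python sketch that totals the usable ranges from a captured log; the file name boot.log is a hypothetical stand-in, and the sketch assumes entries in exactly the "BIOS-e820: [mem 0xSTART-0xEND] type" form printed above, with inclusive end addresses.

import re

# hypothetical capture of the dmesg lines above
E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\S.*)$")

usable = 0
with open("boot.log") as log:
    for line in log:
        m = E820.search(line)
        if m and m.group(3).strip() == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            usable += end - start + 1  # ranges are inclusive
print(f"firmware-reported usable RAM: {usable / 2**20:.1f} MiB")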
Jan 20 02:24:09.670296 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 20 02:24:09.670310 kernel: DMI: Memory slots populated: 1/1
Jan 20 02:24:09.670321 kernel: Hypervisor detected: KVM
Jan 20 02:24:09.670333 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 20 02:24:09.670343 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 02:24:09.670354 kernel: kvm-clock: using sched offset of 27520832644 cycles
Jan 20 02:24:09.670366 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 02:24:09.670376 kernel: tsc: Detected 2445.426 MHz processor
Jan 20 02:24:09.670499 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 02:24:09.670510 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 02:24:09.670522 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 20 02:24:09.670533 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 20 02:24:09.670548 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 02:24:09.672008 kernel: Using GB pages for direct mapping
Jan 20 02:24:09.672025 kernel: ACPI: Early table checksum verification disabled
Jan 20 02:24:09.672036 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 20 02:24:09.672049 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 20 02:24:09.672059 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:24:09.672068 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:24:09.672077 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 20 02:24:09.672087 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:24:09.672104 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:24:09.672116 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:24:09.672125 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:24:09.672135 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 20 02:24:09.672144 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 20 02:24:09.672153 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 20 02:24:09.672163 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 20 02:24:09.672173 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 20 02:24:09.672189 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 20 02:24:09.672199 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 20 02:24:09.672208 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 20 02:24:09.672218 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 20 02:24:09.672227 kernel: No NUMA configuration found
Jan 20 02:24:09.672238 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 20 02:24:09.672248 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jan 20 02:24:09.672261 kernel: Zone ranges:
Jan 20 02:24:09.672270 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 02:24:09.672284 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 20 02:24:09.672293 kernel: Normal empty
Jan 20 02:24:09.672302 kernel: Device empty
Jan 20 02:24:09.672311 kernel: Movable zone start for each node
Jan 20 02:24:09.672321 kernel: Early memory node ranges
Jan 20 02:24:09.672331 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 20 02:24:09.672341 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 20 02:24:09.672351 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 20 02:24:09.672363 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 20 02:24:09.672374 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 20 02:24:09.673981 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 20 02:24:09.673994 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jan 20 02:24:09.674004 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jan 20 02:24:09.674013 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 20 02:24:09.674024 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 02:24:09.674050 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 20 02:24:09.674063 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 20 02:24:09.674073 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 02:24:09.674082 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 20 02:24:09.674095 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 20 02:24:09.674106 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 20 02:24:09.674115 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 20 02:24:09.674129 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 20 02:24:09.674138 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 02:24:09.674151 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 02:24:09.674162 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 02:24:09.674171 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 02:24:09.674186 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 02:24:09.674195 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 02:24:09.674208 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 02:24:09.674221 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 02:24:09.674232 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 02:24:09.674242 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 20 02:24:09.674253 kernel: TSC deadline timer available
Jan 20 02:24:09.674265 kernel: CPU topo: Max. logical packages: 1
Jan 20 02:24:09.674277 kernel: CPU topo: Max. logical dies: 1
Jan 20 02:24:09.674293 kernel: CPU topo: Max. dies per package: 1
Jan 20 02:24:09.674305 kernel: CPU topo: Max. threads per core: 1
Jan 20 02:24:09.674316 kernel: CPU topo: Num. cores per package: 4
Jan 20 02:24:09.674328 kernel: CPU topo: Num. threads per package: 4
Jan 20 02:24:09.674338 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 20 02:24:09.674350 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 02:24:09.674363 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 20 02:24:09.674374 kernel: kvm-guest: setup PV sched yield
Jan 20 02:24:09.674497 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 20 02:24:09.674514 kernel: Booting paravirtualized kernel on KVM
Jan 20 02:24:09.674523 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 02:24:09.674534 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 20 02:24:09.674547 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 20 02:24:09.674558 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 20 02:24:09.679860 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 20 02:24:09.679871 kernel: kvm-guest: PV spinlocks enabled
Jan 20 02:24:09.679884 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 02:24:09.679898 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 02:24:09.679916 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 02:24:09.679926 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 02:24:09.679937 kernel: Fallback order for Node 0: 0
Jan 20 02:24:09.679950 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jan 20 02:24:09.679960 kernel: Policy zone: DMA32
Jan 20 02:24:09.679970 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 02:24:09.679979 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 20 02:24:09.679989 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 20 02:24:09.680007 kernel: ftrace: allocated 157 pages with 5 groups
Jan 20 02:24:09.680017 kernel: Dynamic Preempt: voluntary
Jan 20 02:24:09.680027 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 02:24:09.680038 kernel: rcu: RCU event tracing is enabled.
Jan 20 02:24:09.680049 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 20 02:24:09.680062 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 02:24:09.680072 kernel: Rude variant of Tasks RCU enabled.
Jan 20 02:24:09.680082 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 02:24:09.680092 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 02:24:09.680107 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 20 02:24:09.680119 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:24:09.680129 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:24:09.680139 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:24:09.680148 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 20 02:24:09.680160 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
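Note that the "Kernel command line:" entry repeats rootflags=rw mount.usrflags=ro from the "Command line:" entry at the top of the log: the bootloader prepends those tokens, and the kernel keeps both copies. A small sketch of splitting such a command line into key/value pairs while keeping duplicates visible; reading /proc/cmdline is an assumption about where you inspect it, and any captured string works the same way.

# split a kernel command line into (key, value) pairs; duplicates such as
# the doubled rootflags=rw above are preserved rather than collapsed
with open("/proc/cmdline") as f:
    tokens = f.read().split()
params = [tuple(tok.partition("=")[::2]) for tok in tokens]
for key, val in params:
    print(f"{key} = {val or '(flag)'}")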
Jan 20 02:24:09.680172 kernel: Console: colour dummy device 80x25
Jan 20 02:24:09.680182 kernel: printk: legacy console [ttyS0] enabled
Jan 20 02:24:09.680191 kernel: ACPI: Core revision 20240827
Jan 20 02:24:09.680205 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 20 02:24:09.680217 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 02:24:09.680228 kernel: x2apic enabled
Jan 20 02:24:09.680237 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 02:24:09.680247 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 20 02:24:09.680257 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 20 02:24:09.680270 kernel: kvm-guest: setup PV IPIs
Jan 20 02:24:09.680281 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 20 02:24:09.680291 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 02:24:09.680304 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 20 02:24:09.680315 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 02:24:09.680327 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 02:24:09.680338 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 02:24:09.680348 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 02:24:09.680357 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 02:24:09.680367 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 02:24:09.680380 kernel: Speculative Store Bypass: Vulnerable
Jan 20 02:24:09.680501 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 02:24:09.680517 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 02:24:09.680530 kernel: active return thunk: srso_alias_return_thunk
Jan 20 02:24:09.680541 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 02:24:09.680553 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 20 02:24:09.680769 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 20 02:24:09.680782 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 02:24:09.680795 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 02:24:09.680805 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 02:24:09.680821 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 02:24:09.680833 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 02:24:09.680845 kernel: Freeing SMP alternatives memory: 32K
Jan 20 02:24:09.680857 kernel: pid_max: default: 32768 minimum: 301
Jan 20 02:24:09.680869 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 20 02:24:09.680880 kernel: landlock: Up and running.
Jan 20 02:24:09.680892 kernel: SELinux: Initializing.
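The skipped delay-loop calibration above is self-consistent: with the standard BogoMIPS formula and an assumed CONFIG_HZ of 1000, lpj=2445426 gives exactly the 4890.85 printed, and four such CPUs account for the 19563.40 total that smpboot reports further down (modulo rounding). In Python:

lpj = 2_445_426       # loops_per_jiffy from the calibration line above
HZ = 1000             # assumed CONFIG_HZ for this kernel
bogomips = lpj * HZ / 500_000
print(f"{bogomips:.2f}")        # 4890.85, matching the log
print(f"{4 * bogomips:.2f}")    # ~19563.41, cf. smpboot's 19563.40 total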
Jan 20 02:24:09.680904 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 02:24:09.680915 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 02:24:09.680933 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 20 02:24:09.680944 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 20 02:24:09.680957 kernel: signal: max sigframe size: 1776
Jan 20 02:24:09.680968 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 02:24:09.680980 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 02:24:09.680993 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 20 02:24:09.681004 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 02:24:09.681015 kernel: smp: Bringing up secondary CPUs ...
Jan 20 02:24:09.681028 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 02:24:09.681043 kernel: .... node #0, CPUs: #1 #2 #3
Jan 20 02:24:09.681056 kernel: smp: Brought up 1 node, 4 CPUs
Jan 20 02:24:09.681066 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 20 02:24:09.681079 kernel: Memory: 2414472K/2565800K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46204K init, 2556K bss, 145388K reserved, 0K cma-reserved)
Jan 20 02:24:09.681089 kernel: devtmpfs: initialized
Jan 20 02:24:09.681102 kernel: x86/mm: Memory block size: 128MB
Jan 20 02:24:09.681114 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 20 02:24:09.681125 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 20 02:24:09.681137 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 20 02:24:09.681154 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 20 02:24:09.681165 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jan 20 02:24:09.681176 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 20 02:24:09.681190 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 02:24:09.681200 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 20 02:24:09.681212 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 02:24:09.681225 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 02:24:09.681236 kernel: audit: initializing netlink subsys (disabled)
Jan 20 02:24:09.681246 kernel: audit: type=2000 audit(1768875774.084:1): state=initialized audit_enabled=0 res=1
Jan 20 02:24:09.681264 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 02:24:09.681274 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 02:24:09.681285 kernel: cpuidle: using governor menu
Jan 20 02:24:09.681295 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 02:24:09.681306 kernel: dca service started, version 1.12.1
Jan 20 02:24:09.681320 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 20 02:24:09.681330 kernel: PCI: Using configuration type 1 for base access
Jan 20 02:24:09.681341 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
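The "(order: N, ... bytes)" annotations on the hash-table lines are related by order = log2(bytes / PAGE_SIZE). A quick check against three of the allocations logged above, assuming 4 KiB pages:

import math

PAGE_SIZE = 4096
tables = {"Mount-cache": 65536, "futex": 65536, "Dentry cache": 4194304}
for name, size in tables.items():
    order = int(math.log2(size // PAGE_SIZE))
    print(f"{name}: order {order}")   # 4, 4 and 10, as logged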
Jan 20 02:24:09.681354 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 02:24:09.681370 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 02:24:09.681382 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 02:24:09.681503 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 02:24:09.681515 kernel: ACPI: Added _OSI(Module Device)
Jan 20 02:24:09.681527 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 02:24:09.681539 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 02:24:09.681548 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 02:24:09.681559 kernel: ACPI: Interpreter enabled
Jan 20 02:24:09.681767 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 20 02:24:09.681786 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 02:24:09.681797 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 02:24:09.681807 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 02:24:09.681816 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 02:24:09.681826 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 02:24:09.684279 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 02:24:09.687072 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 20 02:24:09.687275 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 20 02:24:09.687291 kernel: PCI host bridge to bus 0000:00
Jan 20 02:24:09.692158 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 02:24:09.698510 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 02:24:09.703159 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 02:24:09.703336 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 20 02:24:09.703818 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 20 02:24:09.704002 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 20 02:24:09.704176 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 02:24:09.708172 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 20 02:24:09.709304 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 20 02:24:09.713028 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jan 20 02:24:09.713208 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jan 20 02:24:09.713377 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 20 02:24:09.717972 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 02:24:09.718300 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 27343 usecs
Jan 20 02:24:09.719124 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 02:24:09.719294 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jan 20 02:24:09.719798 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jan 20 02:24:09.719980 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 20 02:24:09.720261 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 02:24:09.726902 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jan 20 02:24:09.727105 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jan 20 02:24:09.727279 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 20 02:24:09.727777 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 02:24:09.727950 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jan 20 02:24:09.728118 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jan 20 02:24:09.728526 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 20 02:24:09.728913 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jan 20 02:24:09.734932 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 20 02:24:09.738117 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 02:24:09.738871 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 43945 usecs
Jan 20 02:24:09.742051 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 20 02:24:09.742242 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jan 20 02:24:09.742544 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jan 20 02:24:09.745182 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 20 02:24:09.745379 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jan 20 02:24:09.745524 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 02:24:09.745536 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 02:24:09.745547 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 02:24:09.745758 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 02:24:09.745784 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 02:24:09.745797 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 02:24:09.745809 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 02:24:09.745820 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 02:24:09.745832 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 02:24:09.745845 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 02:24:09.745857 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 02:24:09.745867 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 02:24:09.745877 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 02:24:09.745896 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 02:24:09.745906 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 02:24:09.745918 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 02:24:09.745931 kernel: iommu: Default domain type: Translated
Jan 20 02:24:09.745942 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 02:24:09.745954 kernel: efivars: Registered efivars operations
Jan 20 02:24:09.745966 kernel: PCI: Using ACPI for IRQ routing
Jan 20 02:24:09.745977 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 02:24:09.745987 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 20 02:24:09.745998 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 20 02:24:09.746015 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jan 20 02:24:09.746026 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jan 20 02:24:09.746036 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 20 02:24:09.746046 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 20 02:24:09.746056 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jan 20 02:24:09.746067 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 20 02:24:09.746250 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 02:24:09.751732 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 02:24:09.752060 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 02:24:09.752079 kernel: vgaarb: loaded
Jan 20 02:24:09.752093 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 20 02:24:09.752105 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 20 02:24:09.752115 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 02:24:09.752129 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 02:24:09.752140 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 02:24:09.752151 kernel: pnp: PnP ACPI init
Jan 20 02:24:09.760922 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 20 02:24:09.760974 kernel: pnp: PnP ACPI: found 6 devices
Jan 20 02:24:09.760988 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 02:24:09.760999 kernel: NET: Registered PF_INET protocol family
Jan 20 02:24:09.761008 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 02:24:09.761018 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 02:24:09.761056 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 02:24:09.761071 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 02:24:09.761083 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 02:24:09.761096 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 02:24:09.761106 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 02:24:09.761116 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 02:24:09.761127 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 02:24:09.761139 kernel: NET: Registered PF_XDP protocol family
Jan 20 02:24:09.761332 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 20 02:24:09.766899 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jan 20 02:24:09.767091 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 02:24:09.767256 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 02:24:09.771743 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 02:24:09.771915 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 20 02:24:09.772074 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 20 02:24:09.772230 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 20 02:24:09.772245 kernel: PCI: CLS 0 bytes, default 64
Jan 20 02:24:09.772256 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 02:24:09.772268 kernel: Initialise system trusted keyrings
Jan 20 02:24:09.772288 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 02:24:09.772298 kernel: Key type asymmetric registered
Jan 20 02:24:09.772308 kernel: Asymmetric key parser 'x509' registered
Jan 20 02:24:09.772318 kernel: hrtimer: interrupt took 5303830 ns
Jan 20 02:24:09.772328 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 20 02:24:09.772342 kernel: io scheduler mq-deadline registered
Jan 20 02:24:09.772352 kernel: io scheduler kyber registered
Jan 20 02:24:09.772362 kernel: io scheduler bfq registered
Jan 20 02:24:09.772372 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 02:24:09.776081 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 02:24:09.776104 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 02:24:09.776114 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 02:24:09.776128 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 02:24:09.776140 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 02:24:09.776151 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 02:24:09.776166 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 02:24:09.776176 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 02:24:09.777251 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 02:24:09.777276 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 20 02:24:09.779922 kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 02:24:09.780097 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T02:23:57 UTC (1768875837)
Jan 20 02:24:09.780256 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 20 02:24:09.780273 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 02:24:09.780297 kernel: efifb: probing for efifb
Jan 20 02:24:09.780309 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 20 02:24:09.780320 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 20 02:24:09.780331 kernel: efifb: scrolling: redraw
Jan 20 02:24:09.780345 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 20 02:24:09.780357 kernel: Console: switching to colour frame buffer device 160x50
Jan 20 02:24:09.780368 kernel: fb0: EFI VGA frame buffer device
Jan 20 02:24:09.780382 kernel: pstore: Using crash dump compression: deflate
Jan 20 02:24:09.784930 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 20 02:24:09.784949 kernel: NET: Registered PF_INET6 protocol family
Jan 20 02:24:09.784959 kernel: Segment Routing with IPv6
Jan 20 02:24:09.784969 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 02:24:09.784982 kernel: NET: Registered PF_PACKET protocol family
Jan 20 02:24:09.784994 kernel: Key type dns_resolver registered
Jan 20 02:24:09.785005 kernel: IPI shorthand broadcast: enabled
Jan 20 02:24:09.785015 kernel: sched_clock: Marking stable (57397341338, 5033774121)->(69039205571, -6608090112)
Jan 20 02:24:09.785025 kernel: registered taskstats version 1
Jan 20 02:24:09.785035 kernel: Loading compiled-in X.509 certificates
Jan 20 02:24:09.785053 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 5eaf2083485884e476a8ac33c4b07b82eff139e9'
Jan 20 02:24:09.785063 kernel: Demotion targets for Node 0: null
Jan 20 02:24:09.785073 kernel: Key type .fscrypt registered
Jan 20 02:24:09.785083 kernel: Key type fscrypt-provisioning registered
Jan 20 02:24:09.785095 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 02:24:09.785108 kernel: ima: Allocated hash algorithm: sha1
Jan 20 02:24:09.785118 kernel: ima: No architecture policies found
Jan 20 02:24:09.785128 kernel: clk: Disabling unused clocks
Jan 20 02:24:09.785138 kernel: Warning: unable to open an initial console.
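The audit and RTC lines carry raw epoch seconds, which makes for an easy cross-check: 1768875837 is the 2026-01-20T02:23:57 UTC that rtc_cmos prints, and the audit stamp 1768875774.084 lands about a minute earlier, consistent with the boot sequence. A one-liner sketch:

from datetime import datetime, timezone

# epoch stamps copied from the audit(...) and rtc_cmos lines above
for ts in (1768875774.084, 1768875837):
    print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())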
Jan 20 02:24:09.785155 kernel: Freeing unused kernel image (initmem) memory: 46204K
Jan 20 02:24:09.785167 kernel: Write protecting the kernel read-only data: 40960k
Jan 20 02:24:09.785177 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 20 02:24:09.785187 kernel: Run /init as init process
Jan 20 02:24:09.785197 kernel: with arguments:
Jan 20 02:24:09.785211 kernel: /init
Jan 20 02:24:09.785222 kernel: with environment:
Jan 20 02:24:09.785232 kernel: HOME=/
Jan 20 02:24:09.785242 kernel: TERM=linux
Jan 20 02:24:09.785260 systemd[1]: Successfully made /usr/ read-only.
Jan 20 02:24:09.785277 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 20 02:24:09.785289 systemd[1]: Detected virtualization kvm.
Jan 20 02:24:09.785299 systemd[1]: Detected architecture x86-64.
Jan 20 02:24:09.785311 systemd[1]: Running in initrd.
Jan 20 02:24:09.785325 systemd[1]: No hostname configured, using default hostname.
Jan 20 02:24:09.785336 systemd[1]: Hostname set to .
Jan 20 02:24:09.785350 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 02:24:09.785361 systemd[1]: Queued start job for default target initrd.target.
Jan 20 02:24:09.785375 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 02:24:09.785817 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 02:24:09.785833 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 20 02:24:09.785845 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 02:24:09.785860 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 20 02:24:09.785877 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 20 02:24:09.785889 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 20 02:24:09.785901 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 20 02:24:09.785914 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 02:24:09.785926 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 02:24:09.785937 systemd[1]: Reached target paths.target - Path Units.
Jan 20 02:24:09.785948 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 02:24:09.785958 systemd[1]: Reached target swap.target - Swaps.
Jan 20 02:24:09.785977 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 02:24:09.785988 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 02:24:09.785999 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 02:24:09.786010 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 20 02:24:09.786023 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 20 02:24:09.786036 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
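"Initializing machine ID from VM UUID" refers to systemd seeding the machine ID from the hypervisor-provided SMBIOS product UUID on first boot. A rough sketch of reading the same value from sysfs; the path is standard, but the normalization here is an assumption for illustration, not systemd's exact code path:

# read the SMBIOS product UUID that the VM exposes to the guest
with open("/sys/class/dmi/id/product_uuid") as f:
    uuid = f.read().strip().lower()
print(uuid.replace("-", ""))   # machine-id style: 32 lowercase hex digits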
Jan 20 02:24:09.786047 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 02:24:09.786057 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 02:24:09.786068 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 02:24:09.786087 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 20 02:24:09.786098 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 02:24:09.786109 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 20 02:24:09.786120 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 20 02:24:09.786134 systemd[1]: Starting systemd-fsck-usr.service...
Jan 20 02:24:09.786146 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 02:24:09.786156 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 02:24:09.786167 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 02:24:09.786230 systemd-journald[201]: Collecting audit messages is disabled.
Jan 20 02:24:09.786268 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 20 02:24:09.786284 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 02:24:09.786298 systemd-journald[201]: Journal started
Jan 20 02:24:09.786323 systemd-journald[201]: Runtime Journal (/run/log/journal/354e1f0504204e29a8e06a72e4e1eb37) is 6M, max 48.1M, 42.1M free.
Jan 20 02:24:09.830377 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 02:24:09.949985 systemd[1]: Finished systemd-fsck-usr.service.
Jan 20 02:24:10.218925 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 02:24:10.243767 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 02:24:10.605185 systemd-modules-load[205]: Inserted module 'overlay'
Jan 20 02:24:10.809269 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 02:24:10.954941 systemd-tmpfiles[216]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 20 02:24:11.114498 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 02:24:11.287963 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 02:24:11.418936 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 02:24:11.665867 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 02:24:11.902251 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 02:24:12.110193 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 02:24:12.268258 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 20 02:24:12.567864 dracut-cmdline[239]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 02:24:12.865018 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 02:24:13.004883 kernel: Bridge firewalling registered
Jan 20 02:24:13.013971 systemd-modules-load[205]: Inserted module 'br_netfilter'
Jan 20 02:24:13.032536 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 02:24:13.120367 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 02:24:13.628036 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 02:24:13.747181 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 02:24:14.953289 systemd-resolved[293]: Positive Trust Anchors:
Jan 20 02:24:14.959858 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 02:24:14.967760 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 02:24:15.003954 systemd-resolved[293]: Defaulting to hostname 'linux'.
Jan 20 02:24:15.030920 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 02:24:15.406365 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 02:24:15.759751 kernel: SCSI subsystem initialized
Jan 20 02:24:15.813188 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 02:24:15.997116 kernel: iscsi: registered transport (tcp)
Jan 20 02:24:16.219791 kernel: iscsi: registered transport (qla4xxx)
Jan 20 02:24:16.220075 kernel: QLogic iSCSI HBA Driver
Jan 20 02:24:16.760523 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 20 02:24:17.040175 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 02:24:17.127974 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 02:24:17.908150 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 20 02:24:18.004801 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 20 02:24:18.592670 kernel: raid6: avx2x4 gen() 8582 MB/s
Jan 20 02:24:18.592768 kernel: raid6: avx2x2 gen() 1237 MB/s
Jan 20 02:24:18.636900 kernel: raid6: avx2x1 gen() 5077 MB/s
Jan 20 02:24:18.636985 kernel: raid6: using algorithm avx2x4 gen() 8582 MB/s
Jan 20 02:24:18.694422 kernel: raid6: .... xor() 1346 MB/s, rmw enabled
Jan 20 02:24:18.694829 kernel: raid6: using avx2x2 recovery algorithm
Jan 20 02:24:18.814144 kernel: xor: automatically using best checksumming function avx
Jan 20 02:24:20.603303 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 20 02:24:20.708553 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 02:24:20.736134 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 02:24:20.930262 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Jan 20 02:24:20.996826 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 02:24:21.042031 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 20 02:24:21.462206 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Jan 20 02:24:21.962094 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 02:24:22.041361 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 02:24:22.721037 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 02:24:22.766869 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 20 02:24:23.501900 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 02:24:23.502245 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 02:24:23.557053 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 02:24:23.684806 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 02:24:23.780371 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 20 02:24:23.870077 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 02:24:23.907327 kernel: cryptd: max_cpu_qlen set to 1000
Jan 20 02:24:23.870399 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 02:24:23.940850 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 02:24:24.233783 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 20 02:24:24.298464 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 02:24:24.370101 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 20 02:24:24.432139 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 20 02:24:24.432232 kernel: GPT:9289727 != 19775487
Jan 20 02:24:24.432253 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 20 02:24:24.440010 kernel: GPT:9289727 != 19775487
Jan 20 02:24:24.488711 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 20 02:24:24.488794 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 02:24:24.795256 kernel: libata version 3.00 loaded.
Jan 20 02:24:25.117014 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 20 02:24:25.387209 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 20 02:24:25.598160 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 20 02:24:25.706796 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
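The GPT complaints above are arithmetic, not corruption: on a disk with 19775488 sectors the alternate (backup) GPT header must sit in the last LBA, 19775487, but the header on disk says 9289727, i.e. the image was built for a ~4.4 GiB disk and the virtual disk was later grown to 10.1 GB. The numbers check out:

sectors = 19_775_488        # from the virtio_blk capacity line above
stored_alt = 9_289_727      # from the "GPT:9289727 != 19775487" complaint
print(sectors - 1)                          # 19775487: where the backup header belongs
print((stored_alt + 1) * 512 / 2**30)       # ~4.43 GiB: the size the image was built for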
Jan 20 02:24:25.848888 kernel: ahci 0000:00:1f.2: version 3.0
Jan 20 02:24:25.849210 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 20 02:24:25.899017 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 02:24:26.294377 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 20 02:24:26.300961 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 20 02:24:26.301213 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 20 02:24:26.301423 kernel: scsi host0: ahci
Jan 20 02:24:26.312013 kernel: scsi host1: ahci
Jan 20 02:24:26.320875 kernel: scsi host2: ahci
Jan 20 02:24:26.321414 kernel: AES CTR mode by8 optimization enabled
Jan 20 02:24:26.321432 kernel: scsi host3: ahci
Jan 20 02:24:26.069397 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 20 02:24:26.699209 kernel: scsi host4: ahci
Jan 20 02:24:26.705325 kernel: scsi host5: ahci
Jan 20 02:24:26.705901 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1
Jan 20 02:24:26.705919 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1
Jan 20 02:24:26.705933 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1
Jan 20 02:24:26.705947 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1
Jan 20 02:24:26.705970 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1
Jan 20 02:24:26.705985 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1
Jan 20 02:24:26.706242 disk-uuid[570]: Primary Header is updated.
Jan 20 02:24:26.706242 disk-uuid[570]: Secondary Entries is updated.
Jan 20 02:24:26.706242 disk-uuid[570]: Secondary Header is updated.
Jan 20 02:24:26.931195 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 02:24:27.020942 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 20 02:24:27.021027 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 20 02:24:27.051104 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 02:24:27.087881 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 20 02:24:27.165262 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 20 02:24:27.224298 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 20 02:24:27.274263 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 20 02:24:27.320204 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 20 02:24:27.356005 kernel: ata3.00: LPM support broken, forcing max_power
Jan 20 02:24:27.356081 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 20 02:24:27.356100 kernel: ata3.00: applying bridge limits
Jan 20 02:24:27.426250 kernel: ata3.00: LPM support broken, forcing max_power
Jan 20 02:24:27.426324 kernel: ata3.00: configured for UDMA/100
Jan 20 02:24:27.518071 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 20 02:24:27.897456 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 20 02:24:27.898038 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 02:24:27.992420 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 20 02:24:28.069823 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 02:24:28.086028 disk-uuid[591]: The operation has completed successfully.
Jan 20 02:24:29.206784 systemd[1]: disk-uuid.service: Deactivated successfully.
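disk-uuid.service regenerates the GPT identifiers and rewrites both headers ("Primary Header is updated." and the following lines), which also moves the backup header to the true end of the disk. A minimal sketch of inspecting the primary header afterwards; it assumes a 512-byte-sector disk at /dev/vda and root privileges, and unpacks only a few fields of the standard GPT header layout:

import struct

with open("/dev/vda", "rb") as disk:
    disk.seek(512)             # primary GPT header lives in LBA 1
    hdr = disk.read(92)

sig, revision, hdr_size = struct.unpack_from("<8sII", hdr, 0)
my_lba, alt_lba = struct.unpack_from("<QQ", hdr, 24)
print(sig)                     # b'EFI PART' for a valid header
print(my_lba, alt_lba)         # 1 and, after the rewrite, the disk's last LBA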
Jan 20 02:24:29.207155 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 20 02:24:29.271370 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 20 02:24:29.447915 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 20 02:24:29.558262 sh[649]: Success
Jan 20 02:24:29.519454 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 02:24:29.719442 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 02:24:29.794009 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 02:24:29.944445 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 20 02:24:30.424491 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 02:24:30.624334 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 02:24:30.624380 kernel: device-mapper: uevent: version 1.0.3
Jan 20 02:24:30.624395 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 20 02:24:31.171833 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 20 02:24:31.617204 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 20 02:24:31.705956 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 20 02:24:31.903320 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 20 02:24:32.061867 kernel: BTRFS: device fsid 1cad4abe-82cb-4052-9906-9dfb1f3e3340 devid 1 transid 44 /dev/mapper/usr (253:0) scanned by mount (672)
Jan 20 02:24:32.118093 kernel: BTRFS info (device dm-0): first mount of filesystem 1cad4abe-82cb-4052-9906-9dfb1f3e3340
Jan 20 02:24:32.118496 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 20 02:24:32.392218 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 20 02:24:32.392410 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 20 02:24:32.413462 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 20 02:24:32.457024 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 20 02:24:32.559165 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 20 02:24:32.710241 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 20 02:24:32.796022 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 20 02:24:33.349401 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (704)
Jan 20 02:24:33.413745 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2
Jan 20 02:24:33.413828 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 02:24:33.647810 kernel: BTRFS info (device vda6): turning on async discard
Jan 20 02:24:33.647910 kernel: BTRFS info (device vda6): enabling free space tree
Jan 20 02:24:33.846769 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2
Jan 20 02:24:33.988865 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 20 02:24:34.054066 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
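verity-setup.service maps /dev/mapper/usr with dm-verity: every 4 KiB block of the /usr partition is hashed, the hashes are hashed again level by level, and the resulting Merkle root must equal the verity.usrhash= value on the kernel command line. A deliberately simplified, single-level sketch of the per-block primitive; usr.img and the zero salt are hypothetical stand-ins, and the real tree is multi-level with a format-defined salt placement:

import hashlib

salt = bytes.fromhex("00" * 32)     # hypothetical salt
leaves = []
with open("usr.img", "rb") as img:  # hypothetical copy of the /usr partition
    while block := img.read(4096):
        leaves.append(hashlib.sha256(salt + block).digest())
# in real dm-verity these leaves are grouped and hashed again, level by
# level, until a single root digest remains - the verity.usrhash value
print(len(leaves), "leaf hashes")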
Jan 20 02:24:37.220029 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 02:24:37.476360 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 02:24:38.138176 systemd-networkd[847]: lo: Link UP
Jan 20 02:24:38.139177 systemd-networkd[847]: lo: Gained carrier
Jan 20 02:24:38.149201 systemd-networkd[847]: Enumeration completed
Jan 20 02:24:38.162349 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 02:24:38.171464 systemd-networkd[847]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 02:24:38.171473 systemd-networkd[847]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 02:24:38.177465 systemd-networkd[847]: eth0: Link UP
Jan 20 02:24:38.222155 systemd-networkd[847]: eth0: Gained carrier
Jan 20 02:24:38.222179 systemd-networkd[847]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 02:24:38.238277 systemd[1]: Reached target network.target - Network.
Jan 20 02:24:38.446288 systemd-networkd[847]: eth0: DHCPv4 address 10.0.0.101/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 02:24:39.011136 ignition[768]: Ignition 2.22.0
Jan 20 02:24:39.018109 ignition[768]: Stage: fetch-offline
Jan 20 02:24:39.018280 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Jan 20 02:24:39.018299 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:24:39.018860 ignition[768]: parsed url from cmdline: ""
Jan 20 02:24:39.018866 ignition[768]: no config URL provided
Jan 20 02:24:39.018949 ignition[768]: reading system config file "/usr/lib/ignition/user.ign"
Jan 20 02:24:39.018964 ignition[768]: no config at "/usr/lib/ignition/user.ign"
Jan 20 02:24:39.019033 ignition[768]: op(1): [started] loading QEMU firmware config module
Jan 20 02:24:39.019040 ignition[768]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 20 02:24:39.212954 ignition[768]: op(1): [finished] loading QEMU firmware config module
Jan 20 02:24:39.747423 systemd-networkd[847]: eth0: Gained IPv6LL
Jan 20 02:24:40.817890 ignition[768]: parsing config with SHA512: 686c4a419b7966c1cf64c2736f66b5c1db8814e9552124b5cdd8045771bf478c4e5e236e39976b782be3faa7f686ef81de4e2979a6ae9912c41cf69db5e16904
Jan 20 02:24:41.587430 unknown[768]: fetched base config from "system"
Jan 20 02:24:41.587450 unknown[768]: fetched user config from "qemu"
Jan 20 02:24:41.602505 ignition[768]: fetch-offline: fetch-offline passed
Jan 20 02:24:41.603009 ignition[768]: Ignition finished successfully
Jan 20 02:24:41.742276 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 02:24:41.825211 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 20 02:24:41.849836 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 20 02:24:44.545409 ignition[855]: Ignition 2.22.0
Jan 20 02:24:44.547916 ignition[855]: Stage: kargs
Jan 20 02:24:44.561079 ignition[855]: no configs at "/usr/lib/ignition/base.d"
Jan 20 02:24:44.561099 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:24:44.601257 ignition[855]: kargs: kargs passed
Jan 20 02:24:44.601367 ignition[855]: Ignition finished successfully
Jan 20 02:24:44.813048 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 20 02:24:44.922270 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 20 02:24:46.103499 ignition[864]: Ignition 2.22.0
Jan 20 02:24:46.190131 ignition[864]: Stage: disks
Jan 20 02:24:46.191500 ignition[864]: no configs at "/usr/lib/ignition/base.d"
Jan 20 02:24:46.191521 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:24:46.284131 ignition[864]: disks: disks passed
Jan 20 02:24:46.306217 ignition[864]: Ignition finished successfully
Jan 20 02:24:46.440312 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 20 02:24:46.509515 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 20 02:24:46.643267 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 20 02:24:46.717363 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 02:24:46.717552 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 02:24:46.867164 systemd[1]: Reached target basic.target - Basic System.
Jan 20 02:24:47.031910 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 20 02:24:47.616783 systemd-fsck[874]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jan 20 02:24:47.702837 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 20 02:24:47.823421 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 20 02:24:50.326087 kernel: EXT4-fs (vda9): mounted filesystem d87587c2-84ee-4a64-a55e-c6773c94f548 r/w with ordered data mode. Quota mode: none.
Jan 20 02:24:50.343018 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 20 02:24:50.367199 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 20 02:24:50.480015 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 02:24:50.549965 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 20 02:24:50.587989 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 20 02:24:50.588087 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 20 02:24:50.588135 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 02:24:50.884092 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 20 02:24:50.921917 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (883)
Jan 20 02:24:50.973875 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2
Jan 20 02:24:51.012140 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 02:24:51.035264 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 20 02:24:51.245362 kernel: BTRFS info (device vda6): turning on async discard
Jan 20 02:24:51.245459 kernel: BTRFS info (device vda6): enabling free space tree
Jan 20 02:24:51.283040 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 02:24:51.763067 initrd-setup-root[907]: cut: /sysroot/etc/passwd: No such file or directory
Jan 20 02:24:51.959814 initrd-setup-root[914]: cut: /sysroot/etc/group: No such file or directory
Jan 20 02:24:52.122242 initrd-setup-root[921]: cut: /sysroot/etc/shadow: No such file or directory
Jan 20 02:24:52.267100 initrd-setup-root[928]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 20 02:24:53.782218 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 20 02:24:53.833051 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 20 02:24:53.935179 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 20 02:24:54.089099 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 20 02:24:54.158449 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2
Jan 20 02:24:54.536926 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 20 02:24:54.774869 ignition[996]: INFO : Ignition 2.22.0
Jan 20 02:24:54.774869 ignition[996]: INFO : Stage: mount
Jan 20 02:24:54.774869 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 02:24:54.774869 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:24:54.957150 ignition[996]: INFO : mount: mount passed
Jan 20 02:24:54.957150 ignition[996]: INFO : Ignition finished successfully
Jan 20 02:24:54.963477 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 20 02:24:55.153090 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 20 02:24:55.350406 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 02:24:55.613439 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1010)
Jan 20 02:24:55.664900 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2
Jan 20 02:24:55.664984 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 02:24:55.882951 kernel: BTRFS info (device vda6): turning on async discard
Jan 20 02:24:55.883028 kernel: BTRFS info (device vda6): enabling free space tree
Jan 20 02:24:55.921530 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 02:24:56.389462 ignition[1027]: INFO : Ignition 2.22.0
Jan 20 02:24:56.389462 ignition[1027]: INFO : Stage: files
Jan 20 02:24:56.389462 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 02:24:56.389462 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:24:56.570540 ignition[1027]: DEBUG : files: compiled without relabeling support, skipping
Jan 20 02:24:56.631150 ignition[1027]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 20 02:24:56.631150 ignition[1027]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 20 02:24:56.776242 ignition[1027]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 20 02:24:56.776242 ignition[1027]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 20 02:24:56.776242 ignition[1027]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 20 02:24:56.765167 unknown[1027]: wrote ssh authorized keys file for user: core
Jan 20 02:24:57.043009 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 20 02:24:57.043009 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 20 02:24:57.431342 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 20 02:24:58.196539 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 20 02:24:58.196539 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 20 02:24:58.420490 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 20 02:24:58.420490 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 02:24:58.420490 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 02:24:58.420490 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 02:24:58.420490 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 02:24:58.420490 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 02:24:58.420490 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 02:24:58.420490 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 02:24:58.420490 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 02:24:58.420490 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 20 02:24:58.420490 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 20 02:24:58.420490 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 20 02:24:58.420490 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jan 20 02:24:59.565248 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 20 02:25:01.263699 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 20 02:25:01.263699 ignition[1027]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 20 02:25:01.449538 ignition[1027]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 02:25:01.449538 ignition[1027]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 02:25:01.449538 ignition[1027]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 20 02:25:01.449538 ignition[1027]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 20 02:25:01.449538 ignition[1027]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 02:25:01.449538 ignition[1027]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 02:25:01.449538 ignition[1027]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 20 02:25:01.449538 ignition[1027]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 20 02:25:01.888040 ignition[1027]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 02:25:01.888040 ignition[1027]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 02:25:01.888040 ignition[1027]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 20 02:25:01.888040 ignition[1027]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 20 02:25:01.888040 ignition[1027]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 20 02:25:01.888040 ignition[1027]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 02:25:01.888040 ignition[1027]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 02:25:01.888040 ignition[1027]: INFO : files: files passed
Jan 20 02:25:01.888040 ignition[1027]: INFO : Ignition finished successfully
Jan 20 02:25:01.840305 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 20 02:25:02.323206 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 20 02:25:02.359434 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 20 02:25:02.556516 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 20 02:25:02.557044 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 20 02:25:02.695033 initrd-setup-root-after-ignition[1055]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 20 02:25:02.746523 initrd-setup-root-after-ignition[1058]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 02:25:02.746523 initrd-setup-root-after-ignition[1058]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 02:25:02.719255 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 02:25:02.956470 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 02:25:02.801180 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 20 02:25:03.152325 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 20 02:25:03.746465 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 20 02:25:03.747103 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 20 02:25:03.794550 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 20 02:25:03.810946 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 20 02:25:03.811105 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 20 02:25:03.965754 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 20 02:25:04.683325 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 02:25:04.768217 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 20 02:25:05.229479 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 20 02:25:05.396395 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 02:25:05.589409 systemd[1]: Stopped target timers.target - Timer Units.
Jan 20 02:25:05.736743 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 20 02:25:05.737965 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 02:25:06.427482 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 20 02:25:06.522300 systemd[1]: Stopped target basic.target - Basic System.
Jan 20 02:25:06.609499 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 20 02:25:06.934361 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 02:25:07.103905 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 20 02:25:07.330246 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 20 02:25:07.460078 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 20 02:25:07.741332 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 02:25:07.840319 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 20 02:25:08.219985 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 20 02:25:08.353019 systemd[1]: Stopped target swap.target - Swaps.
Jan 20 02:25:08.413327 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 20 02:25:08.464135 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 02:25:08.678480 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 20 02:25:08.719424 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 02:25:08.833400 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 20 02:25:08.840024 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 02:25:08.956491 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 20 02:25:08.958096 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 20 02:25:09.062512 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 20 02:25:09.070470 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 02:25:09.243478 systemd[1]: Stopped target paths.target - Path Units.
Jan 20 02:25:09.257188 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 20 02:25:09.266284 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 02:25:09.494960 systemd[1]: Stopped target slices.target - Slice Units.
Jan 20 02:25:09.535338 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 20 02:25:09.668263 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 20 02:25:09.689896 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 02:25:09.828420 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 20 02:25:09.835441 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 02:25:09.949529 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 20 02:25:09.964781 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 02:25:10.199052 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 20 02:25:10.209226 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 20 02:25:10.407946 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 20 02:25:10.499018 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 20 02:25:10.499367 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 02:25:10.806515 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 20 02:25:10.867183 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 20 02:25:10.867429 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 02:25:11.114114 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 20 02:25:11.114401 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 02:25:11.394744 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 20 02:25:11.407102 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 20 02:25:11.538779 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 20 02:25:11.675364 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 20 02:25:11.677390 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 20 02:25:11.964536 ignition[1082]: INFO : Ignition 2.22.0
Jan 20 02:25:11.964536 ignition[1082]: INFO : Stage: umount
Jan 20 02:25:12.024386 ignition[1082]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 02:25:12.024386 ignition[1082]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:25:12.139540 ignition[1082]: INFO : umount: umount passed
Jan 20 02:25:12.139540 ignition[1082]: INFO : Ignition finished successfully
Jan 20 02:25:12.143402 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 20 02:25:12.157333 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 20 02:25:12.178193 systemd[1]: Stopped target network.target - Network.
Jan 20 02:25:12.178262 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 20 02:25:12.178337 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 20 02:25:12.178411 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 20 02:25:12.178470 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 20 02:25:12.178543 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 20 02:25:12.178893 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 20 02:25:12.178981 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 20 02:25:12.179036 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 20 02:25:12.179114 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 20 02:25:12.179173 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 20 02:25:12.183099 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 20 02:25:12.541069 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 20 02:25:12.885495 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 20 02:25:12.892154 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 20 02:25:13.246973 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 20 02:25:13.247475 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 20 02:25:13.415441 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 20 02:25:13.630367 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 20 02:25:13.660132 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 20 02:25:13.724347 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 20 02:25:13.724455 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 02:25:14.098559 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 20 02:25:14.154350 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 20 02:25:14.154497 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 02:25:14.250095 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 02:25:14.250217 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 20 02:25:14.363136 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 02:25:14.363332 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 20 02:25:14.738302 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 20 02:25:14.739023 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 02:25:14.865135 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 02:25:14.941356 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 20 02:25:14.950956 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 20 02:25:15.240530 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 20 02:25:15.303218 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 02:25:15.368959 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 20 02:25:15.369033 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 20 02:25:15.369133 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 20 02:25:15.369177 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 02:25:15.369236 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 20 02:25:15.369296 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 02:25:15.369458 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 20 02:25:15.369518 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 20 02:25:15.369967 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 20 02:25:15.370026 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 02:25:15.392188 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 20 02:25:15.397382 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 20 02:25:15.397485 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 02:25:15.834330 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 20 02:25:15.836870 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 02:25:16.028000 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 02:25:16.028103 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 02:25:16.410748 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jan 20 02:25:16.410945 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 20 02:25:16.411011 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 20 02:25:16.414084 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 20 02:25:16.414220 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 20 02:25:16.810881 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 20 02:25:16.811477 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 20 02:25:16.954468 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 20 02:25:17.005180 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 20 02:25:17.282501 systemd[1]: Switching root.
Jan 20 02:25:17.524235 systemd-journald[201]: Journal stopped
Jan 20 02:25:38.874218 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Jan 20 02:25:38.874301 kernel: SELinux: policy capability network_peer_controls=1
Jan 20 02:25:38.874322 kernel: SELinux: policy capability open_perms=1
Jan 20 02:25:38.885166 kernel: SELinux: policy capability extended_socket_class=1
Jan 20 02:25:38.885190 kernel: SELinux: policy capability always_check_network=0
Jan 20 02:25:38.885206 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 20 02:25:38.885220 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 20 02:25:38.885239 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 20 02:25:38.885255 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 20 02:25:38.885269 kernel: SELinux: policy capability userspace_initial_context=0
Jan 20 02:25:38.885283 kernel: audit: type=1403 audit(1768875919.282:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 20 02:25:38.885300 systemd[1]: Successfully loaded SELinux policy in 733.436ms.
Jan 20 02:25:38.885329 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 73.806ms.
Jan 20 02:25:38.885346 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 20 02:25:38.885362 systemd[1]: Detected virtualization kvm.
Jan 20 02:25:38.885377 systemd[1]: Detected architecture x86-64.
Jan 20 02:25:38.885395 systemd[1]: Detected first boot.
Jan 20 02:25:38.885410 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 02:25:38.885425 zram_generator::config[1132]: No configuration found.
Jan 20 02:25:38.885441 kernel: Guest personality initialized and is inactive
Jan 20 02:25:38.885457 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 20 02:25:38.885477 kernel: Initialized host personality
Jan 20 02:25:38.885491 kernel: NET: Registered PF_VSOCK protocol family
Jan 20 02:25:38.885505 systemd[1]: Populated /etc with preset unit settings.
Jan 20 02:25:38.885521 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 20 02:25:38.885539 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 20 02:25:38.885555 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 20 02:25:38.885885 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 20 02:25:38.885906 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 20 02:25:38.889903 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 20 02:25:38.889924 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 20 02:25:38.889940 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 20 02:25:38.889956 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 20 02:25:38.890085 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 20 02:25:38.890101 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 20 02:25:38.890117 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 20 02:25:38.890132 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 02:25:38.890147 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 02:25:38.890162 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 20 02:25:38.890179 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 20 02:25:38.890195 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 20 02:25:38.890214 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 02:25:38.890232 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 20 02:25:38.890249 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 02:25:38.890265 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 02:25:38.890280 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 20 02:25:38.890295 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 20 02:25:38.890310 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 20 02:25:38.890325 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 20 02:25:38.890478 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 02:25:38.890503 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 02:25:38.890518 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 02:25:38.890532 systemd[1]: Reached target swap.target - Swaps.
Jan 20 02:25:38.890547 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 20 02:25:38.890758 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 20 02:25:38.890775 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 20 02:25:38.890790 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 02:25:38.890806 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 02:25:38.890821 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 02:25:38.890933 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 20 02:25:38.890951 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 20 02:25:38.898109 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 20 02:25:38.898132 systemd[1]: Mounting media.mount - External Media Directory...
Jan 20 02:25:38.898155 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:25:38.898171 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 20 02:25:38.898186 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 20 02:25:38.898201 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 20 02:25:38.898216 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 20 02:25:38.898339 systemd[1]: Reached target machines.target - Containers.
Jan 20 02:25:38.898356 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 20 02:25:38.898371 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 02:25:38.898387 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 02:25:38.898402 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 20 02:25:38.898417 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 02:25:38.898433 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 02:25:38.898448 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 02:25:38.898557 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 20 02:25:38.898768 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 02:25:38.898785 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 20 02:25:38.898800 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 20 02:25:38.898815 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 20 02:25:38.898830 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 20 02:25:38.898845 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 20 02:25:38.898867 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 02:25:38.904855 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 02:25:38.904881 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 02:25:38.905104 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 20 02:25:38.905127 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 20 02:25:38.905147 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 20 02:25:38.905165 kernel: fuse: init (API version 7.41)
Jan 20 02:25:38.905184 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 02:25:38.905201 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 20 02:25:38.905219 systemd[1]: Stopped verity-setup.service.
Jan 20 02:25:38.905452 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:25:38.905824 systemd-journald[1217]: Collecting audit messages is disabled.
Jan 20 02:25:38.905864 systemd-journald[1217]: Journal started
Jan 20 02:25:38.911843 systemd-journald[1217]: Runtime Journal (/run/log/journal/354e1f0504204e29a8e06a72e4e1eb37) is 6M, max 48.1M, 42.1M free.
Jan 20 02:25:27.962782 systemd[1]: Queued start job for default target multi-user.target.
Jan 20 02:25:28.058474 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 20 02:25:28.090503 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 20 02:25:28.098558 systemd[1]: systemd-journald.service: Consumed 4.238s CPU time.
Jan 20 02:25:40.241759 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1050781277 wd_nsec: 1050780799
Jan 20 02:25:40.532819 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 02:25:40.566780 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 20 02:25:40.713782 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 20 02:25:40.916792 systemd[1]: Mounted media.mount - External Media Directory.
Jan 20 02:25:41.014301 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 20 02:25:41.127462 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 20 02:25:41.228529 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 20 02:25:41.473372 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 20 02:25:41.712369 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 02:25:41.795228 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 02:25:41.800775 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 20 02:25:41.885964 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 02:25:41.886903 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 02:25:41.965767 kernel: ACPI: bus type drm_connector registered
Jan 20 02:25:42.017895 kernel: loop: module loaded
Jan 20 02:25:42.017556 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 02:25:42.029399 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 02:25:42.120858 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 02:25:42.121468 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 02:25:42.193885 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 20 02:25:42.199551 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 20 02:25:42.300847 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 02:25:42.301509 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 02:25:42.424847 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 02:25:42.519460 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 02:25:42.584849 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 20 02:25:42.659152 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 20 02:25:42.743081 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 02:25:43.144850 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 02:25:43.329822 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 20 02:25:43.489378 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 20 02:25:43.628360 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 20 02:25:43.628526 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 02:25:43.787448 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 20 02:25:43.922264 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 20 02:25:44.028964 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 02:25:44.089305 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 20 02:25:44.209520 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 20 02:25:44.286319 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 02:25:44.364978 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 20 02:25:44.450420 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 02:25:44.471335 systemd-journald[1217]: Time spent on flushing to /var/log/journal/354e1f0504204e29a8e06a72e4e1eb37 is 1.077229s for 1065 entries.
Jan 20 02:25:44.471335 systemd-journald[1217]: System Journal (/var/log/journal/354e1f0504204e29a8e06a72e4e1eb37) is 8M, max 195.6M, 187.6M free.
Jan 20 02:25:45.719314 systemd-journald[1217]: Received client request to flush runtime journal.
Jan 20 02:25:44.500453 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 02:25:44.720208 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 20 02:25:44.918222 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 20 02:25:45.020284 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 20 02:25:45.046240 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 20 02:25:45.092938 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 20 02:25:45.153296 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 20 02:25:45.648438 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 20 02:25:45.897782 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 20 02:25:46.192868 kernel: loop0: detected capacity change from 0 to 128560
Jan 20 02:25:46.284506 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 02:25:46.445311 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 20 02:25:46.470121 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 20 02:25:47.306331 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 20 02:25:47.825445 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 20 02:25:47.601765 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 02:25:48.343918 kernel: loop1: detected capacity change from 0 to 224512
Jan 20 02:25:49.989239 kernel: loop2: detected capacity change from 0 to 110984
Jan 20 02:25:50.964017 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Jan 20 02:25:51.029531 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Jan 20 02:25:51.602478 kernel: loop3: detected capacity change from 0 to 128560
Jan 20 02:25:52.305548 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 02:25:53.467884 kernel: loop4: detected capacity change from 0 to 224512
Jan 20 02:25:54.769970 kernel: loop5: detected capacity change from 0 to 110984
Jan 20 02:25:55.328831 (sd-merge)[1275]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 20 02:25:55.377424 (sd-merge)[1275]: Merged extensions into '/usr'.
Jan 20 02:25:56.109782 systemd[1]: Reload requested from client PID 1252 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 20 02:25:56.110441 systemd[1]: Reloading...
Jan 20 02:26:00.630026 zram_generator::config[1302]: No configuration found.
Jan 20 02:26:08.682420 systemd[1]: Reloading finished in 12501 ms.
Jan 20 02:26:09.891940 ldconfig[1247]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 20 02:26:11.803994 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 20 02:26:11.901314 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 20 02:26:12.137146 systemd[1]: Starting ensure-sysext.service...
Jan 20 02:26:12.256083 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 02:26:12.461270 systemd[1]: Reload requested from client PID 1341 ('systemctl') (unit ensure-sysext.service)...
Jan 20 02:26:12.461300 systemd[1]: Reloading...
Jan 20 02:26:12.720756 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 20 02:26:12.723863 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 20 02:26:12.734044 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 20 02:26:12.738819 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 20 02:26:12.740933 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 20 02:26:12.741468 systemd-tmpfiles[1342]: ACLs are not supported, ignoring.
Jan 20 02:26:12.741703 systemd-tmpfiles[1342]: ACLs are not supported, ignoring.
Jan 20 02:26:12.829798 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 02:26:12.829816 systemd-tmpfiles[1342]: Skipping /boot
Jan 20 02:26:12.954263 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 02:26:12.954338 systemd-tmpfiles[1342]: Skipping /boot
Jan 20 02:26:13.281764 zram_generator::config[1369]: No configuration found.
Jan 20 02:26:17.748907 systemd[1]: Reloading finished in 5286 ms.
Jan 20 02:26:17.855424 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 20 02:26:18.007854 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 02:26:18.167438 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 20 02:26:18.228935 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 20 02:26:18.323133 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 20 02:26:18.502461 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 02:26:18.598411 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 02:26:18.783945 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 20 02:26:18.909029 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 20 02:26:19.024787 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:26:19.025068 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 02:26:19.104715 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 02:26:19.234007 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 02:26:19.354028 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 02:26:19.398269 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 02:26:19.398541 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 02:26:19.398780 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:26:19.422099 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:26:19.422538 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 02:26:19.423007 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 02:26:19.444448 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 02:26:19.459500 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:26:19.554155 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 20 02:26:19.651936 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 02:26:19.652501 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 02:26:19.729050 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 02:26:19.753538 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 02:26:19.816175 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 02:26:19.832019 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 02:26:19.859844 systemd-udevd[1412]: Using default interface naming scheme 'v255'.
Jan 20 02:26:19.885342 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 20 02:26:20.093459 augenrules[1438]: No rules
Jan 20 02:26:20.105520 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 02:26:20.110909 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 20 02:26:20.234827 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:26:20.306778 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 20 02:26:20.371038 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 02:26:20.409741 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 02:26:20.524098 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 02:26:20.691449 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 02:26:20.828523 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 02:26:20.888891 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 02:26:20.991185 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 02:26:21.004692 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 20 02:26:21.014292 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:26:21.070156 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 20 02:26:21.091151 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 02:26:21.105057 augenrules[1448]: /sbin/augenrules: No change
Jan 20 02:26:21.117878 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 20 02:26:21.149369 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 02:26:21.149974 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 02:26:21.162398 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 02:26:21.163304 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 02:26:21.195924 augenrules[1487]: No rules
Jan 20 02:26:21.205166 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 02:26:21.210362 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 20 02:26:21.246151 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 02:26:21.268129 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 02:26:21.405960 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 02:26:21.406397 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 02:26:21.533126 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 20 02:26:21.717865 systemd[1]: Finished ensure-sysext.service.
Jan 20 02:26:21.849786 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 02:26:21.895413 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 02:26:21.895530 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 02:26:21.934956 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 20 02:26:21.988774 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 20 02:26:22.623468 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 20 02:26:23.089320 systemd-resolved[1411]: Positive Trust Anchors:
Jan 20 02:26:23.089419 systemd-resolved[1411]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 02:26:23.089459 systemd-resolved[1411]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 02:26:23.145417 systemd-resolved[1411]: Defaulting to hostname 'linux'.
Jan 20 02:26:23.155964 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 02:26:23.207294 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 02:26:23.420420 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 20 02:26:23.478183 kernel: ACPI: button: Power Button [PWRF]
Jan 20 02:26:23.502293 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 02:26:23.652087 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 20 02:26:23.957666 kernel: mousedev: PS/2 mouse device common for all mice
Jan 20 02:26:24.468872 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 20 02:26:24.591510 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 20 02:26:24.618000 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 20 02:26:24.634513 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 20 02:26:24.783854 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 02:26:24.882536 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 20 02:26:24.883178 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 20 02:26:24.964209 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 20 02:26:25.092466 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 20 02:26:25.153996 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 20 02:26:25.196042 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 20 02:26:25.200405 systemd[1]: Reached target paths.target - Path Units.
Jan 20 02:26:25.207074 systemd[1]: Reached target time-set.target - System Time Set.
Jan 20 02:26:25.219222 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 20 02:26:25.266424 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 20 02:26:25.336862 systemd[1]: Reached target timers.target - Timer Units. Jan 20 02:26:25.526465 systemd-networkd[1509]: lo: Link UP Jan 20 02:26:25.526550 systemd-networkd[1509]: lo: Gained carrier Jan 20 02:26:25.551333 systemd-networkd[1509]: Enumeration completed Jan 20 02:26:25.559390 systemd-networkd[1509]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 02:26:25.559399 systemd-networkd[1509]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 02:26:25.566073 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 02:26:25.624547 systemd-networkd[1509]: eth0: Link UP Jan 20 02:26:25.625016 systemd-networkd[1509]: eth0: Gained carrier Jan 20 02:26:25.625213 systemd-networkd[1509]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 02:26:25.744843 systemd-networkd[1509]: eth0: DHCPv4 address 10.0.0.101/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 02:26:25.750152 systemd-timesyncd[1510]: Network configuration changed, trying to establish connection. Jan 20 02:26:26.833721 systemd-timesyncd[1510]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 02:26:26.833962 systemd-timesyncd[1510]: Initial clock synchronization to Tue 2026-01-20 02:26:26.833557 UTC. Jan 20 02:26:26.915065 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 02:26:26.935742 systemd-resolved[1411]: Clock change detected. Flushing caches. Jan 20 02:26:27.274199 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 02:26:27.402820 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 02:26:27.507767 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 02:26:27.621886 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 02:26:27.715863 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 02:26:27.851820 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 02:26:27.923620 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 02:26:28.073920 systemd-networkd[1509]: eth0: Gained IPv6LL Jan 20 02:26:28.269669 systemd[1]: Reached target network.target - Network. Jan 20 02:26:28.326050 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 02:26:28.382802 systemd[1]: Reached target basic.target - Basic System. Jan 20 02:26:28.409757 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 02:26:28.409899 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 02:26:28.442042 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 02:26:28.528351 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 02:26:28.606181 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 02:26:28.753012 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 02:26:28.849914 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
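The networkd entries above show eth0 being matched by the catch-all /usr/lib/systemd/network/zz-default.network and leasing 10.0.0.101/16 with gateway 10.0.0.1; timesyncd then reaches the same host on port 123, and the apparent jump to 02:26:26 is the initial clock synchronization, which resolved notices and answers by flushing its caches. A quick standard-library check that the reported gateway is on-link for that lease (values copied from the log):

    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.101/16")
    gateway = ipaddress.ip_address("10.0.0.1")
    print(iface.network)             # 10.0.0.0/16
    print(gateway in iface.network)  # True: the gateway is directly reachable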
Jan 20 02:26:28.902169 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 02:26:28.927696 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 20 02:26:30.206620 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 02:26:30.287522 jq[1545]: false Jan 20 02:26:30.433765 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 02:26:30.590951 extend-filesystems[1546]: Found /dev/vda6 Jan 20 02:26:30.689763 extend-filesystems[1546]: Found /dev/vda9 Jan 20 02:26:30.760072 oslogin_cache_refresh[1547]: Refreshing passwd entry cache Jan 20 02:26:30.770964 extend-filesystems[1546]: Checking size of /dev/vda9 Jan 20 02:26:30.965333 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Refreshing passwd entry cache Jan 20 02:26:30.965333 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Failure getting users, quitting Jan 20 02:26:30.965333 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 02:26:30.965333 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Refreshing group entry cache Jan 20 02:26:30.891858 oslogin_cache_refresh[1547]: Failure getting users, quitting Jan 20 02:26:30.862281 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 02:26:30.891893 oslogin_cache_refresh[1547]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 02:26:30.959646 oslogin_cache_refresh[1547]: Refreshing group entry cache Jan 20 02:26:31.011934 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 02:26:31.068774 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Failure getting groups, quitting Jan 20 02:26:31.068774 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 02:26:31.046234 oslogin_cache_refresh[1547]: Failure getting groups, quitting Jan 20 02:26:31.046265 oslogin_cache_refresh[1547]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 02:26:31.120244 extend-filesystems[1546]: Resized partition /dev/vda9 Jan 20 02:26:31.308358 extend-filesystems[1566]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 02:26:31.475790 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 02:26:31.508841 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 20 02:26:31.669653 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 02:26:31.898307 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 02:26:32.100677 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 02:26:32.280710 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 02:26:32.367690 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 02:26:32.398629 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 20 02:26:32.404602 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 20 02:26:32.631117 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 02:26:32.757760 extend-filesystems[1566]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 02:26:32.757760 extend-filesystems[1566]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 02:26:32.757760 extend-filesystems[1566]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 20 02:26:33.029805 extend-filesystems[1546]: Resized filesystem in /dev/vda9 Jan 20 02:26:32.799963 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 02:26:32.872090 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 02:26:33.135685 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 02:26:33.334846 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 02:26:33.337888 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 02:26:33.406038 update_engine[1574]: I20260120 02:26:33.405895 1574 main.cc:92] Flatcar Update Engine starting Jan 20 02:26:33.924019 jq[1575]: true Jan 20 02:26:34.455726 sshd_keygen[1573]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 02:26:34.569280 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 20 02:26:34.569816 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 20 02:26:34.645789 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 02:26:34.659032 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 02:26:34.766927 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 02:26:34.767726 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 02:26:34.852628 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 02:26:35.063061 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 02:26:35.146238 systemd-logind[1567]: Watching system buttons on /dev/input/event2 (Power Button) Jan 20 02:26:35.149529 systemd-logind[1567]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 02:26:35.162948 systemd-logind[1567]: New seat seat0. Jan 20 02:26:35.219953 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 02:26:35.315584 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 02:26:35.595896 (ntainerd)[1593]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 02:26:35.640225 jq[1592]: true Jan 20 02:26:35.984626 tar[1589]: linux-amd64/LICENSE Jan 20 02:26:36.062281 tar[1589]: linux-amd64/helm Jan 20 02:26:36.157567 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 20 02:26:36.563501 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 02:26:36.625125 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 02:26:36.801011 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 02:26:36.939580 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 02:26:37.020700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:26:37.144777 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
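The extend-filesystems output above records an online ext4 grow of the root partition from 553472 to 1864699 blocks of 4 KiB, i.e. roughly 2.1 GiB to 7.1 GiB, the usual first-boot step of expanding the image's filesystem to fill the disk. The arithmetic, for reference:

    # Block counts and the 4 KiB block size are taken from the resize2fs lines above.
    BLOCK = 4096
    old_bytes = 553472 * BLOCK
    new_bytes = 1864699 * BLOCK
    GiB = 1024 ** 3
    print(f"{old_bytes / GiB:.2f} GiB -> {new_bytes / GiB:.2f} GiB")  # 2.11 -> 7.11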
Jan 20 02:26:37.350980 systemd[1]: Started sshd@0-10.0.0.101:22-10.0.0.1:41114.service - OpenSSH per-connection server daemon (10.0.0.1:41114). Jan 20 02:26:37.367654 update_engine[1574]: I20260120 02:26:37.360121 1574 update_check_scheduler.cc:74] Next update check in 11m41s Jan 20 02:26:37.368113 bash[1626]: Updated "/home/core/.ssh/authorized_keys" Jan 20 02:26:37.299028 dbus-daemon[1543]: [system] SELinux support is enabled Jan 20 02:26:37.447858 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 02:26:37.594213 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 02:26:37.742098 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 02:26:37.751997 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 02:26:37.752060 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 02:26:37.844712 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 02:26:37.844914 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 02:26:37.845691 dbus-daemon[1543]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 20 02:26:37.967897 systemd[1]: Started update-engine.service - Update Engine. Jan 20 02:26:38.098088 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 02:26:38.819008 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 02:26:38.841990 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 02:26:39.013983 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 02:26:39.377142 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 02:26:39.381531 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 02:26:39.400871 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 02:26:40.376891 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 02:26:40.930013 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 02:26:40.980798 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 02:26:40.998151 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 02:26:41.018860 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 02:26:42.034270 locksmithd[1632]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 02:26:42.182602 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 41114 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:26:42.229362 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:26:42.274147 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 02:26:42.301319 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 02:26:42.821563 systemd-logind[1567]: New session 1 of user core. Jan 20 02:26:43.536009 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jan 20 02:26:43.640395 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 02:26:44.189000 (systemd)[1669]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 02:26:44.316936 systemd-logind[1567]: New session c1 of user core. Jan 20 02:26:47.093597 systemd[1669]: Queued start job for default target default.target. Jan 20 02:26:47.097853 systemd[1669]: Created slice app.slice - User Application Slice. Jan 20 02:26:47.097892 systemd[1669]: Reached target paths.target - Paths. Jan 20 02:26:47.097967 systemd[1669]: Reached target timers.target - Timers. Jan 20 02:26:47.107680 systemd[1669]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 02:26:47.990051 systemd[1669]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 02:26:47.990381 systemd[1669]: Reached target sockets.target - Sockets. Jan 20 02:26:47.990617 systemd[1669]: Reached target basic.target - Basic System. Jan 20 02:26:47.990686 systemd[1669]: Reached target default.target - Main User Target. Jan 20 02:26:47.990742 systemd[1669]: Startup finished in 3.209s. Jan 20 02:26:47.998286 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 02:26:48.103922 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 02:26:48.345922 containerd[1593]: time="2026-01-20T02:26:48Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 02:26:48.353147 containerd[1593]: time="2026-01-20T02:26:48.349966672Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 20 02:26:48.883601 containerd[1593]: time="2026-01-20T02:26:48.872886653Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="164.647µs" Jan 20 02:26:48.890696 containerd[1593]: time="2026-01-20T02:26:48.887825617Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 02:26:48.890696 containerd[1593]: time="2026-01-20T02:26:48.887972342Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 02:26:48.890696 containerd[1593]: time="2026-01-20T02:26:48.888917045Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 02:26:48.890696 containerd[1593]: time="2026-01-20T02:26:48.889088194Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 02:26:48.890696 containerd[1593]: time="2026-01-20T02:26:48.889683015Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 02:26:48.890696 containerd[1593]: time="2026-01-20T02:26:48.889869452Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 02:26:48.890696 containerd[1593]: time="2026-01-20T02:26:48.889956425Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 02:26:48.928938 containerd[1593]: time="2026-01-20T02:26:48.928571591Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 02:26:48.935047 containerd[1593]: time="2026-01-20T02:26:48.934040719Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 02:26:48.935047 containerd[1593]: time="2026-01-20T02:26:48.934164240Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 02:26:48.935047 containerd[1593]: time="2026-01-20T02:26:48.934178627Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 02:26:48.949628 containerd[1593]: time="2026-01-20T02:26:48.939379504Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 02:26:48.960928 containerd[1593]: time="2026-01-20T02:26:48.960870588Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 02:26:48.961763 containerd[1593]: time="2026-01-20T02:26:48.961730774Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 02:26:48.969581 containerd[1593]: time="2026-01-20T02:26:48.966798313Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 02:26:48.969581 containerd[1593]: time="2026-01-20T02:26:48.968384553Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 02:26:48.970327 containerd[1593]: time="2026-01-20T02:26:48.970297664Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 02:26:48.971602 containerd[1593]: time="2026-01-20T02:26:48.971580458Z" level=info msg="metadata content store policy set" policy=shared Jan 20 02:26:48.972378 systemd[1]: Started sshd@1-10.0.0.101:22-10.0.0.1:47528.service - OpenSSH per-connection server daemon (10.0.0.1:47528). 
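In the containerd startup above, most snapshotter plugins are skipped for the reasons stated in each entry: blockfile has no scratch file generator, devmapper is unconfigured, zfs has no dataset, and btrfs is skipped because /var/lib/containerd sits on ext4, leaving overlayfs as the active snapshotter. The btrfs check amounts to looking up the filesystem type backing that directory; a simplified sketch of the same test (longest-prefix match over /proc/mounts, ignoring escaped mount-point names):

    def fs_type(path="/var/lib/containerd"):
        best_mount, best_type = "", "unknown"
        with open("/proc/mounts") as mounts:
            for line in mounts:
                _, mountpoint, fstype, *_ = line.split()
                if path.startswith(mountpoint) and len(mountpoint) > len(best_mount):
                    best_mount, best_type = mountpoint, fstype
        return best_type

    print(fs_type())  # "ext4" on this host, hence "skip plugin" for btrfs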
Jan 20 02:26:49.088113 containerd[1593]: time="2026-01-20T02:26:49.077672192Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 02:26:49.088113 containerd[1593]: time="2026-01-20T02:26:49.078123665Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 02:26:49.088113 containerd[1593]: time="2026-01-20T02:26:49.078158169Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 02:26:49.088113 containerd[1593]: time="2026-01-20T02:26:49.078182755Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 02:26:49.088113 containerd[1593]: time="2026-01-20T02:26:49.078284966Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 02:26:49.088113 containerd[1593]: time="2026-01-20T02:26:49.081121831Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 02:26:49.088113 containerd[1593]: time="2026-01-20T02:26:49.081170442Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 02:26:49.088113 containerd[1593]: time="2026-01-20T02:26:49.081646780Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 02:26:49.088113 containerd[1593]: time="2026-01-20T02:26:49.081674502Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 02:26:49.088113 containerd[1593]: time="2026-01-20T02:26:49.081694389Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 02:26:49.088113 containerd[1593]: time="2026-01-20T02:26:49.081712503Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 02:26:49.088113 containerd[1593]: time="2026-01-20T02:26:49.081735636Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 02:26:49.088113 containerd[1593]: time="2026-01-20T02:26:49.082080059Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 02:26:49.254684 containerd[1593]: time="2026-01-20T02:26:49.089963855Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 02:26:49.254684 containerd[1593]: time="2026-01-20T02:26:49.090046379Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 02:26:49.254684 containerd[1593]: time="2026-01-20T02:26:49.090065996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 02:26:49.254684 containerd[1593]: time="2026-01-20T02:26:49.090081554Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 02:26:49.254684 containerd[1593]: time="2026-01-20T02:26:49.090096422Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 02:26:49.254684 containerd[1593]: time="2026-01-20T02:26:49.090111481Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 02:26:49.254684 containerd[1593]: time="2026-01-20T02:26:49.090125135Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 
20 02:26:49.254684 containerd[1593]: time="2026-01-20T02:26:49.090138791Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 02:26:49.254684 containerd[1593]: time="2026-01-20T02:26:49.090152417Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 02:26:49.254684 containerd[1593]: time="2026-01-20T02:26:49.090334537Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 02:26:49.254684 containerd[1593]: time="2026-01-20T02:26:49.097194271Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 02:26:49.254684 containerd[1593]: time="2026-01-20T02:26:49.102096100Z" level=info msg="Start snapshots syncer" Jan 20 02:26:49.254684 containerd[1593]: time="2026-01-20T02:26:49.102164979Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 02:26:49.255159 containerd[1593]: time="2026-01-20T02:26:49.144154634Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 02:26:49.255159 containerd[1593]: time="2026-01-20T02:26:49.154373728Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 02:26:49.299054 containerd[1593]: time="2026-01-20T02:26:49.159132961Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 02:26:49.299054 containerd[1593]: time="2026-01-20T02:26:49.169691368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 02:26:49.299054 containerd[1593]: time="2026-01-20T02:26:49.169758273Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 02:26:49.299054 containerd[1593]: time="2026-01-20T02:26:49.169776307Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 02:26:49.299054 containerd[1593]: time="2026-01-20T02:26:49.169792698Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 02:26:49.299054 containerd[1593]: time="2026-01-20T02:26:49.169896772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 02:26:49.299054 containerd[1593]: time="2026-01-20T02:26:49.169998411Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 02:26:49.299054 containerd[1593]: time="2026-01-20T02:26:49.170022747Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 02:26:49.299054 containerd[1593]: time="2026-01-20T02:26:49.170563937Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 02:26:49.299054 containerd[1593]: time="2026-01-20T02:26:49.170590767Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 02:26:49.299054 containerd[1593]: time="2026-01-20T02:26:49.170703407Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 02:26:49.299054 containerd[1593]: time="2026-01-20T02:26:49.170809466Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 02:26:49.299054 containerd[1593]: time="2026-01-20T02:26:49.170839562Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 02:26:49.299054 containerd[1593]: time="2026-01-20T02:26:49.170854289Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 02:26:49.299694 containerd[1593]: time="2026-01-20T02:26:49.170874286Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 02:26:49.299694 containerd[1593]: time="2026-01-20T02:26:49.170888483Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 02:26:49.299694 containerd[1593]: time="2026-01-20T02:26:49.170909743Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 02:26:49.299694 containerd[1593]: time="2026-01-20T02:26:49.170945389Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 02:26:49.299694 containerd[1593]: time="2026-01-20T02:26:49.171054563Z" level=info msg="runtime interface created" Jan 20 02:26:49.299694 containerd[1593]: time="2026-01-20T02:26:49.171064401Z" level=info msg="created NRI interface" Jan 20 02:26:49.299694 containerd[1593]: time="2026-01-20T02:26:49.171078498Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 02:26:49.299694 containerd[1593]: time="2026-01-20T02:26:49.171100518Z" level=info msg="Connect containerd service" Jan 20 02:26:49.299694 containerd[1593]: time="2026-01-20T02:26:49.171128621Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 
20 02:26:49.329616 containerd[1593]: time="2026-01-20T02:26:49.329387755Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 02:26:51.596954 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 47528 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:26:51.637311 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:26:51.765133 tar[1589]: linux-amd64/README.md Jan 20 02:26:51.810691 systemd-logind[1567]: New session 2 of user core. Jan 20 02:26:51.921851 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 02:26:51.944132 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 02:26:52.012002 containerd[1593]: time="2026-01-20T02:26:52.011803285Z" level=info msg="Start subscribing containerd event" Jan 20 02:26:52.018137 containerd[1593]: time="2026-01-20T02:26:52.012124805Z" level=info msg="Start recovering state" Jan 20 02:26:52.495548 containerd[1593]: time="2026-01-20T02:26:52.478919839Z" level=info msg="Start event monitor" Jan 20 02:26:52.509217 containerd[1593]: time="2026-01-20T02:26:52.503119679Z" level=info msg="Start cni network conf syncer for default" Jan 20 02:26:52.635069 containerd[1593]: time="2026-01-20T02:26:52.602062028Z" level=info msg="Start streaming server" Jan 20 02:26:53.083825 containerd[1593]: time="2026-01-20T02:26:52.899138280Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 02:26:53.130201 containerd[1593]: time="2026-01-20T02:26:53.108205951Z" level=info msg="runtime interface starting up..." Jan 20 02:26:53.130201 containerd[1593]: time="2026-01-20T02:26:53.115868974Z" level=info msg="starting plugins..." Jan 20 02:26:53.130201 containerd[1593]: time="2026-01-20T02:26:53.116008835Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 02:26:53.130905 containerd[1593]: time="2026-01-20T02:26:52.494658951Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 02:26:54.393120 containerd[1593]: time="2026-01-20T02:26:54.206144005Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 02:26:54.604144 containerd[1593]: time="2026-01-20T02:26:54.591091911Z" level=info msg="containerd successfully booted in 6.249673s" Jan 20 02:26:54.615830 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 02:26:54.790003 sshd[1708]: Connection closed by 10.0.0.1 port 47528 Jan 20 02:26:54.802996 sshd-session[1685]: pam_unix(sshd:session): session closed for user core Jan 20 02:26:55.398975 systemd[1]: Started sshd@2-10.0.0.101:22-10.0.0.1:36074.service - OpenSSH per-connection server daemon (10.0.0.1:36074). Jan 20 02:26:55.401005 systemd[1]: sshd@1-10.0.0.101:22-10.0.0.1:47528.service: Deactivated successfully. Jan 20 02:26:55.596895 systemd[1]: session-2.scope: Deactivated successfully. Jan 20 02:26:55.656124 systemd-logind[1567]: Session 2 logged out. Waiting for processes to exit. Jan 20 02:26:55.781162 systemd-logind[1567]: Removed session 2. 
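The containerd error above ("no network config found in /etc/cni/net.d") is expected at this stage: the CRI plugin loads CNI lazily, and on a node like this the directory is normally populated later by a pod-network add-on. Purely as an illustration of the file format containerd is looking for, the sketch below writes a minimal bridge conflist; the network name, bridge name, and subnet are assumptions rather than values from this log, and the write needs root:

    import json, pathlib

    conf = {
        "cniVersion": "1.0.0",
        "name": "example-net",  # hypothetical network name
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "ranges": [[{"subnet": "10.88.0.0/16"}]],  # assumed pod subnet
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }
    pathlib.Path("/etc/cni/net.d/10-example.conflist").write_text(
        json.dumps(conf, indent=2)
    )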
Jan 20 02:26:56.626649 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 36074 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:26:56.648122 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:26:56.844004 systemd-logind[1567]: New session 3 of user core. Jan 20 02:26:57.021143 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 02:26:57.769033 sshd[1717]: Connection closed by 10.0.0.1 port 36074 Jan 20 02:26:57.986122 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Jan 20 02:26:58.409726 kernel: kvm_amd: TSC scaling supported Jan 20 02:26:58.409831 kernel: kvm_amd: Nested Virtualization enabled Jan 20 02:26:58.409860 kernel: kvm_amd: Nested Paging enabled Jan 20 02:26:58.409881 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 20 02:26:58.433869 kernel: kvm_amd: PMU virtualization is disabled Jan 20 02:26:58.794909 systemd[1]: sshd@2-10.0.0.101:22-10.0.0.1:36074.service: Deactivated successfully. Jan 20 02:26:59.068243 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 02:26:59.192206 systemd-logind[1567]: Session 3 logged out. Waiting for processes to exit. Jan 20 02:26:59.225010 systemd-logind[1567]: Removed session 3. Jan 20 02:27:08.220734 systemd[1]: Started sshd@3-10.0.0.101:22-10.0.0.1:56248.service - OpenSSH per-connection server daemon (10.0.0.1:56248). Jan 20 02:27:10.801355 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 56248 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:27:10.827323 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:27:11.053110 kernel: EDAC MC: Ver: 3.0.0 Jan 20 02:27:11.415377 systemd-logind[1567]: New session 4 of user core. Jan 20 02:27:11.467134 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 02:27:12.144013 sshd[1729]: Connection closed by 10.0.0.1 port 56248 Jan 20 02:27:12.178074 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Jan 20 02:27:12.245017 systemd[1]: sshd@3-10.0.0.101:22-10.0.0.1:56248.service: Deactivated successfully. Jan 20 02:27:12.319579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:27:12.323142 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 02:27:12.361924 systemd-logind[1567]: Session 4 logged out. Waiting for processes to exit. Jan 20 02:27:12.398848 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 02:27:12.422351 systemd[1]: Started sshd@4-10.0.0.101:22-10.0.0.1:56280.service - OpenSSH per-connection server daemon (10.0.0.1:56280). Jan 20 02:27:12.432060 systemd[1]: Startup finished in 59.020s (kernel) + 1min 17.021s (initrd) + 1min 52.795s (userspace) = 4min 8.837s. Jan 20 02:27:12.467802 systemd-logind[1567]: Removed session 4. Jan 20 02:27:12.471940 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:27:13.583824 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 56280 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:27:13.594557 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:27:13.791273 systemd-logind[1567]: New session 5 of user core. Jan 20 02:27:13.918293 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 20 02:27:14.884117 sshd[1742]: Connection closed by 10.0.0.1 port 56280 Jan 20 02:27:14.902930 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Jan 20 02:27:14.994984 systemd[1]: sshd@4-10.0.0.101:22-10.0.0.1:56280.service: Deactivated successfully. Jan 20 02:27:15.023779 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 02:27:15.073900 systemd-logind[1567]: Session 5 logged out. Waiting for processes to exit. Jan 20 02:27:15.115654 systemd-logind[1567]: Removed session 5. Jan 20 02:27:22.620659 update_engine[1574]: I20260120 02:27:22.604967 1574 update_attempter.cc:509] Updating boot flags... Jan 20 02:27:23.554201 kubelet[1735]: E0120 02:27:23.554055 1735 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:27:23.693941 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:27:23.694567 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:27:23.712589 systemd[1]: kubelet.service: Consumed 7.463s CPU time, 267.2M memory peak. Jan 20 02:27:24.987057 systemd[1]: Started sshd@5-10.0.0.101:22-10.0.0.1:38376.service - OpenSSH per-connection server daemon (10.0.0.1:38376). Jan 20 02:27:25.391123 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 38376 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:27:25.397066 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:27:25.455883 systemd-logind[1567]: New session 6 of user core. Jan 20 02:27:25.493654 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 02:27:25.858957 sshd[1775]: Connection closed by 10.0.0.1 port 38376 Jan 20 02:27:25.860340 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Jan 20 02:27:25.918943 systemd[1]: sshd@5-10.0.0.101:22-10.0.0.1:38376.service: Deactivated successfully. Jan 20 02:27:26.025953 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 02:27:26.074110 systemd-logind[1567]: Session 6 logged out. Waiting for processes to exit. Jan 20 02:27:26.110681 systemd[1]: Started sshd@6-10.0.0.101:22-10.0.0.1:38388.service - OpenSSH per-connection server daemon (10.0.0.1:38388). Jan 20 02:27:26.181158 systemd-logind[1567]: Removed session 6. Jan 20 02:27:27.161820 sshd[1781]: Accepted publickey for core from 10.0.0.1 port 38388 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:27:27.209236 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:27:27.490559 systemd-logind[1567]: New session 7 of user core. Jan 20 02:27:27.625117 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 02:27:28.266091 sshd[1784]: Connection closed by 10.0.0.1 port 38388 Jan 20 02:27:28.276803 sshd-session[1781]: pam_unix(sshd:session): session closed for user core Jan 20 02:27:28.389180 systemd[1]: Started sshd@7-10.0.0.101:22-10.0.0.1:38396.service - OpenSSH per-connection server daemon (10.0.0.1:38396). Jan 20 02:27:28.402930 systemd[1]: sshd@6-10.0.0.101:22-10.0.0.1:38388.service: Deactivated successfully. Jan 20 02:27:28.442277 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 02:27:28.463722 systemd-logind[1567]: Session 7 logged out. 
Waiting for processes to exit. Jan 20 02:27:28.527113 systemd-logind[1567]: Removed session 7. Jan 20 02:27:29.522580 sshd[1787]: Accepted publickey for core from 10.0.0.1 port 38396 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:27:29.554725 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:27:29.655706 systemd-logind[1567]: New session 8 of user core. Jan 20 02:27:29.684951 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 02:27:30.003858 sshd[1793]: Connection closed by 10.0.0.1 port 38396 Jan 20 02:27:30.008769 sshd-session[1787]: pam_unix(sshd:session): session closed for user core Jan 20 02:27:30.054937 systemd[1]: sshd@7-10.0.0.101:22-10.0.0.1:38396.service: Deactivated successfully. Jan 20 02:27:30.071965 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 02:27:30.082030 systemd-logind[1567]: Session 8 logged out. Waiting for processes to exit. Jan 20 02:27:30.190318 systemd[1]: Started sshd@8-10.0.0.101:22-10.0.0.1:38412.service - OpenSSH per-connection server daemon (10.0.0.1:38412). Jan 20 02:27:30.215695 systemd-logind[1567]: Removed session 8. Jan 20 02:27:31.239841 sshd[1799]: Accepted publickey for core from 10.0.0.1 port 38412 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:27:31.263865 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:27:31.499790 systemd-logind[1567]: New session 9 of user core. Jan 20 02:27:31.542710 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 02:27:32.074062 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 02:27:32.115963 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 02:27:33.818214 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 02:27:33.867659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:27:41.715830 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:27:41.885858 (kubelet)[1830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:27:42.770859 kubelet[1830]: E0120 02:27:42.751855 1830 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:27:42.802764 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 02:27:42.824928 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:27:42.825283 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:27:42.903263 systemd[1]: kubelet.service: Consumed 1.960s CPU time, 111.1M memory peak. Jan 20 02:27:43.009798 (dockerd)[1839]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 02:27:52.993190 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 02:27:53.056162 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
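From here on the log settles into a pattern: kubelet starts, exits with "failed to load kubelet config file ... /var/lib/kubelet/config.yaml ... no such file or directory", and systemd's restart logic schedules another attempt with an increasing restart counter. That file is normally generated by kubeadm init or kubeadm join, so the loop simply means kubelet was enabled before the node was joined to a cluster. For illustration only, a skeleton of the file kubelet is looking for (the field values are assumptions, not recovered from this log; the write needs root):

    import pathlib

    # Minimal illustrative KubeletConfiguration; real deployments get this
    # file from kubeadm, with many more fields.
    minimal = "\n".join([
        "apiVersion: kubelet.config.k8s.io/v1beta1",
        "kind: KubeletConfiguration",
        "cgroupDriver: systemd",
        "",
    ])
    pathlib.Path("/var/lib/kubelet").mkdir(parents=True, exist_ok=True)
    pathlib.Path("/var/lib/kubelet/config.yaml").write_text(minimal)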
Jan 20 02:27:59.262959 dockerd[1839]: time="2026-01-20T02:27:59.252255265Z" level=info msg="Starting up" Jan 20 02:27:59.275230 dockerd[1839]: time="2026-01-20T02:27:59.272882419Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 02:27:59.860308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:27:59.930579 (kubelet)[1866]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:28:00.012359 dockerd[1839]: time="2026-01-20T02:28:00.011179473Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 02:28:00.690621 systemd[1]: var-lib-docker-metacopy\x2dcheck2735645665-merged.mount: Deactivated successfully. Jan 20 02:28:01.040664 dockerd[1839]: time="2026-01-20T02:28:01.040600963Z" level=info msg="Loading containers: start." Jan 20 02:28:01.802612 kernel: Initializing XFRM netlink socket Jan 20 02:28:02.723280 kubelet[1866]: E0120 02:28:02.723014 1866 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:28:02.806252 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:28:02.820200 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:28:02.858387 systemd[1]: kubelet.service: Consumed 3.016s CPU time, 109.2M memory peak. Jan 20 02:28:13.097267 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 02:28:13.138504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:28:13.369953 systemd-networkd[1509]: docker0: Link UP Jan 20 02:28:13.545334 dockerd[1839]: time="2026-01-20T02:28:13.544838652Z" level=info msg="Loading containers: done." Jan 20 02:28:14.127010 dockerd[1839]: time="2026-01-20T02:28:14.122726818Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 02:28:14.127010 dockerd[1839]: time="2026-01-20T02:28:14.132964248Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 02:28:14.127010 dockerd[1839]: time="2026-01-20T02:28:14.133194698Z" level=info msg="Initializing buildkit" Jan 20 02:28:15.133525 dockerd[1839]: time="2026-01-20T02:28:15.127166465Z" level=info msg="Completed buildkit initialization" Jan 20 02:28:15.512920 dockerd[1839]: time="2026-01-20T02:28:15.502934785Z" level=info msg="Daemon has completed initialization" Jan 20 02:28:15.512920 dockerd[1839]: time="2026-01-20T02:28:15.509034510Z" level=info msg="API listen on /run/docker.sock" Jan 20 02:28:15.606186 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 02:28:16.249604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
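Once dockerd logs "API listen on /run/docker.sock" above, the engine answers plain HTTP over that unix socket. A dependency-free liveness probe against the documented /_ping endpoint (socket path taken from the log line; the caller needs permission to open the socket):

    import socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/run/docker.sock")
    s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
    print(s.recv(4096).decode(errors="replace"))  # expect a 200 response, body "OK"
    s.close()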
Jan 20 02:28:16.367641 (kubelet)[2075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:28:17.792037 kubelet[2075]: E0120 02:28:17.791526 2075 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:28:17.817915 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:28:17.818197 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:28:17.830060 systemd[1]: kubelet.service: Consumed 1.171s CPU time, 110.8M memory peak. Jan 20 02:28:26.488230 containerd[1593]: time="2026-01-20T02:28:26.485252201Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 20 02:28:28.085374 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 20 02:28:28.135769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:28:30.670630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:28:30.788342 (kubelet)[2101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:28:31.228282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3702211791.mount: Deactivated successfully. Jan 20 02:28:31.622926 kubelet[2101]: E0120 02:28:31.622817 2101 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:28:31.639841 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:28:31.648608 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:28:31.650954 systemd[1]: kubelet.service: Consumed 812ms CPU time, 108.8M memory peak. Jan 20 02:28:41.781649 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 20 02:28:41.846829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:28:44.232890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:28:44.384128 (kubelet)[2173]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:28:47.444671 kubelet[2173]: E0120 02:28:47.444009 2173 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:28:47.472765 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:28:47.473018 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:28:47.474558 systemd[1]: kubelet.service: Consumed 1.212s CPU time, 110.8M memory peak. 
Jan 20 02:28:57.166737 containerd[1593]: time="2026-01-20T02:28:57.165277879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:28:57.182696 containerd[1593]: time="2026-01-20T02:28:57.178190902Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 20 02:28:57.191008 containerd[1593]: time="2026-01-20T02:28:57.190938648Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:28:57.278578 containerd[1593]: time="2026-01-20T02:28:57.273243989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:28:57.291712 containerd[1593]: time="2026-01-20T02:28:57.289799475Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 30.803752041s" Jan 20 02:28:57.291712 containerd[1593]: time="2026-01-20T02:28:57.289926051Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 20 02:28:57.371561 containerd[1593]: time="2026-01-20T02:28:57.367682676Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 20 02:28:57.644310 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 20 02:28:57.882938 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:29:02.836367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:29:03.434318 (kubelet)[2190]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:29:06.719166 kubelet[2190]: E0120 02:29:06.714638 2190 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:29:06.830854 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:29:06.831185 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:29:06.841170 systemd[1]: kubelet.service: Consumed 1.772s CPU time, 110.7M memory peak. Jan 20 02:29:17.193217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 20 02:29:17.222303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:29:23.614885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
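The pull that completes above took just over 30 seconds for the kube-apiserver image; dividing the bytes containerd reports reading by the wall time gives the effective transfer rate, a useful sanity check when pulls look slow (both numbers copied from the log lines above):

    bytes_read = 29_070_647      # "active requests=0, bytes read=29070647"
    seconds = 30.803752041       # "in 30.803752041s"
    print(f"{bytes_read / seconds / 1e6:.2f} MB/s")  # ~0.94 MB/s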
Jan 20 02:29:23.717594 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:29:24.971901 kubelet[2210]: E0120 02:29:24.971668 2210 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:29:25.004649 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:29:25.009585 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:29:25.019084 systemd[1]: kubelet.service: Consumed 1.556s CPU time, 110.3M memory peak. Jan 20 02:29:32.401063 containerd[1593]: time="2026-01-20T02:29:32.397225638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:29:32.426205 containerd[1593]: time="2026-01-20T02:29:32.426144689Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 20 02:29:32.456280 containerd[1593]: time="2026-01-20T02:29:32.456171002Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:29:32.524198 containerd[1593]: time="2026-01-20T02:29:32.506370683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:29:32.530623 containerd[1593]: time="2026-01-20T02:29:32.508845401Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 35.141007676s" Jan 20 02:29:32.530623 containerd[1593]: time="2026-01-20T02:29:32.529669983Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 20 02:29:32.567642 containerd[1593]: time="2026-01-20T02:29:32.553005519Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 20 02:29:35.060722 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 20 02:29:35.111304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:29:37.159741 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 02:29:37.258759 (kubelet)[2230]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:29:51.598078 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 2340025582 wd_nsec: 2340024527 Jan 20 02:29:53.470904 kubelet[2230]: E0120 02:29:53.469039 2230 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:29:53.480879 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:29:53.484928 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:29:53.511982 systemd[1]: kubelet.service: Consumed 2.876s CPU time, 110.6M memory peak. Jan 20 02:30:03.916071 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 20 02:30:03.975079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:30:13.680241 containerd[1593]: time="2026-01-20T02:30:13.675090621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:30:13.737006 containerd[1593]: time="2026-01-20T02:30:13.731748659Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 20 02:30:14.118675 containerd[1593]: time="2026-01-20T02:30:14.117827430Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:30:14.230224 containerd[1593]: time="2026-01-20T02:30:14.230119027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:30:14.269517 containerd[1593]: time="2026-01-20T02:30:14.244944223Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 41.691805595s" Jan 20 02:30:14.269517 containerd[1593]: time="2026-01-20T02:30:14.245116735Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 20 02:30:14.386056 containerd[1593]: time="2026-01-20T02:30:14.371755313Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 20 02:30:15.731182 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
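Interleaved with the crash loop, containerd is still fetching control-plane images, each pull taking 30-65 s here. The tags match the Kubernetes v1.32 defaults, so on slow links the whole set can be fetched ahead of bootstrap instead of lazily (version pinned to what this log pulls):

    # Pre-pull the control-plane images kubeadm would otherwise pull during init.
    kubeadm config images pull --kubernetes-version v1.32.11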
Jan 20 02:30:15.907009 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:30:21.684845 kubelet[2247]: E0120 02:30:21.684141 2247 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:30:21.726331 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:30:21.735755 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:30:21.752011 systemd[1]: kubelet.service: Consumed 5.491s CPU time, 110.7M memory peak. Jan 20 02:30:31.794945 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 20 02:30:31.864022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:30:36.698558 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:30:36.821300 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:30:37.027163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1036984807.mount: Deactivated successfully. Jan 20 02:30:39.442149 kubelet[2268]: E0120 02:30:39.439059 2268 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:30:39.525298 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:30:39.525939 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:30:39.562042 systemd[1]: kubelet.service: Consumed 1.751s CPU time, 108.3M memory peak. Jan 20 02:30:49.613201 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 20 02:30:49.648936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:30:55.042332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:30:55.346187 (kubelet)[2292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:30:57.733731 kubelet[2292]: E0120 02:30:57.731345 2292 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:30:57.787543 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:30:57.787980 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:30:57.803386 systemd[1]: kubelet.service: Consumed 1.720s CPU time, 110.6M memory peak. 
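Each "Pulled image" entry records both the repo tag and the repo digest the pull resolved to; what containerd now holds can be cross-checked against those logged digests at any point:

    # List pulled images together with their resolved repo digests.
    crictl images --digests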
Jan 20 02:30:59.003270 containerd[1593]: time="2026-01-20T02:30:58.994573394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:30:59.012693 containerd[1593]: time="2026-01-20T02:30:59.012622115Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 20 02:30:59.094692 containerd[1593]: time="2026-01-20T02:30:59.083721035Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:30:59.160779 containerd[1593]: time="2026-01-20T02:30:59.144138241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:30:59.191268 containerd[1593]: time="2026-01-20T02:30:59.170561794Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 44.798734987s" Jan 20 02:30:59.191268 containerd[1593]: time="2026-01-20T02:30:59.170626565Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 20 02:30:59.332204 containerd[1593]: time="2026-01-20T02:30:59.329015592Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 20 02:31:02.527182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3592245737.mount: Deactivated successfully. Jan 20 02:31:08.084607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 20 02:31:08.148798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:31:09.033624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:31:09.198214 (kubelet)[2320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:31:12.983654 kubelet[2320]: E0120 02:31:12.982876 2320 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:31:12.998197 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:31:12.998734 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:31:13.010314 systemd[1]: kubelet.service: Consumed 1.784s CPU time, 110.5M memory peak. Jan 20 02:31:23.084032 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 20 02:31:23.311371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:31:29.668355 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
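The var-lib-containerd-tmpmounts-containerd\x2dmountNNN.mount units that keep reporting "Deactivated successfully" are transient mounts containerd creates while unpacking image layers; the \x2d is systemd's escaping of "-" inside a mount unit name, which systemd-escape reproduces (the numeric suffix below is taken from the log and is otherwise arbitrary):

    # Reproduce the escaped unit name for one of the tmpmount paths above.
    systemd-escape --path --suffix=mount /var/lib/containerd/tmpmounts/containerd-mount1036984807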
Jan 20 02:31:30.133325 (kubelet)[2377]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:31:33.817842 kubelet[2377]: E0120 02:31:33.812628 2377 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:31:33.842748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:31:33.864836 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:31:33.875189 systemd[1]: kubelet.service: Consumed 2.023s CPU time, 112.2M memory peak. Jan 20 02:31:34.947682 containerd[1593]: time="2026-01-20T02:31:34.947059760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:31:35.022379 containerd[1593]: time="2026-01-20T02:31:35.022114681Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 20 02:31:35.075581 containerd[1593]: time="2026-01-20T02:31:35.059579701Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:31:35.088143 containerd[1593]: time="2026-01-20T02:31:35.077113955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:31:35.088143 containerd[1593]: time="2026-01-20T02:31:35.086946315Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 35.757787776s" Jan 20 02:31:35.088143 containerd[1593]: time="2026-01-20T02:31:35.086997571Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 20 02:31:35.108152 containerd[1593]: time="2026-01-20T02:31:35.106720168Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 02:31:37.695699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3578345741.mount: Deactivated successfully. 
Jan 20 02:31:38.100774 containerd[1593]: time="2026-01-20T02:31:38.100388382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 02:31:38.123255 containerd[1593]: time="2026-01-20T02:31:38.120660165Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 20 02:31:38.174951 containerd[1593]: time="2026-01-20T02:31:38.164169034Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 02:31:38.256590 containerd[1593]: time="2026-01-20T02:31:38.246676804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 02:31:38.279268 containerd[1593]: time="2026-01-20T02:31:38.276905419Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 3.170122726s" Jan 20 02:31:38.279268 containerd[1593]: time="2026-01-20T02:31:38.276971483Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 02:31:38.342210 containerd[1593]: time="2026-01-20T02:31:38.342156706Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 20 02:31:41.537132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1788353419.mount: Deactivated successfully. Jan 20 02:31:44.258155 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Jan 20 02:31:44.481120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:31:49.488903 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:31:49.712310 (kubelet)[2413]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:31:52.849922 kubelet[2413]: E0120 02:31:52.823256 2413 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:31:52.883247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:31:52.898663 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:31:52.902998 systemd[1]: kubelet.service: Consumed 1.724s CPU time, 112.4M memory peak. Jan 20 02:32:03.142372 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15. Jan 20 02:32:03.276303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:32:08.494771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
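pause:3.10 is the only image pulled with an io.cri-containerd.pinned label: containerd pins its configured sandbox image so image garbage collection never reclaims it. Which image gets pinned comes from containerd's CRI configuration; a hedged way to check it on this node (the exact key name differs between containerd 1.x and 2.x configs, so grep rather than a fixed TOML path):

    # Dump the merged containerd config and locate the sandbox image setting.
    containerd config dump | grep -i sandbox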
Jan 20 02:32:08.575267 (kubelet)[2472]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:32:10.614846 kubelet[2472]: E0120 02:32:10.608722 2472 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:32:10.632957 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:32:10.633287 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:32:10.634109 systemd[1]: kubelet.service: Consumed 1.234s CPU time, 110.2M memory peak. Jan 20 02:32:20.772646 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16. Jan 20 02:32:20.817506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:32:23.852020 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:32:23.929316 (kubelet)[2493]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:32:27.894546 kubelet[2493]: E0120 02:32:27.893259 2493 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:32:27.915118 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:32:27.917394 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:32:27.942171 systemd[1]: kubelet.service: Consumed 1.267s CPU time, 110.6M memory peak. Jan 20 02:32:42.597565 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17. Jan 20 02:32:42.949106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 02:32:43.645770 containerd[1593]: time="2026-01-20T02:32:43.645625759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:32:43.713769 containerd[1593]: time="2026-01-20T02:32:43.708724097Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 20 02:32:43.726157 containerd[1593]: time="2026-01-20T02:32:43.722644109Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:32:43.833214 containerd[1593]: time="2026-01-20T02:32:43.829859719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:32:43.889082 containerd[1593]: time="2026-01-20T02:32:43.888380627Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1m5.545709451s" Jan 20 02:32:43.889082 containerd[1593]: time="2026-01-20T02:32:43.888618090Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 20 02:32:47.074390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:32:47.172361 (kubelet)[2520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:32:48.104750 kubelet[2520]: E0120 02:32:48.104325 2520 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:32:48.118170 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:32:48.119347 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:32:48.120332 systemd[1]: kubelet.service: Consumed 1.000s CPU time, 109.9M memory peak. Jan 20 02:32:58.308739 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18. Jan 20 02:32:58.388864 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:32:59.809342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 02:33:00.086201 (kubelet)[2546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:33:00.887977 kubelet[2546]: E0120 02:33:00.885717 2546 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:33:00.913667 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:33:00.914137 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:33:00.928776 systemd[1]: kubelet.service: Consumed 710ms CPU time, 110.7M memory peak. Jan 20 02:33:07.473881 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:33:07.481653 systemd[1]: kubelet.service: Consumed 710ms CPU time, 110.7M memory peak. Jan 20 02:33:07.511315 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:33:07.813649 systemd[1]: Reload requested from client PID 2563 ('systemctl') (unit session-9.scope)... Jan 20 02:33:07.813752 systemd[1]: Reloading... Jan 20 02:33:08.758526 zram_generator::config[2602]: No configuration found. Jan 20 02:33:10.021860 systemd[1]: Reloading finished in 2195 ms. Jan 20 02:33:10.464864 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 02:33:10.465072 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 02:33:10.465797 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:33:10.465873 systemd[1]: kubelet.service: Consumed 379ms CPU time, 98.5M memory peak. Jan 20 02:33:10.499996 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:33:12.042627 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:33:12.181876 (kubelet)[2654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 02:33:13.149390 kubelet[2654]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 02:33:13.149390 kubelet[2654]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 02:33:13.149390 kubelet[2654]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
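This restart is different: a reload request arrives from systemctl over a login session, and the environment warning now names only KUBELET_EXTRA_ARGS. That means /var/lib/kubelet/kubeadm-flags.env finally exists, and the three deprecated-flag notices that follow line up with what kubeadm typically writes there. A reconstruction from exactly the flags warned about above (illustrative, not read from the node; the containerd socket path is the containerd default, assumed here):

    cat /var/lib/kubelet/kubeadm-flags.env
    # KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10 --volume-plugin-dir=/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"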
Jan 20 02:33:13.155342 kubelet[2654]: I0120 02:33:13.150844 2654 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 02:33:14.285595 kubelet[2654]: I0120 02:33:14.284784 2654 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 02:33:14.285595 kubelet[2654]: I0120 02:33:14.284857 2654 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 02:33:14.292374 kubelet[2654]: I0120 02:33:14.286904 2654 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 02:33:14.599839 kubelet[2654]: I0120 02:33:14.590083 2654 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 02:33:14.601976 kubelet[2654]: E0120 02:33:14.601174 2654 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:33:14.785633 kubelet[2654]: I0120 02:33:14.781932 2654 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 02:33:14.936360 kubelet[2654]: I0120 02:33:14.933968 2654 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 02:33:14.945954 kubelet[2654]: I0120 02:33:14.941923 2654 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 02:33:14.950017 kubelet[2654]: I0120 02:33:14.944717 2654 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 02:33:14.950017 kubelet[2654]: I0120 02:33:14.949096 2654 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 02:33:14.954142 
kubelet[2654]: I0120 02:33:14.952011 2654 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 02:33:14.954142 kubelet[2654]: I0120 02:33:14.952709 2654 state_mem.go:36] "Initialized new in-memory state store" Jan 20 02:33:14.976142 kubelet[2654]: I0120 02:33:14.974001 2654 kubelet.go:446] "Attempting to sync node with API server" Jan 20 02:33:14.979631 kubelet[2654]: I0120 02:33:14.977025 2654 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 02:33:14.979631 kubelet[2654]: I0120 02:33:14.977169 2654 kubelet.go:352] "Adding apiserver pod source" Jan 20 02:33:14.979631 kubelet[2654]: I0120 02:33:14.977552 2654 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 02:33:15.021722 kubelet[2654]: W0120 02:33:15.000016 2654 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 20 02:33:15.021722 kubelet[2654]: E0120 02:33:15.018780 2654 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:33:15.021722 kubelet[2654]: W0120 02:33:15.011756 2654 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 20 02:33:15.021722 kubelet[2654]: E0120 02:33:15.018855 2654 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:33:15.021722 kubelet[2654]: I0120 02:33:15.020557 2654 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 20 02:33:15.029769 kubelet[2654]: I0120 02:33:15.029734 2654 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 02:33:15.041130 kubelet[2654]: W0120 02:33:15.037344 2654 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
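From here the kubelet is genuinely up, and every reflector error targets https://10.0.0.101:6443 with "connection refused". On a control-plane node this is self-resolving: the API server those clients want is one of the static pods this same kubelet is about to start from /etc/kubernetes/manifests. A simple way to watch the handoff (address taken from the log; /healthz is served to anonymous clients under default RBAC):

    # Poll until the static-pod apiserver starts answering.
    until curl -ksf https://10.0.0.101:6443/healthz >/dev/null; do sleep 2; done; echo "apiserver up"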
Jan 20 02:33:15.076598 kubelet[2654]: I0120 02:33:15.076306 2654 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 02:33:15.078580 kubelet[2654]: I0120 02:33:15.078555 2654 server.go:1287] "Started kubelet" Jan 20 02:33:15.088621 kubelet[2654]: I0120 02:33:15.079780 2654 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 02:33:15.104883 kubelet[2654]: I0120 02:33:15.103128 2654 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 02:33:15.110279 kubelet[2654]: I0120 02:33:15.106119 2654 server.go:479] "Adding debug handlers to kubelet server" Jan 20 02:33:15.110279 kubelet[2654]: I0120 02:33:15.108652 2654 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 02:33:15.118795 kubelet[2654]: I0120 02:33:15.114778 2654 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 02:33:15.118795 kubelet[2654]: E0120 02:33:15.115088 2654 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:33:15.124569 kubelet[2654]: I0120 02:33:15.119830 2654 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 02:33:15.124569 kubelet[2654]: I0120 02:33:15.120332 2654 reconciler.go:26] "Reconciler: start to sync state" Jan 20 02:33:15.131976 kubelet[2654]: E0120 02:33:15.131936 2654 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="200ms" Jan 20 02:33:15.151630 kubelet[2654]: I0120 02:33:15.147005 2654 factory.go:221] Registration of the systemd container factory successfully Jan 20 02:33:15.151630 kubelet[2654]: I0120 02:33:15.147360 2654 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 02:33:15.154821 kubelet[2654]: W0120 02:33:15.154652 2654 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 20 02:33:15.154821 kubelet[2654]: E0120 02:33:15.154724 2654 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:33:15.173380 kubelet[2654]: I0120 02:33:15.157710 2654 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 02:33:15.173380 kubelet[2654]: I0120 02:33:15.160577 2654 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 02:33:15.173380 kubelet[2654]: I0120 02:33:15.161378 2654 factory.go:221] Registration of the containerd container factory successfully Jan 20 02:33:15.209012 kubelet[2654]: E0120 02:33:15.175043 2654 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.101:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.101:6443: connect: connection refused" 
event="&Event{ObjectMeta:{localhost.188c4fb59c196933 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:33:15.076348211 +0000 UTC m=+2.785695877,LastTimestamp:2026-01-20 02:33:15.076348211 +0000 UTC m=+2.785695877,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:33:15.223955 kubelet[2654]: E0120 02:33:15.223918 2654 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:33:15.236554 kubelet[2654]: E0120 02:33:15.235710 2654 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 02:33:15.330158 kubelet[2654]: E0120 02:33:15.330110 2654 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:33:15.340923 kubelet[2654]: E0120 02:33:15.340866 2654 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="400ms" Jan 20 02:33:15.392569 kubelet[2654]: I0120 02:33:15.390901 2654 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 02:33:15.392569 kubelet[2654]: I0120 02:33:15.390930 2654 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 02:33:15.392569 kubelet[2654]: I0120 02:33:15.390957 2654 state_mem.go:36] "Initialized new in-memory state store" Jan 20 02:33:15.431176 kubelet[2654]: I0120 02:33:15.429025 2654 policy_none.go:49] "None policy: Start" Jan 20 02:33:15.431176 kubelet[2654]: I0120 02:33:15.429150 2654 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 02:33:15.431176 kubelet[2654]: I0120 02:33:15.429291 2654 state_mem.go:35] "Initializing new in-memory state store" Jan 20 02:33:15.454718 kubelet[2654]: E0120 02:33:15.447732 2654 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:33:15.515719 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 02:33:15.552561 kubelet[2654]: E0120 02:33:15.551625 2654 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:33:15.658555 kubelet[2654]: E0120 02:33:15.657123 2654 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:33:15.683106 kubelet[2654]: I0120 02:33:15.678186 2654 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 02:33:15.698711 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 02:33:15.738616 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 02:33:15.741904 kubelet[2654]: I0120 02:33:15.740084 2654 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 20 02:33:15.741904 kubelet[2654]: I0120 02:33:15.740295 2654 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 02:33:15.741904 kubelet[2654]: I0120 02:33:15.740341 2654 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 02:33:15.741904 kubelet[2654]: I0120 02:33:15.740355 2654 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 02:33:15.741904 kubelet[2654]: E0120 02:33:15.740746 2654 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 02:33:15.755755 kubelet[2654]: W0120 02:33:15.754721 2654 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 20 02:33:15.755755 kubelet[2654]: E0120 02:33:15.755557 2654 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:33:15.763936 kubelet[2654]: E0120 02:33:15.756113 2654 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="800ms" Jan 20 02:33:15.763936 kubelet[2654]: E0120 02:33:15.763653 2654 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:33:15.810975 kubelet[2654]: I0120 02:33:15.810932 2654 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 02:33:15.811685 kubelet[2654]: I0120 02:33:15.811661 2654 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 02:33:15.811850 kubelet[2654]: I0120 02:33:15.811805 2654 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 02:33:15.831711 kubelet[2654]: I0120 02:33:15.821549 2654 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 02:33:15.835669 kubelet[2654]: E0120 02:33:15.835634 2654 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 02:33:15.846312 kubelet[2654]: E0120 02:33:15.846184 2654 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 02:33:15.946709 kubelet[2654]: I0120 02:33:15.940817 2654 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:33:15.946709 kubelet[2654]: E0120 02:33:15.941890 2654 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" Jan 20 02:33:15.971617 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. 
Jan 20 02:33:15.975335 kubelet[2654]: I0120 02:33:15.975297 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 20 02:33:15.975701 kubelet[2654]: I0120 02:33:15.975675 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d428743870a32f6b993b2ebfdd5780e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0d428743870a32f6b993b2ebfdd5780e\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:33:15.975815 kubelet[2654]: I0120 02:33:15.975797 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:33:15.975906 kubelet[2654]: I0120 02:33:15.975887 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:33:15.976610 kubelet[2654]: I0120 02:33:15.976590 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:33:15.977891 kubelet[2654]: I0120 02:33:15.977348 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d428743870a32f6b993b2ebfdd5780e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d428743870a32f6b993b2ebfdd5780e\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:33:15.977891 kubelet[2654]: I0120 02:33:15.977763 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d428743870a32f6b993b2ebfdd5780e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d428743870a32f6b993b2ebfdd5780e\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:33:15.977891 kubelet[2654]: I0120 02:33:15.977793 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:33:15.977891 kubelet[2654]: I0120 02:33:15.977822 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 20 02:33:16.020553 kubelet[2654]: E0120 02:33:16.019731 2654 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:33:16.069814 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. Jan 20 02:33:16.091309 kubelet[2654]: E0120 02:33:16.091181 2654 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:33:16.162950 kubelet[2654]: I0120 02:33:16.162914 2654 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:33:16.164398 kubelet[2654]: E0120 02:33:16.164357 2654 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" Jan 20 02:33:16.192073 systemd[1]: Created slice kubepods-burstable-pod0d428743870a32f6b993b2ebfdd5780e.slice - libcontainer container kubepods-burstable-pod0d428743870a32f6b993b2ebfdd5780e.slice. Jan 20 02:33:16.205639 kubelet[2654]: E0120 02:33:16.204036 2654 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:33:16.205639 kubelet[2654]: E0120 02:33:16.204881 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:33:16.209917 containerd[1593]: time="2026-01-20T02:33:16.206755863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0d428743870a32f6b993b2ebfdd5780e,Namespace:kube-system,Attempt:0,}" Jan 20 02:33:16.287821 kubelet[2654]: W0120 02:33:16.287744 2654 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 20 02:33:16.288059 kubelet[2654]: E0120 02:33:16.288031 2654 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:33:16.331782 kubelet[2654]: E0120 02:33:16.330582 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:33:16.339326 containerd[1593]: time="2026-01-20T02:33:16.336053629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 20 02:33:16.393106 kubelet[2654]: E0120 02:33:16.392751 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:33:16.410641 containerd[1593]: time="2026-01-20T02:33:16.402784198Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 20 02:33:16.493693 kubelet[2654]: W0120 02:33:16.493344 2654 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 20 02:33:16.493851 kubelet[2654]: E0120 02:33:16.493719 2654 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:33:16.602861 kubelet[2654]: E0120 02:33:16.599670 2654 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="1.6s" Jan 20 02:33:16.625057 kubelet[2654]: I0120 02:33:16.620140 2654 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:33:16.635857 kubelet[2654]: E0120 02:33:16.627827 2654 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" Jan 20 02:33:16.712606 kubelet[2654]: W0120 02:33:16.710781 2654 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 20 02:33:16.712606 kubelet[2654]: E0120 02:33:16.710952 2654 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:33:16.749685 kubelet[2654]: W0120 02:33:16.749123 2654 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 20 02:33:16.760707 kubelet[2654]: E0120 02:33:16.749205 2654 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:33:16.773551 kubelet[2654]: E0120 02:33:16.765829 2654 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:33:16.893824 containerd[1593]: time="2026-01-20T02:33:16.877810755Z" level=info msg="connecting to shim 
31f9829743516def5653a55952b9e4870211dedc3dfc46a12b7f862ef4b3a7a3" address="unix:///run/containerd/s/e1417e6043ae04f23791a54ec6e38e662fe67d1333ffc83e8c33b973b921b71c" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:33:16.965936 containerd[1593]: time="2026-01-20T02:33:16.965879175Z" level=info msg="connecting to shim 398b9e53cfe873139de21a8277a92d40db7ece83982388d14525ab4c22b1f3d1" address="unix:///run/containerd/s/98f14c115b348dcb074877b683a3cedb9d01dbbe6f5f5a9daeb8c0026a4ef212" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:33:17.193043 containerd[1593]: time="2026-01-20T02:33:17.181185704Z" level=info msg="connecting to shim a7502094893993327a43b85fe17eccf60d4c4cb5aaada5cade43e8ee2601f406" address="unix:///run/containerd/s/cf37a2070286bfbbfcc7c63fe29a7aa6bc535ad25ea209e8ed2853964177c2fc" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:33:17.459968 kubelet[2654]: I0120 02:33:17.443086 2654 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:33:17.459968 kubelet[2654]: E0120 02:33:17.443899 2654 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" Jan 20 02:33:17.652092 systemd[1]: Started cri-containerd-31f9829743516def5653a55952b9e4870211dedc3dfc46a12b7f862ef4b3a7a3.scope - libcontainer container 31f9829743516def5653a55952b9e4870211dedc3dfc46a12b7f862ef4b3a7a3. Jan 20 02:33:17.900905 systemd[1]: Started cri-containerd-a7502094893993327a43b85fe17eccf60d4c4cb5aaada5cade43e8ee2601f406.scope - libcontainer container a7502094893993327a43b85fe17eccf60d4c4cb5aaada5cade43e8ee2601f406. Jan 20 02:33:17.917036 systemd[1]: Started cri-containerd-398b9e53cfe873139de21a8277a92d40db7ece83982388d14525ab4c22b1f3d1.scope - libcontainer container 398b9e53cfe873139de21a8277a92d40db7ece83982388d14525ab4c22b1f3d1. 
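The three cri-containerd-<id>.scope units correspond one-to-one to the sandbox IDs returned by the RunPodSandbox calls (31f98297…, 398b9e53…, a7502094…). Once the sandboxes are up they can be listed and matched back to the static control-plane pods by name:

    # Map running sandboxes back to the static control-plane pods.
    crictl pods | grep -E 'kube-(apiserver|scheduler|controller-manager)'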
Jan 20 02:33:18.226963 kubelet[2654]: E0120 02:33:18.219919 2654 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="3.2s" Jan 20 02:33:18.251747 kubelet[2654]: W0120 02:33:18.251698 2654 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 20 02:33:18.252101 kubelet[2654]: E0120 02:33:18.252064 2654 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:33:18.629880 kubelet[2654]: W0120 02:33:18.629130 2654 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 20 02:33:18.629880 kubelet[2654]: E0120 02:33:18.629192 2654 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:33:19.071369 kubelet[2654]: I0120 02:33:19.066679 2654 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:33:19.094696 kubelet[2654]: E0120 02:33:19.079180 2654 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" Jan 20 02:33:19.345111 containerd[1593]: time="2026-01-20T02:33:19.338982845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0d428743870a32f6b993b2ebfdd5780e,Namespace:kube-system,Attempt:0,} returns sandbox id \"31f9829743516def5653a55952b9e4870211dedc3dfc46a12b7f862ef4b3a7a3\"" Jan 20 02:33:19.367510 kubelet[2654]: E0120 02:33:19.362886 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:33:19.428772 containerd[1593]: time="2026-01-20T02:33:19.394389809Z" level=info msg="CreateContainer within sandbox \"31f9829743516def5653a55952b9e4870211dedc3dfc46a12b7f862ef4b3a7a3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 02:33:19.537997 kubelet[2654]: W0120 02:33:19.537952 2654 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jan 20 02:33:19.550654 kubelet[2654]: E0120 02:33:19.538228 2654 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:33:19.562616 containerd[1593]: time="2026-01-20T02:33:19.552779186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"398b9e53cfe873139de21a8277a92d40db7ece83982388d14525ab4c22b1f3d1\"" Jan 20 02:33:19.562780 kubelet[2654]: E0120 02:33:19.558169 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:33:19.590771 containerd[1593]: time="2026-01-20T02:33:19.582350525Z" level=info msg="CreateContainer within sandbox \"398b9e53cfe873139de21a8277a92d40db7ece83982388d14525ab4c22b1f3d1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 02:33:19.670999 kubelet[2654]: E0120 02:33:19.659130 2654 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.101:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.101:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4fb59c196933 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:33:15.076348211 +0000 UTC m=+2.785695877,LastTimestamp:2026-01-20 02:33:15.076348211 +0000 UTC m=+2.785695877,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:33:19.696202 containerd[1593]: time="2026-01-20T02:33:19.662677085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7502094893993327a43b85fe17eccf60d4c4cb5aaada5cade43e8ee2601f406\"" Jan 20 02:33:19.696202 containerd[1593]: time="2026-01-20T02:33:19.685093490Z" level=info msg="Container 51a667e87d2105db138e73e9e3d07249b6f5564f913ea2ba16495df185476c3c: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:33:19.656072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1977385280.mount: Deactivated successfully. 
Jan 20 02:33:19.704636 kubelet[2654]: E0120 02:33:19.673369 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:33:19.720888 containerd[1593]: time="2026-01-20T02:33:19.714978674Z" level=info msg="CreateContainer within sandbox \"a7502094893993327a43b85fe17eccf60d4c4cb5aaada5cade43e8ee2601f406\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 20 02:33:19.734171 kubelet[2654]: W0120 02:33:19.734117 2654 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused
Jan 20 02:33:19.739902 kubelet[2654]: E0120 02:33:19.739861 2654 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError"
Jan 20 02:33:19.864079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount266912462.mount: Deactivated successfully.
Jan 20 02:33:19.876716 containerd[1593]: time="2026-01-20T02:33:19.869995977Z" level=info msg="Container 8bab15c3510056f383bed5e179cae6831c9037f1ab171f52061572cdd4a567d3: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:33:19.946026 containerd[1593]: time="2026-01-20T02:33:19.939986679Z" level=info msg="CreateContainer within sandbox \"31f9829743516def5653a55952b9e4870211dedc3dfc46a12b7f862ef4b3a7a3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"51a667e87d2105db138e73e9e3d07249b6f5564f913ea2ba16495df185476c3c\""
Jan 20 02:33:19.961644 containerd[1593]: time="2026-01-20T02:33:19.958193157Z" level=info msg="Container eb9982a01a2df58d7513b9bbb5203836b6138ac629d68d9af8997330ff2e5ec8: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:33:19.961644 containerd[1593]: time="2026-01-20T02:33:19.961200180Z" level=info msg="StartContainer for \"51a667e87d2105db138e73e9e3d07249b6f5564f913ea2ba16495df185476c3c\""
Jan 20 02:33:19.978213 containerd[1593]: time="2026-01-20T02:33:19.978042293Z" level=info msg="connecting to shim 51a667e87d2105db138e73e9e3d07249b6f5564f913ea2ba16495df185476c3c" address="unix:///run/containerd/s/e1417e6043ae04f23791a54ec6e38e662fe67d1333ffc83e8c33b973b921b71c" protocol=ttrpc version=3
Jan 20 02:33:19.990097 containerd[1593]: time="2026-01-20T02:33:19.988021264Z" level=info msg="CreateContainer within sandbox \"398b9e53cfe873139de21a8277a92d40db7ece83982388d14525ab4c22b1f3d1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8bab15c3510056f383bed5e179cae6831c9037f1ab171f52061572cdd4a567d3\""
Jan 20 02:33:20.005891 containerd[1593]: time="2026-01-20T02:33:20.003096418Z" level=info msg="StartContainer for \"8bab15c3510056f383bed5e179cae6831c9037f1ab171f52061572cdd4a567d3\""
Jan 20 02:33:20.040736 containerd[1593]: time="2026-01-20T02:33:20.035160569Z" level=info msg="connecting to shim 8bab15c3510056f383bed5e179cae6831c9037f1ab171f52061572cdd4a567d3" address="unix:///run/containerd/s/98f14c115b348dcb074877b683a3cedb9d01dbbe6f5f5a9daeb8c0026a4ef212" protocol=ttrpc version=3
Jan 20 02:33:20.366725 containerd[1593]: time="2026-01-20T02:33:20.364166011Z" level=info msg="CreateContainer within sandbox \"a7502094893993327a43b85fe17eccf60d4c4cb5aaada5cade43e8ee2601f406\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"eb9982a01a2df58d7513b9bbb5203836b6138ac629d68d9af8997330ff2e5ec8\""
Jan 20 02:33:20.393781 containerd[1593]: time="2026-01-20T02:33:20.388874794Z" level=info msg="StartContainer for \"eb9982a01a2df58d7513b9bbb5203836b6138ac629d68d9af8997330ff2e5ec8\""
Jan 20 02:33:20.442023 systemd[1]: Started cri-containerd-51a667e87d2105db138e73e9e3d07249b6f5564f913ea2ba16495df185476c3c.scope - libcontainer container 51a667e87d2105db138e73e9e3d07249b6f5564f913ea2ba16495df185476c3c.
Jan 20 02:33:20.509030 containerd[1593]: time="2026-01-20T02:33:20.506983497Z" level=info msg="connecting to shim eb9982a01a2df58d7513b9bbb5203836b6138ac629d68d9af8997330ff2e5ec8" address="unix:///run/containerd/s/cf37a2070286bfbbfcc7c63fe29a7aa6bc535ad25ea209e8ed2853964177c2fc" protocol=ttrpc version=3
Jan 20 02:33:20.547106 systemd[1]: Started cri-containerd-8bab15c3510056f383bed5e179cae6831c9037f1ab171f52061572cdd4a567d3.scope - libcontainer container 8bab15c3510056f383bed5e179cae6831c9037f1ab171f52061572cdd4a567d3.
Jan 20 02:33:21.122787 systemd[1]: Started cri-containerd-eb9982a01a2df58d7513b9bbb5203836b6138ac629d68d9af8997330ff2e5ec8.scope - libcontainer container eb9982a01a2df58d7513b9bbb5203836b6138ac629d68d9af8997330ff2e5ec8.
Jan 20 02:33:21.168679 kubelet[2654]: E0120 02:33:21.160892 2654 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError"
Jan 20 02:33:21.456609 kubelet[2654]: E0120 02:33:21.436735 2654 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="6.4s"
Jan 20 02:33:21.802678 kubelet[2654]: W0120 02:33:21.802103 2654 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused
Jan 20 02:33:21.802678 kubelet[2654]: E0120 02:33:21.802584 2654 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError"
Jan 20 02:33:22.110935 containerd[1593]: time="2026-01-20T02:33:22.110093869Z" level=info msg="StartContainer for \"51a667e87d2105db138e73e9e3d07249b6f5564f913ea2ba16495df185476c3c\" returns successfully"
Jan 20 02:33:22.424928 containerd[1593]: time="2026-01-20T02:33:22.408234389Z" level=info msg="StartContainer for \"8bab15c3510056f383bed5e179cae6831c9037f1ab171f52061572cdd4a567d3\" returns successfully"
Jan 20 02:33:22.460899 kubelet[2654]: I0120 02:33:22.460859 2654 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 20 02:33:22.477786 kubelet[2654]: E0120 02:33:22.477738 2654 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost"
Jan 20 02:33:22.608923 containerd[1593]: time="2026-01-20T02:33:22.608729286Z" level=info msg="StartContainer for \"eb9982a01a2df58d7513b9bbb5203836b6138ac629d68d9af8997330ff2e5ec8\" returns successfully"
Jan 20 02:33:23.032650 kubelet[2654]: E0120 02:33:23.028707 2654 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 02:33:23.032650 kubelet[2654]: E0120 02:33:23.031683 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:33:23.202663 kubelet[2654]: E0120 02:33:23.198197 2654 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 02:33:23.204677 kubelet[2654]: E0120 02:33:23.204068 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:33:23.291934 kubelet[2654]: E0120 02:33:23.288666 2654 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 02:33:23.291934 kubelet[2654]: E0120 02:33:23.288860 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:33:24.296922 kubelet[2654]: E0120 02:33:24.295393 2654 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 02:33:24.296922 kubelet[2654]: E0120 02:33:24.295965 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:33:24.301621 kubelet[2654]: E0120 02:33:24.299647 2654 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 02:33:24.301621 kubelet[2654]: E0120 02:33:24.299786 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:33:24.302657 kubelet[2654]: E0120 02:33:24.302634 2654 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 02:33:24.303139 kubelet[2654]: E0120 02:33:24.303114 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:33:25.297590 kubelet[2654]: E0120 02:33:25.293567 2654 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 02:33:25.297590 kubelet[2654]: E0120 02:33:25.293761 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:33:25.304686 kubelet[2654]: E0120 02:33:25.304169 2654 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 02:33:25.304686 kubelet[2654]: E0120 02:33:25.304600 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:33:25.867080 kubelet[2654]: E0120 02:33:25.867034 2654 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 20 02:33:28.971883 kubelet[2654]: I0120 02:33:28.971753 2654 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 20 02:33:29.018265 kubelet[2654]: E0120 02:33:29.018047 2654 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 02:33:29.041763 kubelet[2654]: E0120 02:33:29.041627 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:33:30.128286 kubelet[2654]: E0120 02:33:30.128159 2654 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 02:33:30.138067 kubelet[2654]: E0120 02:33:30.137870 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:33:32.657769 kubelet[2654]: E0120 02:33:32.652288 2654 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 02:33:32.658767 kubelet[2654]: E0120 02:33:32.658651 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:33:33.630848 kubelet[2654]: E0120 02:33:33.627142 2654 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 02:33:33.630848 kubelet[2654]: E0120 02:33:33.627332 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:33:33.771253 kubelet[2654]: W0120 02:33:33.769719 2654 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 20 02:33:33.771253 kubelet[2654]: E0120 02:33:33.771207 2654 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 20 02:33:34.547264 kubelet[2654]: W0120 02:33:34.543213 2654 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 20 02:33:34.547264 kubelet[2654]: E0120 02:33:34.543310 2654 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 20 02:33:35.236895 kubelet[2654]: W0120 02:33:35.236814 2654 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 20 02:33:35.242866 kubelet[2654]: E0120 02:33:35.242826 2654 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 20 02:33:35.912785 kubelet[2654]: E0120 02:33:35.906903 2654 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 20 02:33:35.973896 kubelet[2654]: E0120 02:33:35.973849 2654 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 20 02:33:36.074796 kubelet[2654]: E0120 02:33:36.074353 2654 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c4fb59c196933 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:33:15.076348211 +0000 UTC m=+2.785695877,LastTimestamp:2026-01-20 02:33:15.076348211 +0000 UTC m=+2.785695877,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 20 02:33:36.111302 kubelet[2654]: I0120 02:33:36.111149 2654 apiserver.go:52] "Watching apiserver"
Jan 20 02:33:36.158793 kubelet[2654]: I0120 02:33:36.149899 2654 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 20 02:33:36.158793 kubelet[2654]: E0120 02:33:36.149962 2654 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jan 20 02:33:36.224183 kubelet[2654]: I0120 02:33:36.219934 2654 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:33:36.229611 kubelet[2654]: I0120 02:33:36.225040 2654 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 20 02:33:36.330842 kubelet[2654]: E0120 02:33:36.330701 2654 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c4fb5a598b93f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:33:15.235686719 +0000 UTC m=+2.945034385,LastTimestamp:2026-01-20 02:33:15.235686719 +0000 UTC m=+2.945034385,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 20 02:33:36.558737 kubelet[2654]: E0120 02:33:36.531232 2654 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:33:36.558737 kubelet[2654]: I0120 02:33:36.531265 2654 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 20 02:33:36.559261 kubelet[2654]: E0120 02:33:36.559141 2654 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c4fb5ad98444c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:33:15.369874508 +0000 UTC m=+3.079222175,LastTimestamp:2026-01-20 02:33:15.369874508 +0000 UTC m=+3.079222175,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 20 02:33:36.628149 kubelet[2654]: E0120 02:33:36.619825 2654 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jan 20 02:33:36.628149 kubelet[2654]: I0120 02:33:36.619869 2654 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 20 02:33:36.663699 kubelet[2654]: E0120 02:33:36.663642 2654 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jan 20 02:33:39.467976 kubelet[2654]: I0120 02:33:39.465342 2654 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 20 02:33:39.633829 kubelet[2654]: E0120 02:33:39.629134 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:33:40.452022 kubelet[2654]: E0120 02:33:40.451269 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:33:43.274332 kubelet[2654]: E0120 02:33:43.269842 2654 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.336s"
Jan 20 02:33:46.601289 kubelet[2654]: I0120 02:33:46.594920 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.594769151 podStartE2EDuration="7.594769151s" podCreationTimestamp="2026-01-20 02:33:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:33:46.557268359 +0000 UTC m=+34.266616025" watchObservedRunningTime="2026-01-20 02:33:46.594769151 +0000 UTC m=+34.304116827"
Jan 20 02:34:06.049331 kubelet[2654]: E0120 02:34:06.020911 2654 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.786s"
Jan 20 02:34:07.071924 systemd[1]: cri-containerd-8bab15c3510056f383bed5e179cae6831c9037f1ab171f52061572cdd4a567d3.scope: Deactivated successfully.
Jan 20 02:34:07.073079 systemd[1]: cri-containerd-8bab15c3510056f383bed5e179cae6831c9037f1ab171f52061572cdd4a567d3.scope: Consumed 3.182s CPU time, 17.5M memory peak.
Jan 20 02:34:07.780321 containerd[1593]: time="2026-01-20T02:34:07.719951449Z" level=info msg="received container exit event container_id:\"8bab15c3510056f383bed5e179cae6831c9037f1ab171f52061572cdd4a567d3\" id:\"8bab15c3510056f383bed5e179cae6831c9037f1ab171f52061572cdd4a567d3\" pid:2865 exit_status:1 exited_at:{seconds:1768876447 nanos:598914169}"
Jan 20 02:34:08.603035 kubelet[2654]: E0120 02:34:08.602908 2654 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.542s"
Jan 20 02:34:13.374630 kubelet[2654]: E0120 02:34:13.357397 2654 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.551s"
Jan 20 02:34:14.372636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bab15c3510056f383bed5e179cae6831c9037f1ab171f52061572cdd4a567d3-rootfs.mount: Deactivated successfully.
Jan 20 02:34:18.019226 kubelet[2654]: E0120 02:34:18.014929 2654 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.994s"
Jan 20 02:34:19.045926 kubelet[2654]: E0120 02:34:19.033777 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:34:19.873717 kubelet[2654]: E0120 02:34:19.870564 2654 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.855s"
Jan 20 02:34:20.248011 kubelet[2654]: I0120 02:34:20.243311 2654 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:34:20.615692 kubelet[2654]: I0120 02:34:20.615217 2654 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 20 02:34:20.817552 kubelet[2654]: I0120 02:34:20.812643 2654 scope.go:117] "RemoveContainer" containerID="8bab15c3510056f383bed5e179cae6831c9037f1ab171f52061572cdd4a567d3"
Jan 20 02:34:20.817552 kubelet[2654]: E0120 02:34:20.815647 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:34:21.218145 kubelet[2654]: E0120 02:34:21.218062 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:34:21.300331 containerd[1593]: time="2026-01-20T02:34:21.300207352Z" level=info msg="CreateContainer within sandbox \"398b9e53cfe873139de21a8277a92d40db7ece83982388d14525ab4c22b1f3d1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 20 02:34:21.493333 kubelet[2654]: E0120 02:34:21.492098 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:34:22.062983 containerd[1593]: time="2026-01-20T02:34:22.001381470Z" level=info msg="Container 88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:34:22.486321 containerd[1593]: time="2026-01-20T02:34:22.458800837Z" level=info msg="CreateContainer within sandbox \"398b9e53cfe873139de21a8277a92d40db7ece83982388d14525ab4c22b1f3d1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3\""
Jan 20 02:34:22.508701 containerd[1593]: time="2026-01-20T02:34:22.499576794Z" level=info msg="StartContainer for \"88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3\""
Jan 20 02:34:22.646803 containerd[1593]: time="2026-01-20T02:34:22.635701571Z" level=info msg="connecting to shim 88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3" address="unix:///run/containerd/s/98f14c115b348dcb074877b683a3cedb9d01dbbe6f5f5a9daeb8c0026a4ef212" protocol=ttrpc version=3
Jan 20 02:34:24.033337 systemd[1]: Reload requested from client PID 2957 ('systemctl') (unit session-9.scope)...
Jan 20 02:34:24.033760 systemd[1]: Reloading...
Jan 20 02:34:26.175257 kubelet[2654]: I0120 02:34:26.172640 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=6.172617835 podStartE2EDuration="6.172617835s" podCreationTimestamp="2026-01-20 02:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:34:26.168384193 +0000 UTC m=+73.877731880" watchObservedRunningTime="2026-01-20 02:34:26.172617835 +0000 UTC m=+73.881965501"
Jan 20 02:34:27.006009 zram_generator::config[3012]: No configuration found.
Jan 20 02:34:32.925012 systemd[1]: Reloading finished in 8875 ms.
Jan 20 02:34:34.010575 systemd[1]: Started cri-containerd-88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3.scope - libcontainer container 88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3.
Jan 20 02:34:34.020143 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:34:34.221919 containerd[1593]: time="2026-01-20T02:34:34.208035637Z" level=error msg="get state for 88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3" error="context canceled"
Jan 20 02:34:34.221919 containerd[1593]: time="2026-01-20T02:34:34.208097590Z" level=warning msg="unknown status" status=0
Jan 20 02:34:34.304206 containerd[1593]: time="2026-01-20T02:34:34.294225182Z" level=error msg="collecting metrics for 88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3" error="context canceled"
Jan 20 02:34:34.310952 systemd[1]: kubelet.service: Deactivated successfully.
Jan 20 02:34:34.319186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:34:34.319297 systemd[1]: kubelet.service: Consumed 12.932s CPU time, 140.2M memory peak.
Jan 20 02:34:34.377969 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:34:35.479372 containerd[1593]: time="2026-01-20T02:34:35.458196369Z" level=error msg="ttrpc: received message on inactive stream" stream=1
Jan 20 02:34:35.479372 containerd[1593]: time="2026-01-20T02:34:35.473382363Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Jan 20 02:34:35.479372 containerd[1593]: time="2026-01-20T02:34:35.473795618Z" level=error msg="ttrpc: received message on inactive stream" stream=5
Jan 20 02:34:38.186213 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:34:38.404759 (kubelet)[3064]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 20 02:34:39.374150 containerd[1593]: time="2026-01-20T02:34:39.367170414Z" level=error msg="failed to drain init process 88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3 io" error="context deadline exceeded" runtime=io.containerd.runc.v2
Jan 20 02:34:39.378028 containerd[1593]: time="2026-01-20T02:34:39.376028633Z" level=warning msg="error copying stdout" runtime=io.containerd.runc.v2
Jan 20 02:34:39.378028 containerd[1593]: time="2026-01-20T02:34:39.376091999Z" level=warning msg="error copying stderr" runtime=io.containerd.runc.v2
Jan 20 02:34:39.460292 containerd[1593]: time="2026-01-20T02:34:39.445259468Z" level=warning msg="failed to cleanup rootfs mount" error="no such file or directory" runtime=io.containerd.runc.v2
Jan 20 02:34:39.463310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3-rootfs.mount: Deactivated successfully.
Jan 20 02:34:39.578169 containerd[1593]: time="2026-01-20T02:34:39.570134123Z" level=error msg="StartContainer for \"88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3\" failed" error="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: context canceled"
Jan 20 02:34:39.946137 kubelet[3064]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 02:34:39.946137 kubelet[3064]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 20 02:34:39.946137 kubelet[3064]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 02:34:39.963130 kubelet[3064]: I0120 02:34:39.949223 3064 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 20 02:34:40.105032 kubelet[3064]: I0120 02:34:40.104986 3064 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 20 02:34:40.105255 kubelet[3064]: I0120 02:34:40.105234 3064 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 20 02:34:40.114778 kubelet[3064]: I0120 02:34:40.114733 3064 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 20 02:34:40.187775 kubelet[3064]: I0120 02:34:40.184044 3064 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 20 02:34:40.348061 kubelet[3064]: I0120 02:34:40.348007 3064 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 20 02:34:40.572107 kubelet[3064]: I0120 02:34:40.565139 3064 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 20 02:34:41.085268 kubelet[3064]: I0120 02:34:41.084963 3064 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 20 02:34:41.106318 kubelet[3064]: I0120 02:34:41.099926 3064 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 20 02:34:41.106318 kubelet[3064]: I0120 02:34:41.100174 3064 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 20 02:34:41.106318 kubelet[3064]: I0120 02:34:41.104106 3064 topology_manager.go:138] "Creating topology manager with none policy"
Jan 20 02:34:41.106318 kubelet[3064]: I0120 02:34:41.104127 3064 container_manager_linux.go:304] "Creating device plugin manager"
Jan 20 02:34:41.114102 kubelet[3064]: I0120 02:34:41.106904 3064 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 02:34:41.121309 kubelet[3064]: I0120 02:34:41.118200 3064 kubelet.go:446] "Attempting to sync node with API server"
Jan 20 02:34:41.121309 kubelet[3064]: I0120 02:34:41.118265 3064 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 20 02:34:41.124040 kubelet[3064]: I0120 02:34:41.124012 3064 kubelet.go:352] "Adding apiserver pod source"
Jan 20 02:34:41.124150 kubelet[3064]: I0120 02:34:41.124133 3064 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 20 02:34:41.206299 kubelet[3064]: I0120 02:34:41.203204 3064 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 20 02:34:41.395757 kubelet[3064]: I0120 02:34:41.389981 3064 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 20 02:34:41.406017 kubelet[3064]: I0120 02:34:41.405984 3064 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 20 02:34:41.406321 kubelet[3064]: I0120 02:34:41.406301 3064 server.go:1287] "Started kubelet"
Jan 20 02:34:41.435764 kubelet[3064]: I0120 02:34:41.435699 3064 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 20 02:34:41.462067 kubelet[3064]: I0120 02:34:41.462028 3064 server.go:479] "Adding debug handlers to kubelet server"
Jan 20 02:34:41.480253 kubelet[3064]: I0120 02:34:41.466311 3064 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 20 02:34:41.494302 kubelet[3064]: I0120 02:34:41.494227 3064 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 20 02:34:41.607813 kubelet[3064]: I0120 02:34:41.607769 3064 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 20 02:34:41.692023 kubelet[3064]: I0120 02:34:41.502159 3064 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 20 02:34:41.726102 kubelet[3064]: I0120 02:34:41.724268 3064 factory.go:221] Registration of the systemd container factory successfully
Jan 20 02:34:41.742315 kubelet[3064]: I0120 02:34:41.742276 3064 reconciler.go:26] "Reconciler: start to sync state"
Jan 20 02:34:41.744104 kubelet[3064]: I0120 02:34:41.691304 3064 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 20 02:34:41.744216 kubelet[3064]: I0120 02:34:41.742973 3064 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 20 02:34:41.756077 kubelet[3064]: I0120 02:34:41.623313 3064 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 20 02:34:41.777100 kubelet[3064]: I0120 02:34:41.686625 3064 scope.go:117] "RemoveContainer" containerID="8bab15c3510056f383bed5e179cae6831c9037f1ab171f52061572cdd4a567d3"
Jan 20 02:34:42.031773 kubelet[3064]: I0120 02:34:42.031337 3064 factory.go:221] Registration of the containerd container factory successfully
Jan 20 02:34:42.092306 kubelet[3064]: E0120 02:34:42.086053 3064 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 20 02:34:42.113383 containerd[1593]: time="2026-01-20T02:34:42.113032566Z" level=info msg="RemoveContainer for \"8bab15c3510056f383bed5e179cae6831c9037f1ab171f52061572cdd4a567d3\""
Jan 20 02:34:42.162188 kubelet[3064]: I0120 02:34:42.162106 3064 apiserver.go:52] "Watching apiserver"
Jan 20 02:34:42.187860 containerd[1593]: time="2026-01-20T02:34:42.187314534Z" level=info msg="RemoveContainer for \"8bab15c3510056f383bed5e179cae6831c9037f1ab171f52061572cdd4a567d3\" returns successfully"
Jan 20 02:34:42.273628 kubelet[3064]: I0120 02:34:42.268809 3064 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 20 02:34:42.299177 kubelet[3064]: I0120 02:34:42.299061 3064 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 20 02:34:42.314763 kubelet[3064]: I0120 02:34:42.307753 3064 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 20 02:34:42.314763 kubelet[3064]: I0120 02:34:42.307886 3064 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 20 02:34:42.314763 kubelet[3064]: I0120 02:34:42.307903 3064 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 20 02:34:42.314763 kubelet[3064]: E0120 02:34:42.307990 3064 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 20 02:34:42.411095 kubelet[3064]: E0120 02:34:42.410825 3064 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 20 02:34:42.619375 kubelet[3064]: E0120 02:34:42.616822 3064 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 20 02:34:43.032691 kubelet[3064]: E0120 02:34:43.028989 3064 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 20 02:34:43.829880 kubelet[3064]: E0120 02:34:43.829826 3064 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 20 02:34:45.472254 kubelet[3064]: E0120 02:34:45.454059 3064 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 20 02:34:45.981910 kubelet[3064]: E0120 02:34:45.977743 3064 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice/cri-containerd-88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3.scope: task 88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3 not found: not found
Jan 20 02:34:48.674203 kubelet[3064]: E0120 02:34:48.673937 3064 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 20 02:34:51.326958 kubelet[3064]: E0120 02:34:51.315949 3064 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice/cri-containerd-88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3.scope: task 88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3 not found: not found
Jan 20 02:34:51.457656 kubelet[3064]: I0120 02:34:51.448826 3064 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 20 02:34:51.457656 kubelet[3064]: I0120 02:34:51.449024 3064 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 20 02:34:51.457656 kubelet[3064]: I0120 02:34:51.449073 3064 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 02:34:51.463141 kubelet[3064]: I0120 02:34:51.460094 3064 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 20 02:34:51.463141 kubelet[3064]: I0120 02:34:51.460117 3064 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 20 02:34:51.463141 kubelet[3064]: I0120 02:34:51.460326 3064 policy_none.go:49] "None policy: Start"
Jan 20 02:34:51.463141 kubelet[3064]: I0120 02:34:51.460345 3064 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 20 02:34:51.463141 kubelet[3064]: I0120 02:34:51.460367 3064 state_mem.go:35] "Initializing new in-memory state store"
Jan 20 02:34:51.463141 kubelet[3064]: I0120 02:34:51.460819 3064 state_mem.go:75] "Updated machine memory state"
Jan 20 02:34:51.615697 kubelet[3064]: I0120 02:34:51.612902 3064 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 20 02:34:51.615697 kubelet[3064]: I0120 02:34:51.613840 3064 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 20 02:34:51.615697 kubelet[3064]: I0120 02:34:51.613871 3064 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 20 02:34:51.633039 kubelet[3064]: I0120 02:34:51.631871 3064 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 20 02:34:51.795674 kubelet[3064]: E0120 02:34:51.795119 3064 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 20 02:34:51.982067 kubelet[3064]: I0120 02:34:51.981716 3064 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 20 02:34:52.492658 kubelet[3064]: I0120 02:34:52.487698 3064 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jan 20 02:34:52.502535 kubelet[3064]: I0120 02:34:52.497920 3064 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 20 02:34:53.707993 kubelet[3064]: I0120 02:34:53.705645 3064 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 20 02:34:53.756913 kubelet[3064]: I0120 02:34:53.749048 3064 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 20 02:34:53.807289 kubelet[3064]: I0120 02:34:53.798275 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:34:53.807289 kubelet[3064]: I0120 02:34:53.798559 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:34:53.807289 kubelet[3064]: I0120 02:34:53.798619 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:34:53.807289 kubelet[3064]: I0120 02:34:53.798644 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:34:53.807289 kubelet[3064]: I0120 02:34:53.798679 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d428743870a32f6b993b2ebfdd5780e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d428743870a32f6b993b2ebfdd5780e\") " pod="kube-system/kube-apiserver-localhost"
Jan 20 02:34:53.812949 kubelet[3064]: I0120 02:34:53.798702 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d428743870a32f6b993b2ebfdd5780e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0d428743870a32f6b993b2ebfdd5780e\") " pod="kube-system/kube-apiserver-localhost"
Jan 20 02:34:53.812949 kubelet[3064]: I0120 02:34:53.798727 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:34:53.812949 kubelet[3064]: I0120 02:34:53.798749 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost"
Jan 20 02:34:53.812949 kubelet[3064]: I0120 02:34:53.798770 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d428743870a32f6b993b2ebfdd5780e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d428743870a32f6b993b2ebfdd5780e\") " pod="kube-system/kube-apiserver-localhost"
Jan 20 02:34:54.117644 kubelet[3064]: I0120 02:34:54.026247 3064 scope.go:117] "RemoveContainer" containerID="88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3"
Jan 20 02:34:54.434706 kubelet[3064]: E0120 02:34:54.407385 3064 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 20 02:34:54.583807 containerd[1593]: time="2026-01-20T02:34:54.571233835Z" level=info msg="CreateContainer within sandbox \"398b9e53cfe873139de21a8277a92d40db7ece83982388d14525ab4c22b1f3d1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}"
Jan 20 02:34:55.525768 kubelet[3064]: W0120 02:34:55.511134 3064 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice/cri-containerd-88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3.scope WatchSource:0}: task 88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3 not found: not found
Jan 20 02:34:55.977811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1124060574.mount: Deactivated successfully.
Jan 20 02:34:56.776266 containerd[1593]: time="2026-01-20T02:34:56.745688630Z" level=info msg="Container fc9ad64bfc7d734c41f45bdbed34210b3a5a0891e7db16d5755e85f588d7cdf5: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:34:57.717656 containerd[1593]: time="2026-01-20T02:34:57.712698776Z" level=info msg="CreateContainer within sandbox \"398b9e53cfe873139de21a8277a92d40db7ece83982388d14525ab4c22b1f3d1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"fc9ad64bfc7d734c41f45bdbed34210b3a5a0891e7db16d5755e85f588d7cdf5\""
Jan 20 02:34:57.733692 containerd[1593]: time="2026-01-20T02:34:57.733266390Z" level=info msg="StartContainer for \"fc9ad64bfc7d734c41f45bdbed34210b3a5a0891e7db16d5755e85f588d7cdf5\""
Jan 20 02:34:57.897116 containerd[1593]: time="2026-01-20T02:34:57.896877710Z" level=info msg="connecting to shim fc9ad64bfc7d734c41f45bdbed34210b3a5a0891e7db16d5755e85f588d7cdf5" address="unix:///run/containerd/s/98f14c115b348dcb074877b683a3cedb9d01dbbe6f5f5a9daeb8c0026a4ef212" protocol=ttrpc version=3
Jan 20 02:34:59.340798 systemd[1]: Started cri-containerd-fc9ad64bfc7d734c41f45bdbed34210b3a5a0891e7db16d5755e85f588d7cdf5.scope - libcontainer container fc9ad64bfc7d734c41f45bdbed34210b3a5a0891e7db16d5755e85f588d7cdf5.
Jan 20 02:35:00.781736 containerd[1593]: time="2026-01-20T02:35:00.781652593Z" level=info msg="StartContainer for \"fc9ad64bfc7d734c41f45bdbed34210b3a5a0891e7db16d5755e85f588d7cdf5\" returns successfully"
Jan 20 02:35:07.878143 sudo[1803]: pam_unix(sudo:session): session closed for user root
Jan 20 02:35:07.997851 sshd[1802]: Connection closed by 10.0.0.1 port 38412
Jan 20 02:35:08.017639 sshd-session[1799]: pam_unix(sshd:session): session closed for user core
Jan 20 02:35:08.113191 systemd[1]: sshd@8-10.0.0.101:22-10.0.0.1:38412.service: Deactivated successfully.
Jan 20 02:35:08.222305 systemd[1]: session-9.scope: Deactivated successfully.
Jan 20 02:35:08.245064 systemd[1]: session-9.scope: Consumed 21.238s CPU time, 238.3M memory peak.
Jan 20 02:35:08.556343 systemd-logind[1567]: Session 9 logged out. Waiting for processes to exit.
Jan 20 02:35:08.603345 systemd-logind[1567]: Removed session 9.
Jan 20 02:35:37.686634 kubelet[3064]: I0120 02:35:37.685035 3064 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 20 02:35:37.781270 containerd[1593]: time="2026-01-20T02:35:37.775075917Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 20 02:35:37.791858 kubelet[3064]: I0120 02:35:37.786293 3064 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 20 02:35:40.151216 kubelet[3064]: I0120 02:35:40.150794 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/25743284-d3aa-46da-9c04-cda43fadd513-kube-proxy\") pod \"kube-proxy-75vsk\" (UID: \"25743284-d3aa-46da-9c04-cda43fadd513\") " pod="kube-system/kube-proxy-75vsk"
Jan 20 02:35:40.151216 kubelet[3064]: I0120 02:35:40.150954 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25743284-d3aa-46da-9c04-cda43fadd513-xtables-lock\") pod \"kube-proxy-75vsk\" (UID: \"25743284-d3aa-46da-9c04-cda43fadd513\") " pod="kube-system/kube-proxy-75vsk"
Jan 20 02:35:40.151216 kubelet[3064]: I0120 02:35:40.150981 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25743284-d3aa-46da-9c04-cda43fadd513-lib-modules\") pod \"kube-proxy-75vsk\" (UID: \"25743284-d3aa-46da-9c04-cda43fadd513\") " pod="kube-system/kube-proxy-75vsk"
Jan 20 02:35:40.151216 kubelet[3064]: I0120 02:35:40.151013 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lpsz\" (UniqueName: \"kubernetes.io/projected/25743284-d3aa-46da-9c04-cda43fadd513-kube-api-access-5lpsz\") pod \"kube-proxy-75vsk\" (UID: \"25743284-d3aa-46da-9c04-cda43fadd513\") " pod="kube-system/kube-proxy-75vsk"
Jan 20 02:35:40.382308 kubelet[3064]: I0120 02:35:40.285071 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9485f7be-92c0-4ab6-9307-e0e24e062643-run\") pod \"kube-flannel-ds-6rs97\" (UID: \"9485f7be-92c0-4ab6-9307-e0e24e062643\") " pod="kube-flannel/kube-flannel-ds-6rs97"
Jan 20 02:35:40.390707 systemd[1]: Created slice kubepods-besteffort-pod25743284_d3aa_46da_9c04_cda43fadd513.slice - libcontainer container kubepods-besteffort-pod25743284_d3aa_46da_9c04_cda43fadd513.slice.
Jan 20 02:35:40.420677 kubelet[3064]: I0120 02:35:40.420322 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/9485f7be-92c0-4ab6-9307-e0e24e062643-cni-plugin\") pod \"kube-flannel-ds-6rs97\" (UID: \"9485f7be-92c0-4ab6-9307-e0e24e062643\") " pod="kube-flannel/kube-flannel-ds-6rs97"
Jan 20 02:35:40.564048 kubelet[3064]: W0120 02:35:40.496078 3064 reflector.go:569] object-"kube-flannel"/"kube-flannel-cfg": failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'localhost' and this object
Jan 20 02:35:40.564653 kubelet[3064]: E0120 02:35:40.564609 3064 reflector.go:166] "Unhandled Error" err="object-\"kube-flannel\"/\"kube-flannel-cfg\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-flannel-cfg\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-flannel\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Jan 20 02:35:40.564784 kubelet[3064]: W0120 02:35:40.505709 3064 reflector.go:569] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'localhost' and this object
Jan 20 02:35:40.564902 kubelet[3064]: E0120 02:35:40.564876 3064 reflector.go:166] "Unhandled Error" err="object-\"kube-flannel\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-flannel\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Jan 20 02:35:40.578234 kubelet[3064]: I0120 02:35:40.570059 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/9485f7be-92c0-4ab6-9307-e0e24e062643-cni\") pod \"kube-flannel-ds-6rs97\" (UID: \"9485f7be-92c0-4ab6-9307-e0e24e062643\") " pod="kube-flannel/kube-flannel-ds-6rs97"
Jan 20 02:35:40.578234 kubelet[3064]: I0120 02:35:40.570243 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/9485f7be-92c0-4ab6-9307-e0e24e062643-flannel-cfg\") pod \"kube-flannel-ds-6rs97\" (UID: \"9485f7be-92c0-4ab6-9307-e0e24e062643\") " pod="kube-flannel/kube-flannel-ds-6rs97"
Jan 20 02:35:40.578234 kubelet[3064]: I0120 02:35:40.570277 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl2m5\" (UniqueName: \"kubernetes.io/projected/9485f7be-92c0-4ab6-9307-e0e24e062643-kube-api-access-kl2m5\") pod \"kube-flannel-ds-6rs97\" (UID: \"9485f7be-92c0-4ab6-9307-e0e24e062643\") " pod="kube-flannel/kube-flannel-ds-6rs97"
Jan 20 02:35:40.578234 kubelet[3064]: I0120 02:35:40.570352 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9485f7be-92c0-4ab6-9307-e0e24e062643-xtables-lock\") pod \"kube-flannel-ds-6rs97\" (UID: \"9485f7be-92c0-4ab6-9307-e0e24e062643\") " pod="kube-flannel/kube-flannel-ds-6rs97"
Jan 20 02:35:40.833689 systemd[1]: Created slice kubepods-burstable-pod9485f7be_92c0_4ab6_9307_e0e24e062643.slice - libcontainer container kubepods-burstable-pod9485f7be_92c0_4ab6_9307_e0e24e062643.slice.
Jan 20 02:35:41.341045 containerd[1593]: time="2026-01-20T02:35:41.333388416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-75vsk,Uid:25743284-d3aa-46da-9c04-cda43fadd513,Namespace:kube-system,Attempt:0,}"
Jan 20 02:35:41.681808 containerd[1593]: time="2026-01-20T02:35:41.680327348Z" level=info msg="connecting to shim a3d31a0acaa6242dc599dbfc702beaf61c31b75e7407aa8cb99008bb1ec062de" address="unix:///run/containerd/s/abed0357823b0ea74d2d8696cd918bc97a68d8ecd30af65009e5b9ef1b3b8be6" namespace=k8s.io protocol=ttrpc version=3
Jan 20 02:35:41.902700 kubelet[3064]: E0120 02:35:41.898706 3064 projected.go:288] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 20 02:35:41.902700 kubelet[3064]: E0120 02:35:41.898866 3064 projected.go:194] Error preparing data for projected volume kube-api-access-kl2m5 for pod kube-flannel/kube-flannel-ds-6rs97: failed to sync configmap cache: timed out waiting for the condition
Jan 20 02:35:41.921047 kubelet[3064]: E0120 02:35:41.912219 3064 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9485f7be-92c0-4ab6-9307-e0e24e062643-kube-api-access-kl2m5 podName:9485f7be-92c0-4ab6-9307-e0e24e062643 nodeName:}" failed. No retries permitted until 2026-01-20 02:35:42.411992043 +0000 UTC m=+63.819071823 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kl2m5" (UniqueName: "kubernetes.io/projected/9485f7be-92c0-4ab6-9307-e0e24e062643-kube-api-access-kl2m5") pod "kube-flannel-ds-6rs97" (UID: "9485f7be-92c0-4ab6-9307-e0e24e062643") : failed to sync configmap cache: timed out waiting for the condition
Jan 20 02:35:42.714969 containerd[1593]: time="2026-01-20T02:35:42.714862085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-6rs97,Uid:9485f7be-92c0-4ab6-9307-e0e24e062643,Namespace:kube-flannel,Attempt:0,}"
Jan 20 02:35:42.914872 systemd[1]: Started cri-containerd-a3d31a0acaa6242dc599dbfc702beaf61c31b75e7407aa8cb99008bb1ec062de.scope - libcontainer container a3d31a0acaa6242dc599dbfc702beaf61c31b75e7407aa8cb99008bb1ec062de.
Jan 20 02:35:44.148236 containerd[1593]: time="2026-01-20T02:35:44.141725083Z" level=info msg="connecting to shim fcb728ddf4b6274237750c4737d17bce6ce0c7fe0fbdacd7e3d15f330acee882" address="unix:///run/containerd/s/92016412aaa5eb95ce75426c48ec42e20cf696fcc82bb6969bc1b22b0449df3a" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:35:45.321593 containerd[1593]: time="2026-01-20T02:35:45.262936243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-75vsk,Uid:25743284-d3aa-46da-9c04-cda43fadd513,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3d31a0acaa6242dc599dbfc702beaf61c31b75e7407aa8cb99008bb1ec062de\"" Jan 20 02:35:47.834671 kubelet[3064]: E0120 02:35:47.834256 3064 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.52s" Jan 20 02:35:48.598254 containerd[1593]: time="2026-01-20T02:35:48.589716386Z" level=info msg="CreateContainer within sandbox \"a3d31a0acaa6242dc599dbfc702beaf61c31b75e7407aa8cb99008bb1ec062de\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 02:35:48.965832 systemd[1]: Started cri-containerd-fcb728ddf4b6274237750c4737d17bce6ce0c7fe0fbdacd7e3d15f330acee882.scope - libcontainer container fcb728ddf4b6274237750c4737d17bce6ce0c7fe0fbdacd7e3d15f330acee882. Jan 20 02:35:50.154853 kubelet[3064]: E0120 02:35:50.041913 3064 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.632s" Jan 20 02:35:51.477185 containerd[1593]: time="2026-01-20T02:35:51.461798347Z" level=info msg="Container 369386a52b6265708c52b843e1820be269ddc3df559471ae7ba1ac7a5eb39a8e: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:35:51.894174 containerd[1593]: time="2026-01-20T02:35:51.883678616Z" level=info msg="CreateContainer within sandbox \"a3d31a0acaa6242dc599dbfc702beaf61c31b75e7407aa8cb99008bb1ec062de\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"369386a52b6265708c52b843e1820be269ddc3df559471ae7ba1ac7a5eb39a8e\"" Jan 20 02:35:51.987726 containerd[1593]: time="2026-01-20T02:35:51.987662591Z" level=info msg="StartContainer for \"369386a52b6265708c52b843e1820be269ddc3df559471ae7ba1ac7a5eb39a8e\"" Jan 20 02:35:52.057682 containerd[1593]: time="2026-01-20T02:35:52.057618687Z" level=info msg="connecting to shim 369386a52b6265708c52b843e1820be269ddc3df559471ae7ba1ac7a5eb39a8e" address="unix:///run/containerd/s/abed0357823b0ea74d2d8696cd918bc97a68d8ecd30af65009e5b9ef1b3b8be6" protocol=ttrpc version=3 Jan 20 02:35:52.361261 containerd[1593]: time="2026-01-20T02:35:52.350314150Z" level=error msg="get state for fcb728ddf4b6274237750c4737d17bce6ce0c7fe0fbdacd7e3d15f330acee882" error="context deadline exceeded" Jan 20 02:35:52.361261 containerd[1593]: time="2026-01-20T02:35:52.354194576Z" level=warning msg="unknown status" status=0 Jan 20 02:35:53.423101 systemd[1]: Started cri-containerd-369386a52b6265708c52b843e1820be269ddc3df559471ae7ba1ac7a5eb39a8e.scope - libcontainer container 369386a52b6265708c52b843e1820be269ddc3df559471ae7ba1ac7a5eb39a8e. 
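The "get state ... context deadline exceeded" / "unknown status" pair above is containerd's shim status query timing out while the sandbox is still coming up; the mechanics reduce to a bounded context racing a slow call. A self-contained sketch of that failure mode:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	// A status RPC guarded by a deadline, as in the shim "get state" calls
	// above: when the runtime answers too slowly, the caller sees
	// "context deadline exceeded" and falls back to an unknown status.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	done := make(chan struct{})
	go func() { // stand-in for a slow task-state query
		time.Sleep(5 * time.Second)
		close(done)
	}()

	select {
	case <-done:
		fmt.Println("state fetched")
	case <-ctx.Done():
		fmt.Println(ctx.Err()) // context deadline exceeded
	}
}
```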
Jan 20 02:35:53.757342 containerd[1593]: time="2026-01-20T02:35:53.734688721Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 02:35:56.804315 containerd[1593]: time="2026-01-20T02:35:56.788349320Z" level=error msg="get state for 369386a52b6265708c52b843e1820be269ddc3df559471ae7ba1ac7a5eb39a8e" error="context deadline exceeded" Jan 20 02:35:56.940105 containerd[1593]: time="2026-01-20T02:35:56.940036351Z" level=warning msg="unknown status" status=0 Jan 20 02:35:57.441722 containerd[1593]: time="2026-01-20T02:35:57.434810259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-6rs97,Uid:9485f7be-92c0-4ab6-9307-e0e24e062643,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"fcb728ddf4b6274237750c4737d17bce6ce0c7fe0fbdacd7e3d15f330acee882\"" Jan 20 02:35:57.824215 containerd[1593]: time="2026-01-20T02:35:57.819831784Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 20 02:35:58.214074 containerd[1593]: time="2026-01-20T02:35:58.195996329Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 02:35:59.619723 containerd[1593]: time="2026-01-20T02:35:59.600304357Z" level=info msg="StartContainer for \"369386a52b6265708c52b843e1820be269ddc3df559471ae7ba1ac7a5eb39a8e\" returns successfully" Jan 20 02:35:59.908358 kubelet[3064]: I0120 02:35:59.896067 3064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-75vsk" podStartSLOduration=21.896040452 podStartE2EDuration="21.896040452s" podCreationTimestamp="2026-01-20 02:35:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:35:59.894343663 +0000 UTC m=+81.301423471" watchObservedRunningTime="2026-01-20 02:35:59.896040452 +0000 UTC m=+81.303120232" Jan 20 02:36:01.881051 kubelet[3064]: E0120 02:36:01.881000 3064 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice/cri-containerd-88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3.scope: task 88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3 not found: not found Jan 20 02:36:09.287336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1137019357.mount: Deactivated successfully. 
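The pod_startup_latency_tracker entry reports podStartE2EDuration=21.896040452s, which is simply watchObservedRunningTime minus podCreationTimestamp. Recomputing it from the timestamps printed in that entry (Go accepts the fractional seconds when parsing even though the layout omits them):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the kube-proxy-75vsk startup-latency entry.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-20 02:35:38 +0000 UTC")
	observed, _ := time.Parse(layout, "2026-01-20 02:35:59.896040452 +0000 UTC")
	fmt.Println(observed.Sub(created)) // 21.896040452s
}
```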
Jan 20 02:36:12.876223 containerd[1593]: time="2026-01-20T02:36:12.786209749Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:36:12.997597 containerd[1593]: time="2026-01-20T02:36:12.984053971Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Jan 20 02:36:13.064170 containerd[1593]: time="2026-01-20T02:36:13.042311010Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:36:13.295386 containerd[1593]: time="2026-01-20T02:36:13.288705638Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:36:13.391610 containerd[1593]: time="2026-01-20T02:36:13.315897905Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 15.478283487s" Jan 20 02:36:13.391610 containerd[1593]: time="2026-01-20T02:36:13.315967232Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 20 02:36:13.918818 containerd[1593]: time="2026-01-20T02:36:13.915848898Z" level=info msg="CreateContainer within sandbox \"fcb728ddf4b6274237750c4737d17bce6ce0c7fe0fbdacd7e3d15f330acee882\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 20 02:36:14.412526 containerd[1593]: time="2026-01-20T02:36:14.409264322Z" level=info msg="Container bf7892c8db41649da9f82fd6ed49913aad424294d97acab283ba8e5f88a3b6e3: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:36:14.542126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2446800501.mount: Deactivated successfully. Jan 20 02:36:14.826366 containerd[1593]: time="2026-01-20T02:36:14.826205628Z" level=info msg="CreateContainer within sandbox \"fcb728ddf4b6274237750c4737d17bce6ce0c7fe0fbdacd7e3d15f330acee882\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"bf7892c8db41649da9f82fd6ed49913aad424294d97acab283ba8e5f88a3b6e3\"" Jan 20 02:36:14.836246 containerd[1593]: time="2026-01-20T02:36:14.836205519Z" level=info msg="StartContainer for \"bf7892c8db41649da9f82fd6ed49913aad424294d97acab283ba8e5f88a3b6e3\"" Jan 20 02:36:14.980166 containerd[1593]: time="2026-01-20T02:36:14.979384484Z" level=info msg="connecting to shim bf7892c8db41649da9f82fd6ed49913aad424294d97acab283ba8e5f88a3b6e3" address="unix:///run/containerd/s/92016412aaa5eb95ce75426c48ec42e20cf696fcc82bb6969bc1b22b0449df3a" protocol=ttrpc version=3 Jan 20 02:36:16.414923 systemd[1]: Started cri-containerd-bf7892c8db41649da9f82fd6ed49913aad424294d97acab283ba8e5f88a3b6e3.scope - libcontainer container bf7892c8db41649da9f82fd6ed49913aad424294d97acab283ba8e5f88a3b6e3. Jan 20 02:36:17.668990 systemd[1]: cri-containerd-bf7892c8db41649da9f82fd6ed49913aad424294d97acab283ba8e5f88a3b6e3.scope: Deactivated successfully. 
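The pull records above name the same image three ways: repo tag (name:tag), repo digest (name@sha256:...), and the content-addressed image id. A small helper that splits such references, using only the syntax visible in the log; this is an illustration, not containerd's reference parser:

```go
package main

import (
	"fmt"
	"strings"
)

// splitRef separates the reference forms seen in the pull records above.
func splitRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		repo, digest = ref[:i], ref[i+1:]
		return
	}
	// Treat a trailing ":..." with no "/" after it as a tag, so a registry
	// port such as "localhost:5000/img" is not mistaken for one.
	if i := strings.LastIndex(ref, ":"); i >= 0 && !strings.Contains(ref[i:], "/") {
		repo, tag = ref[:i], ref[i+1:]
		return
	}
	repo = ref
	return
}

func main() {
	fmt.Println(splitRef("docker.io/flannel/flannel-cni-plugin:v1.1.2"))
	fmt.Println(splitRef("docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443"))
}
```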
Jan 20 02:36:17.818869 containerd[1593]: time="2026-01-20T02:36:17.818807238Z" level=info msg="received container exit event container_id:\"bf7892c8db41649da9f82fd6ed49913aad424294d97acab283ba8e5f88a3b6e3\" id:\"bf7892c8db41649da9f82fd6ed49913aad424294d97acab283ba8e5f88a3b6e3\" pid:3392 exited_at:{seconds:1768876577 nanos:806679838}" Jan 20 02:36:17.842087 containerd[1593]: time="2026-01-20T02:36:17.839397059Z" level=info msg="StartContainer for \"bf7892c8db41649da9f82fd6ed49913aad424294d97acab283ba8e5f88a3b6e3\" returns successfully" Jan 20 02:36:18.785055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf7892c8db41649da9f82fd6ed49913aad424294d97acab283ba8e5f88a3b6e3-rootfs.mount: Deactivated successfully. Jan 20 02:36:19.335118 containerd[1593]: time="2026-01-20T02:36:19.335070746Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 20 02:36:25.098125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2913262622.mount: Deactivated successfully. Jan 20 02:36:37.786802 kubelet[3064]: E0120 02:36:37.770198 3064 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.255s" Jan 20 02:36:40.107249 kubelet[3064]: E0120 02:36:40.091807 3064 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.321s" Jan 20 02:36:41.699126 kubelet[3064]: E0120 02:36:41.699021 3064 kubelet_node_status.go:460] "Node not becoming ready in time after startup" Jan 20 02:36:42.793843 kubelet[3064]: E0120 02:36:42.793747 3064 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:36:47.817190 kubelet[3064]: E0120 02:36:47.809842 3064 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:36:58.586606 kubelet[3064]: E0120 02:36:58.561127 3064 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:36:58.900694 kubelet[3064]: E0120 02:36:58.898580 3064 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.587s" Jan 20 02:37:00.699957 containerd[1593]: time="2026-01-20T02:37:00.698902510Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:37:00.724789 containerd[1593]: time="2026-01-20T02:37:00.722751382Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866357" Jan 20 02:37:00.738342 containerd[1593]: time="2026-01-20T02:37:00.737730991Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:37:01.109303 containerd[1593]: time="2026-01-20T02:37:01.109095260Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:37:01.199086 containerd[1593]: time="2026-01-20T02:37:01.198616518Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id 
\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 41.862538652s" Jan 20 02:37:01.199086 containerd[1593]: time="2026-01-20T02:37:01.198679976Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 20 02:37:01.321106 containerd[1593]: time="2026-01-20T02:37:01.321049138Z" level=info msg="CreateContainer within sandbox \"fcb728ddf4b6274237750c4737d17bce6ce0c7fe0fbdacd7e3d15f330acee882\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 02:37:01.680941 kubelet[3064]: E0120 02:37:01.680898 3064 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice/cri-containerd-88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3.scope: task 88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3 not found: not found Jan 20 02:37:01.732561 containerd[1593]: time="2026-01-20T02:37:01.726791986Z" level=info msg="Container d5ae3887f02dee592ab6bd115e3c46f5b67578a78f28c1518b26a0f8f977b69a: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:37:01.788067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3113605989.mount: Deactivated successfully. Jan 20 02:37:01.895710 containerd[1593]: time="2026-01-20T02:37:01.895661098Z" level=info msg="CreateContainer within sandbox \"fcb728ddf4b6274237750c4737d17bce6ce0c7fe0fbdacd7e3d15f330acee882\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d5ae3887f02dee592ab6bd115e3c46f5b67578a78f28c1518b26a0f8f977b69a\"" Jan 20 02:37:01.917662 containerd[1593]: time="2026-01-20T02:37:01.917374638Z" level=info msg="StartContainer for \"d5ae3887f02dee592ab6bd115e3c46f5b67578a78f28c1518b26a0f8f977b69a\"" Jan 20 02:37:01.953904 containerd[1593]: time="2026-01-20T02:37:01.953297135Z" level=info msg="connecting to shim d5ae3887f02dee592ab6bd115e3c46f5b67578a78f28c1518b26a0f8f977b69a" address="unix:///run/containerd/s/92016412aaa5eb95ce75426c48ec42e20cf696fcc82bb6969bc1b22b0449df3a" protocol=ttrpc version=3 Jan 20 02:37:02.775379 systemd[1]: Started cri-containerd-d5ae3887f02dee592ab6bd115e3c46f5b67578a78f28c1518b26a0f8f977b69a.scope - libcontainer container d5ae3887f02dee592ab6bd115e3c46f5b67578a78f28c1518b26a0f8f977b69a. Jan 20 02:37:03.590127 kubelet[3064]: E0120 02:37:03.590073 3064 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:37:03.768889 systemd[1]: cri-containerd-d5ae3887f02dee592ab6bd115e3c46f5b67578a78f28c1518b26a0f8f977b69a.scope: Deactivated successfully. 
Jan 20 02:37:03.804337 containerd[1593]: time="2026-01-20T02:37:03.804088680Z" level=info msg="received container exit event container_id:\"d5ae3887f02dee592ab6bd115e3c46f5b67578a78f28c1518b26a0f8f977b69a\" id:\"d5ae3887f02dee592ab6bd115e3c46f5b67578a78f28c1518b26a0f8f977b69a\" pid:3532 exited_at:{seconds:1768876623 nanos:781987281}" Jan 20 02:37:03.814042 containerd[1593]: time="2026-01-20T02:37:03.814000925Z" level=info msg="StartContainer for \"d5ae3887f02dee592ab6bd115e3c46f5b67578a78f28c1518b26a0f8f977b69a\" returns successfully" Jan 20 02:37:04.640741 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5ae3887f02dee592ab6bd115e3c46f5b67578a78f28c1518b26a0f8f977b69a-rootfs.mount: Deactivated successfully. Jan 20 02:37:06.738053 containerd[1593]: time="2026-01-20T02:37:06.737992264Z" level=info msg="CreateContainer within sandbox \"fcb728ddf4b6274237750c4737d17bce6ce0c7fe0fbdacd7e3d15f330acee882\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 20 02:37:07.301916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount567689572.mount: Deactivated successfully. Jan 20 02:37:07.340747 containerd[1593]: time="2026-01-20T02:37:07.340208494Z" level=info msg="Container e7f31c0b0c7fb45a703ac21091fa871ee040f636f5a843d7191826197811a61f: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:37:07.603750 containerd[1593]: time="2026-01-20T02:37:07.594279571Z" level=info msg="CreateContainer within sandbox \"fcb728ddf4b6274237750c4737d17bce6ce0c7fe0fbdacd7e3d15f330acee882\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"e7f31c0b0c7fb45a703ac21091fa871ee040f636f5a843d7191826197811a61f\"" Jan 20 02:37:07.628384 containerd[1593]: time="2026-01-20T02:37:07.623064718Z" level=info msg="StartContainer for \"e7f31c0b0c7fb45a703ac21091fa871ee040f636f5a843d7191826197811a61f\"" Jan 20 02:37:07.689796 containerd[1593]: time="2026-01-20T02:37:07.672278225Z" level=info msg="connecting to shim e7f31c0b0c7fb45a703ac21091fa871ee040f636f5a843d7191826197811a61f" address="unix:///run/containerd/s/92016412aaa5eb95ce75426c48ec42e20cf696fcc82bb6969bc1b22b0449df3a" protocol=ttrpc version=3 Jan 20 02:37:08.196776 systemd[1]: Started cri-containerd-e7f31c0b0c7fb45a703ac21091fa871ee040f636f5a843d7191826197811a61f.scope - libcontainer container e7f31c0b0c7fb45a703ac21091fa871ee040f636f5a843d7191826197811a61f. 
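The exit events carry exited_at as a raw {seconds, nanos} pair. Converting the pair from the d5ae3887... event above back to wall-clock time shows it lines up with the surrounding journal timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at from the d5ae3887... container exit event above.
	t := time.Unix(1768876623, 781987281).UTC()
	fmt.Println(t) // 2026-01-20 02:37:03.781987281 +0000 UTC
}
```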
Jan 20 02:37:09.270393 containerd[1593]: time="2026-01-20T02:37:09.251324874Z" level=info msg="StartContainer for \"e7f31c0b0c7fb45a703ac21091fa871ee040f636f5a843d7191826197811a61f\" returns successfully" Jan 20 02:37:11.158591 kubelet[3064]: I0120 02:37:11.153681 3064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-6rs97" podStartSLOduration=29.533833939 podStartE2EDuration="1m33.153653625s" podCreationTimestamp="2026-01-20 02:35:38 +0000 UTC" firstStartedPulling="2026-01-20 02:35:57.664163028 +0000 UTC m=+79.071242807" lastFinishedPulling="2026-01-20 02:37:01.283982725 +0000 UTC m=+142.691062493" observedRunningTime="2026-01-20 02:37:10.189786709 +0000 UTC m=+151.596866508" watchObservedRunningTime="2026-01-20 02:37:11.153653625 +0000 UTC m=+152.560733403" Jan 20 02:37:11.226701 kubelet[3064]: I0120 02:37:11.224282 3064 status_manager.go:890] "Failed to get status for pod" podUID="83dccab3-cd88-4bc5-b392-39fb4366881b" pod="kube-system/coredns-668d6bf9bc-2hpvp" err="pods \"coredns-668d6bf9bc-2hpvp\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Jan 20 02:37:11.245634 kubelet[3064]: W0120 02:37:11.236200 3064 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 20 02:37:11.245634 kubelet[3064]: E0120 02:37:11.236366 3064 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 20 02:37:11.263703 kubelet[3064]: I0120 02:37:11.263665 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83dccab3-cd88-4bc5-b392-39fb4366881b-config-volume\") pod \"coredns-668d6bf9bc-2hpvp\" (UID: \"83dccab3-cd88-4bc5-b392-39fb4366881b\") " pod="kube-system/coredns-668d6bf9bc-2hpvp" Jan 20 02:37:11.273206 kubelet[3064]: I0120 02:37:11.264396 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1fc1c543-c636-462a-bf34-bcc6ee8e350e-config-volume\") pod \"coredns-668d6bf9bc-ljpvx\" (UID: \"1fc1c543-c636-462a-bf34-bcc6ee8e350e\") " pod="kube-system/coredns-668d6bf9bc-ljpvx" Jan 20 02:37:11.320215 kubelet[3064]: I0120 02:37:11.314386 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhh5z\" (UniqueName: \"kubernetes.io/projected/1fc1c543-c636-462a-bf34-bcc6ee8e350e-kube-api-access-dhh5z\") pod \"coredns-668d6bf9bc-ljpvx\" (UID: \"1fc1c543-c636-462a-bf34-bcc6ee8e350e\") " pod="kube-system/coredns-668d6bf9bc-ljpvx" Jan 20 02:37:11.320215 kubelet[3064]: I0120 02:37:11.315880 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjp5j\" (UniqueName: \"kubernetes.io/projected/83dccab3-cd88-4bc5-b392-39fb4366881b-kube-api-access-sjp5j\") 
pod \"coredns-668d6bf9bc-2hpvp\" (UID: \"83dccab3-cd88-4bc5-b392-39fb4366881b\") " pod="kube-system/coredns-668d6bf9bc-2hpvp" Jan 20 02:37:11.333636 systemd[1]: Created slice kubepods-burstable-pod83dccab3_cd88_4bc5_b392_39fb4366881b.slice - libcontainer container kubepods-burstable-pod83dccab3_cd88_4bc5_b392_39fb4366881b.slice. Jan 20 02:37:11.387725 systemd[1]: Created slice kubepods-burstable-pod1fc1c543_c636_462a_bf34_bcc6ee8e350e.slice - libcontainer container kubepods-burstable-pod1fc1c543_c636_462a_bf34_bcc6ee8e350e.slice. Jan 20 02:37:12.415391 systemd-networkd[1509]: flannel.1: Link UP Jan 20 02:37:12.415636 systemd-networkd[1509]: flannel.1: Gained carrier Jan 20 02:37:12.591966 containerd[1593]: time="2026-01-20T02:37:12.591780640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2hpvp,Uid:83dccab3-cd88-4bc5-b392-39fb4366881b,Namespace:kube-system,Attempt:0,}" Jan 20 02:37:12.658793 containerd[1593]: time="2026-01-20T02:37:12.654578108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ljpvx,Uid:1fc1c543-c636-462a-bf34-bcc6ee8e350e,Namespace:kube-system,Attempt:0,}" Jan 20 02:37:14.230799 systemd-networkd[1509]: flannel.1: Gained IPv6LL Jan 20 02:37:14.704801 containerd[1593]: time="2026-01-20T02:37:14.626858991Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2hpvp,Uid:83dccab3-cd88-4bc5-b392-39fb4366881b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"960eeb7ca44c621e2883746891e3512d46b401b6123fa5501c0887ca7a79564c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 02:37:14.681901 systemd[1]: run-netns-cni\x2d7d89868a\x2da3f4\x2dd6c7\x2dbe52\x2d8b644932c1a8.mount: Deactivated successfully. 
Jan 20 02:37:14.806803 kubelet[3064]: E0120 02:37:14.715962 3064 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"960eeb7ca44c621e2883746891e3512d46b401b6123fa5501c0887ca7a79564c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 02:37:14.806803 kubelet[3064]: E0120 02:37:14.753715 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"960eeb7ca44c621e2883746891e3512d46b401b6123fa5501c0887ca7a79564c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-2hpvp" Jan 20 02:37:14.806803 kubelet[3064]: E0120 02:37:14.753874 3064 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"960eeb7ca44c621e2883746891e3512d46b401b6123fa5501c0887ca7a79564c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-2hpvp" Jan 20 02:37:14.806803 kubelet[3064]: E0120 02:37:14.760728 3064 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2hpvp_kube-system(83dccab3-cd88-4bc5-b392-39fb4366881b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2hpvp_kube-system(83dccab3-cd88-4bc5-b392-39fb4366881b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"960eeb7ca44c621e2883746891e3512d46b401b6123fa5501c0887ca7a79564c\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-2hpvp" podUID="83dccab3-cd88-4bc5-b392-39fb4366881b" Jan 20 02:37:15.130353 containerd[1593]: time="2026-01-20T02:37:15.126326858Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ljpvx,Uid:1fc1c543-c636-462a-bf34-bcc6ee8e350e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3f07c1df71c3802d3b71dd1170b007637ed8a86e0cbb8e02e3c90984fb1ac47\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 02:37:15.127868 systemd[1]: run-netns-cni\x2d8092fc4c\x2d0158\x2df770\x2d5026\x2d1193ea69452b.mount: Deactivated successfully. 
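Both coredns sandboxes fail the same way: the flannel CNI plugin reads /run/flannel/subnet.env, and flanneld only writes that file after it has leased a subnet, so sandbox attempts made before the lease lose the race and are retried. A minimal reader for that env-style file; the FLANNEL_* keys are flannel's usual convention and are assumed here, not shown in this log:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// loadSubnetEnv reads KEY=VALUE pairs from the file whose absence produced
// the "loadFlannelSubnetEnv failed" errors above.
func loadSubnetEnv(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err // the "no such file or directory" case in the log
	}
	defer f.Close()

	env := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), "="); ok {
			env[k] = v // e.g. FLANNEL_SUBNET=192.168.0.1/24 (assumed key)
		}
	}
	return env, sc.Err()
}

func main() {
	env, err := loadSubnetEnv("/run/flannel/subnet.env")
	fmt.Println(env, err)
}
```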
Jan 20 02:37:15.138729 kubelet[3064]: E0120 02:37:15.136744 3064 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3f07c1df71c3802d3b71dd1170b007637ed8a86e0cbb8e02e3c90984fb1ac47\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 02:37:15.151375 kubelet[3064]: E0120 02:37:15.139023 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3f07c1df71c3802d3b71dd1170b007637ed8a86e0cbb8e02e3c90984fb1ac47\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-ljpvx" Jan 20 02:37:15.151801 kubelet[3064]: E0120 02:37:15.151732 3064 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3f07c1df71c3802d3b71dd1170b007637ed8a86e0cbb8e02e3c90984fb1ac47\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-ljpvx" Jan 20 02:37:15.191528 kubelet[3064]: E0120 02:37:15.190993 3064 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ljpvx_kube-system(1fc1c543-c636-462a-bf34-bcc6ee8e350e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ljpvx_kube-system(1fc1c543-c636-462a-bf34-bcc6ee8e350e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3f07c1df71c3802d3b71dd1170b007637ed8a86e0cbb8e02e3c90984fb1ac47\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-ljpvx" podUID="1fc1c543-c636-462a-bf34-bcc6ee8e350e" Jan 20 02:37:28.331622 containerd[1593]: time="2026-01-20T02:37:28.330924369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2hpvp,Uid:83dccab3-cd88-4bc5-b392-39fb4366881b,Namespace:kube-system,Attempt:0,}" Jan 20 02:37:29.277938 systemd-networkd[1509]: cni0: Link UP Jan 20 02:37:29.278069 systemd-networkd[1509]: cni0: Gained carrier Jan 20 02:37:29.364613 systemd-networkd[1509]: cni0: Lost carrier Jan 20 02:37:29.889680 systemd-networkd[1509]: veth42b57c1f: Link UP Jan 20 02:37:30.064313 kernel: cni0: port 1(veth42b57c1f) entered blocking state Jan 20 02:37:30.064664 kernel: cni0: port 1(veth42b57c1f) entered disabled state Jan 20 02:37:30.155600 kernel: veth42b57c1f: entered allmulticast mode Jan 20 02:37:30.202807 kernel: veth42b57c1f: entered promiscuous mode Jan 20 02:37:30.349291 containerd[1593]: time="2026-01-20T02:37:30.348237585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ljpvx,Uid:1fc1c543-c636-462a-bf34-bcc6ee8e350e,Namespace:kube-system,Attempt:0,}" Jan 20 02:37:31.007733 kernel: cni0: port 1(veth42b57c1f) entered blocking state Jan 20 02:37:31.007861 kernel: cni0: port 1(veth42b57c1f) entered forwarding state Jan 20 02:37:31.008285 systemd-networkd[1509]: veth42b57c1f: Gained carrier Jan 20 02:37:31.023187 systemd-networkd[1509]: cni0: Gained carrier Jan 20 02:37:31.226719 systemd-networkd[1509]: veth982f6b2e: Link UP Jan 20 02:37:31.269798 systemd-networkd[1509]: cni0: Gained IPv6LL Jan 20 02:37:31.329268 kernel: cni0: port 
2(veth982f6b2e) entered blocking state Jan 20 02:37:31.329389 kernel: cni0: port 2(veth982f6b2e) entered disabled state Jan 20 02:37:31.329691 kernel: veth982f6b2e: entered allmulticast mode Jan 20 02:37:31.329728 kernel: veth982f6b2e: entered promiscuous mode Jan 20 02:37:32.027094 kernel: cni0: port 2(veth982f6b2e) entered blocking state Jan 20 02:37:32.027220 kernel: cni0: port 2(veth982f6b2e) entered forwarding state Jan 20 02:37:32.029282 systemd-networkd[1509]: veth982f6b2e: Gained carrier Jan 20 02:37:32.047725 containerd[1593]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001e938), "name":"cbr0", "type":"bridge"} Jan 20 02:37:32.047725 containerd[1593]: delegateAdd: netconf sent to delegate plugin: Jan 20 02:37:32.097763 containerd[1593]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} Jan 20 02:37:32.097763 containerd[1593]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00011c8e8), "name":"cbr0", "type":"bridge"} Jan 20 02:37:32.097763 containerd[1593]: delegateAdd: netconf sent to delegate plugin: Jan 20 02:37:32.895785 systemd-networkd[1509]: veth42b57c1f: Gained IPv6LL Jan 20 02:37:33.105768 systemd-networkd[1509]: veth982f6b2e: Gained IPv6LL Jan 20 02:37:33.609061 containerd[1593]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-20T02:37:33.604389286Z" level=info msg="connecting to shim e9172f0e3e4429035e1bae6ba4e86d796f8064643cb0d24e59b3b35900defb04" address="unix:///run/containerd/s/7e1a7b95abc308f2213fe4063561e2413f19003b463271daa2274f40aef87977" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:37:34.515731 containerd[1593]: time="2026-01-20T02:37:34.515661126Z" level=info msg="connecting to shim f65c4528b50340a5dcf7892e7d70db2234d44c5e98917d70f97d8565f3610dc1" address="unix:///run/containerd/s/dce0d1dbb78ea5a3adf728d5acc804f82da4571d8e2610c26c594a7e325b1c71" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:37:35.582802 systemd[1]: Started cri-containerd-e9172f0e3e4429035e1bae6ba4e86d796f8064643cb0d24e59b3b35900defb04.scope - libcontainer container e9172f0e3e4429035e1bae6ba4e86d796f8064643cb0d24e59b3b35900defb04. 
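The two dumps above are flannel's delegateAdd step: the Go map is the in-memory netconf and the JSON line is what it hands to the bridge plugin. The pieces are consistent: Mask{0xff, 0xff, 0x80, 0x0} in the map dump is the /17 route in the JSON, and the *uint mtu pointer resolves to 1450. A sketch decoding that exact JSON into a struct, with the field set trimmed to what the log shows:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// netConf mirrors the delegate config printed in the log; fields not shown
// there are omitted.
type netConf struct {
	CNIVersion       string          `json:"cniVersion"`
	Name             string          `json:"name"`
	Type             string          `json:"type"`
	MTU              uint            `json:"mtu"`
	HairpinMode      bool            `json:"hairpinMode"`
	IPMasq           bool            `json:"ipMasq"`
	IsGateway        bool            `json:"isGateway"`
	IsDefaultGateway bool            `json:"isDefaultGateway"`
	IPAM             json.RawMessage `json:"ipam"`
}

func main() {
	// Verbatim netconf from the delegateAdd lines above.
	raw := `{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}`
	var c netConf
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s/%s mtu=%d ipam=%s\n", c.Name, c.Type, c.MTU, c.IPAM)
}
```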
Jan 20 02:37:36.859870 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:37:36.953710 systemd[1]: Started cri-containerd-f65c4528b50340a5dcf7892e7d70db2234d44c5e98917d70f97d8565f3610dc1.scope - libcontainer container f65c4528b50340a5dcf7892e7d70db2234d44c5e98917d70f97d8565f3610dc1. Jan 20 02:37:37.165611 containerd[1593]: time="2026-01-20T02:37:37.123338247Z" level=error msg="get state for e9172f0e3e4429035e1bae6ba4e86d796f8064643cb0d24e59b3b35900defb04" error="context deadline exceeded" Jan 20 02:37:37.165611 containerd[1593]: time="2026-01-20T02:37:37.149897298Z" level=warning msg="unknown status" status=0 Jan 20 02:37:37.886803 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:37:38.083710 containerd[1593]: time="2026-01-20T02:37:38.083625242Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 02:37:39.019261 containerd[1593]: time="2026-01-20T02:37:39.019213237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2hpvp,Uid:83dccab3-cd88-4bc5-b392-39fb4366881b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9172f0e3e4429035e1bae6ba4e86d796f8064643cb0d24e59b3b35900defb04\"" Jan 20 02:37:39.491611 containerd[1593]: time="2026-01-20T02:37:39.491309176Z" level=info msg="CreateContainer within sandbox \"e9172f0e3e4429035e1bae6ba4e86d796f8064643cb0d24e59b3b35900defb04\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 02:37:39.501609 containerd[1593]: time="2026-01-20T02:37:39.501312283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ljpvx,Uid:1fc1c543-c636-462a-bf34-bcc6ee8e350e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f65c4528b50340a5dcf7892e7d70db2234d44c5e98917d70f97d8565f3610dc1\"" Jan 20 02:37:39.695869 containerd[1593]: time="2026-01-20T02:37:39.695818610Z" level=info msg="CreateContainer within sandbox \"f65c4528b50340a5dcf7892e7d70db2234d44c5e98917d70f97d8565f3610dc1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 02:37:39.943717 containerd[1593]: time="2026-01-20T02:37:39.903157077Z" level=info msg="Container c36b9c423a918cc215a0719f16bf6a9224cdca0ab5f85135cd6313cf4b9ee450: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:37:39.914760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3958685353.mount: Deactivated successfully. 
Jan 20 02:37:40.310890 containerd[1593]: time="2026-01-20T02:37:40.310822899Z" level=info msg="CreateContainer within sandbox \"e9172f0e3e4429035e1bae6ba4e86d796f8064643cb0d24e59b3b35900defb04\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c36b9c423a918cc215a0719f16bf6a9224cdca0ab5f85135cd6313cf4b9ee450\"" Jan 20 02:37:40.502298 containerd[1593]: time="2026-01-20T02:37:40.500121830Z" level=info msg="StartContainer for \"c36b9c423a918cc215a0719f16bf6a9224cdca0ab5f85135cd6313cf4b9ee450\"" Jan 20 02:37:40.678344 containerd[1593]: time="2026-01-20T02:37:40.656715178Z" level=info msg="connecting to shim c36b9c423a918cc215a0719f16bf6a9224cdca0ab5f85135cd6313cf4b9ee450" address="unix:///run/containerd/s/7e1a7b95abc308f2213fe4063561e2413f19003b463271daa2274f40aef87977" protocol=ttrpc version=3 Jan 20 02:37:41.041714 containerd[1593]: time="2026-01-20T02:37:41.041643928Z" level=info msg="Container 86c660218f8f3bb4f69ae388ab95794f9fa9f8c7d98cda63deeecad743706837: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:37:41.078062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount370547704.mount: Deactivated successfully. Jan 20 02:37:41.582160 kubelet[3064]: E0120 02:37:41.488383 3064 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.153s" Jan 20 02:37:41.766095 containerd[1593]: time="2026-01-20T02:37:41.765898340Z" level=info msg="CreateContainer within sandbox \"f65c4528b50340a5dcf7892e7d70db2234d44c5e98917d70f97d8565f3610dc1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"86c660218f8f3bb4f69ae388ab95794f9fa9f8c7d98cda63deeecad743706837\"" Jan 20 02:37:41.797354 containerd[1593]: time="2026-01-20T02:37:41.790803740Z" level=info msg="StartContainer for \"86c660218f8f3bb4f69ae388ab95794f9fa9f8c7d98cda63deeecad743706837\"" Jan 20 02:37:41.908723 containerd[1593]: time="2026-01-20T02:37:41.908168993Z" level=info msg="connecting to shim 86c660218f8f3bb4f69ae388ab95794f9fa9f8c7d98cda63deeecad743706837" address="unix:///run/containerd/s/dce0d1dbb78ea5a3adf728d5acc804f82da4571d8e2610c26c594a7e325b1c71" protocol=ttrpc version=3 Jan 20 02:37:42.400766 systemd[1]: Started cri-containerd-c36b9c423a918cc215a0719f16bf6a9224cdca0ab5f85135cd6313cf4b9ee450.scope - libcontainer container c36b9c423a918cc215a0719f16bf6a9224cdca0ab5f85135cd6313cf4b9ee450. Jan 20 02:37:42.663386 systemd[1]: Started cri-containerd-86c660218f8f3bb4f69ae388ab95794f9fa9f8c7d98cda63deeecad743706837.scope - libcontainer container 86c660218f8f3bb4f69ae388ab95794f9fa9f8c7d98cda63deeecad743706837. 
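The \x2d runs in the mount-unit names above are systemd's unit-name escaping: '/' in a path maps to '-', so literal '-' characters must be escaped as \x2d to keep the mapping reversible. A simplified re-implementation (real systemd also special-cases a leading '.'; this sketch skips that):

```go
package main

import "fmt"

// escapePath approximates systemd's path-to-unit-name escaping.
func escapePath(p string) string {
	out := ""
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			out += "-" // path separators become dashes
		case !isSafe(c):
			out += fmt.Sprintf(`\x%02x`, c) // '-' itself becomes \x2d
		default:
			out += string(c)
		}
	}
	return out
}

func isSafe(c byte) bool {
	return c >= 'a' && c <= 'z' || c >= 'A' && c <= 'Z' ||
		c >= '0' && c <= '9' || c == ':' || c == '_' || c == '.'
}

func main() {
	fmt.Println(escapePath("var/lib/containerd/tmpmounts/containerd-mount370547704"))
	// var-lib-containerd-tmpmounts-containerd\x2dmount370547704
}
```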
Jan 20 02:37:46.644079 containerd[1593]: time="2026-01-20T02:37:46.639044853Z" level=info msg="StartContainer for \"c36b9c423a918cc215a0719f16bf6a9224cdca0ab5f85135cd6313cf4b9ee450\" returns successfully" Jan 20 02:37:46.646324 containerd[1593]: time="2026-01-20T02:37:46.645620768Z" level=info msg="StartContainer for \"86c660218f8f3bb4f69ae388ab95794f9fa9f8c7d98cda63deeecad743706837\" returns successfully" Jan 20 02:37:47.105334 kubelet[3064]: I0120 02:37:47.105014 3064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ljpvx" podStartSLOduration=129.104872611 podStartE2EDuration="2m9.104872611s" podCreationTimestamp="2026-01-20 02:35:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:37:47.081229173 +0000 UTC m=+188.488308981" watchObservedRunningTime="2026-01-20 02:37:47.104872611 +0000 UTC m=+188.511952390" Jan 20 02:37:48.933380 kubelet[3064]: I0120 02:37:48.933180 3064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2hpvp" podStartSLOduration=129.933150207 podStartE2EDuration="2m9.933150207s" podCreationTimestamp="2026-01-20 02:35:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:37:47.842093841 +0000 UTC m=+189.249173641" watchObservedRunningTime="2026-01-20 02:37:48.933150207 +0000 UTC m=+190.340229996" Jan 20 02:38:09.133319 kernel: sched: DL replenish lagged too much Jan 20 02:38:18.893377 update_engine[1574]: I20260120 02:38:18.814292 1574 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 20 02:38:18.893377 update_engine[1574]: I20260120 02:38:18.884606 1574 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 20 02:38:18.893377 update_engine[1574]: I20260120 02:38:18.953191 1574 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 20 02:38:20.857567 update_engine[1574]: I20260120 02:38:18.996614 1574 omaha_request_params.cc:62] Current group set to stable Jan 20 02:38:20.857567 update_engine[1574]: I20260120 02:38:19.075147 1574 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 20 02:38:20.857567 update_engine[1574]: I20260120 02:38:19.075204 1574 update_attempter.cc:643] Scheduling an action processor start. 
Jan 20 02:38:20.857567 update_engine[1574]: I20260120 02:38:19.075623 1574 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 02:38:20.857567 update_engine[1574]: I20260120 02:38:19.079370 1574 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 20 02:38:20.857567 update_engine[1574]: I20260120 02:38:19.080040 1574 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 02:38:20.857567 update_engine[1574]: I20260120 02:38:19.080065 1574 omaha_request_action.cc:272] Request: Jan 20 02:38:20.857567 update_engine[1574]: Jan 20 02:38:20.857567 update_engine[1574]: Jan 20 02:38:20.857567 update_engine[1574]: Jan 20 02:38:20.857567 update_engine[1574]: Jan 20 02:38:20.857567 update_engine[1574]: Jan 20 02:38:20.857567 update_engine[1574]: Jan 20 02:38:20.857567 update_engine[1574]: Jan 20 02:38:20.857567 update_engine[1574]: Jan 20 02:38:20.857567 update_engine[1574]: I20260120 02:38:19.080076 1574 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:38:20.857567 update_engine[1574]: I20260120 02:38:19.465149 1574 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:38:20.857567 update_engine[1574]: I20260120 02:38:19.727741 1574 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 02:38:20.857567 update_engine[1574]: E20260120 02:38:19.775947 1574 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 02:38:20.857567 update_engine[1574]: I20260120 02:38:19.776301 1574 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 20 02:38:21.067287 systemd[1]: cri-containerd-fc9ad64bfc7d734c41f45bdbed34210b3a5a0891e7db16d5755e85f588d7cdf5.scope: Deactivated successfully. Jan 20 02:38:21.093153 systemd[1]: cri-containerd-fc9ad64bfc7d734c41f45bdbed34210b3a5a0891e7db16d5755e85f588d7cdf5.scope: Consumed 12.566s CPU time, 46M memory peak. Jan 20 02:38:21.136030 systemd[1]: cri-containerd-eb9982a01a2df58d7513b9bbb5203836b6138ac629d68d9af8997330ff2e5ec8.scope: Deactivated successfully. Jan 20 02:38:21.153383 systemd[1]: cri-containerd-eb9982a01a2df58d7513b9bbb5203836b6138ac629d68d9af8997330ff2e5ec8.scope: Consumed 14.694s CPU time, 22.5M memory peak. Jan 20 02:38:21.253586 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... 
Jan 20 02:38:21.666168 containerd[1593]: time="2026-01-20T02:38:21.042129404Z" level=warning msg="container event discarded" container=31f9829743516def5653a55952b9e4870211dedc3dfc46a12b7f862ef4b3a7a3 type=CONTAINER_CREATED_EVENT Jan 20 02:38:21.666168 containerd[1593]: time="2026-01-20T02:38:21.648320665Z" level=warning msg="container event discarded" container=31f9829743516def5653a55952b9e4870211dedc3dfc46a12b7f862ef4b3a7a3 type=CONTAINER_STARTED_EVENT Jan 20 02:38:21.806553 containerd[1593]: time="2026-01-20T02:38:21.762205439Z" level=info msg="received container exit event container_id:\"fc9ad64bfc7d734c41f45bdbed34210b3a5a0891e7db16d5755e85f588d7cdf5\" id:\"fc9ad64bfc7d734c41f45bdbed34210b3a5a0891e7db16d5755e85f588d7cdf5\" pid:3130 exit_status:1 exited_at:{seconds:1768876700 nanos:514067158}" Jan 20 02:38:22.037627 containerd[1593]: time="2026-01-20T02:38:21.992679133Z" level=info msg="received container exit event container_id:\"eb9982a01a2df58d7513b9bbb5203836b6138ac629d68d9af8997330ff2e5ec8\" id:\"eb9982a01a2df58d7513b9bbb5203836b6138ac629d68d9af8997330ff2e5ec8\" pid:2886 exit_status:1 exited_at:{seconds:1768876700 nanos:574665345}" Jan 20 02:38:22.873246 systemd-tmpfiles[4126]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 02:38:22.873279 systemd-tmpfiles[4126]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 02:38:22.897580 systemd-tmpfiles[4126]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 02:38:22.931625 systemd-tmpfiles[4126]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 02:38:22.999705 systemd-tmpfiles[4126]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 02:38:23.012143 systemd-tmpfiles[4126]: ACLs are not supported, ignoring. Jan 20 02:38:23.012260 systemd-tmpfiles[4126]: ACLs are not supported, ignoring. Jan 20 02:38:23.306219 systemd-tmpfiles[4126]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 02:38:23.306239 systemd-tmpfiles[4126]: Skipping /boot Jan 20 02:38:23.478642 kubelet[3064]: E0120 02:38:23.458607 3064 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="27.534s" Jan 20 02:38:23.654349 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Jan 20 02:38:23.669367 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Jan 20 02:38:23.902150 containerd[1593]: time="2026-01-20T02:38:23.888718150Z" level=warning msg="container event discarded" container=398b9e53cfe873139de21a8277a92d40db7ece83982388d14525ab4c22b1f3d1 type=CONTAINER_CREATED_EVENT Jan 20 02:38:23.902902 kubelet[3064]: E0120 02:38:23.900109 3064 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice/cri-containerd-88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3.scope: task 88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3 not found: not found Jan 20 02:38:23.896668 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. 
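The housekeeping overruns escalate from ~1.5s early in the log to 27.534s here, which points at sustained CPU or I/O pressure rather than a one-off stall. The check behind the message is a fixed-interval loop that times each pass and complains when it exceeds the expected interval; a sketch of that pattern, with a sleep standing in for slow cleanup work:

```go
package main

import (
	"fmt"
	"time"
)

const expected = time.Second // the "expected" interval in the log messages

// doHousekeeping stands in for the kubelet's per-pod cleanup pass.
func doHousekeeping(i int) {
	if i == 1 {
		time.Sleep(1200 * time.Millisecond) // simulate one overrun
	}
}

func main() {
	for i := 0; i < 3; i++ {
		start := time.Now()
		doHousekeeping(i)
		if actual := time.Since(start); actual > expected {
			fmt.Printf("housekeeping took too long: expected=%v actual=%v\n",
				expected, actual.Round(time.Millisecond))
		}
	}
}
```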
Jan 20 02:38:24.000295 containerd[1593]: time="2026-01-20T02:38:23.958002719Z" level=warning msg="container event discarded" container=398b9e53cfe873139de21a8277a92d40db7ece83982388d14525ab4c22b1f3d1 type=CONTAINER_STARTED_EVENT Jan 20 02:38:24.000295 containerd[1593]: time="2026-01-20T02:38:23.958063982Z" level=warning msg="container event discarded" container=a7502094893993327a43b85fe17eccf60d4c4cb5aaada5cade43e8ee2601f406 type=CONTAINER_CREATED_EVENT Jan 20 02:38:24.000295 containerd[1593]: time="2026-01-20T02:38:23.958077247Z" level=warning msg="container event discarded" container=a7502094893993327a43b85fe17eccf60d4c4cb5aaada5cade43e8ee2601f406 type=CONTAINER_STARTED_EVENT Jan 20 02:38:24.000295 containerd[1593]: time="2026-01-20T02:38:23.958086173Z" level=warning msg="container event discarded" container=51a667e87d2105db138e73e9e3d07249b6f5564f913ea2ba16495df185476c3c type=CONTAINER_CREATED_EVENT Jan 20 02:38:24.000295 containerd[1593]: time="2026-01-20T02:38:23.958098897Z" level=warning msg="container event discarded" container=8bab15c3510056f383bed5e179cae6831c9037f1ab171f52061572cdd4a567d3 type=CONTAINER_CREATED_EVENT Jan 20 02:38:24.000295 containerd[1593]: time="2026-01-20T02:38:23.958108685Z" level=warning msg="container event discarded" container=eb9982a01a2df58d7513b9bbb5203836b6138ac629d68d9af8997330ff2e5ec8 type=CONTAINER_CREATED_EVENT Jan 20 02:38:24.000295 containerd[1593]: time="2026-01-20T02:38:23.958121339Z" level=warning msg="container event discarded" container=51a667e87d2105db138e73e9e3d07249b6f5564f913ea2ba16495df185476c3c type=CONTAINER_STARTED_EVENT Jan 20 02:38:24.000295 containerd[1593]: time="2026-01-20T02:38:23.958129484Z" level=warning msg="container event discarded" container=8bab15c3510056f383bed5e179cae6831c9037f1ab171f52061572cdd4a567d3 type=CONTAINER_STARTED_EVENT Jan 20 02:38:24.000295 containerd[1593]: time="2026-01-20T02:38:23.958138270Z" level=warning msg="container event discarded" container=eb9982a01a2df58d7513b9bbb5203836b6138ac629d68d9af8997330ff2e5ec8 type=CONTAINER_STARTED_EVENT Jan 20 02:38:24.052022 locksmithd[1632]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 20 02:38:25.518330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc9ad64bfc7d734c41f45bdbed34210b3a5a0891e7db16d5755e85f588d7cdf5-rootfs.mount: Deactivated successfully. Jan 20 02:38:26.275887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb9982a01a2df58d7513b9bbb5203836b6138ac629d68d9af8997330ff2e5ec8-rootfs.mount: Deactivated successfully. 
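The long runs of "container event discarded" warnings show containerd dropping events that no subscriber drained in time: a send that would block is abandoned with a warning instead of stalling the runtime. The shape of that pattern, sketched with a deliberately tiny backlog:

```go
package main

import "fmt"

type event struct{ container, typ string }

func main() {
	ch := make(chan event, 2) // bounded backlog; nobody is reading
	for i := 0; i < 5; i++ {
		ev := event{fmt.Sprintf("c%d", i), "CONTAINER_STARTED_EVENT"}
		select {
		case ch <- ev: // buffered: would be delivered later
		default: // full: drop rather than block the runtime
			fmt.Printf("container event discarded container=%s type=%s\n",
				ev.container, ev.typ)
		}
	}
}
```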
Jan 20 02:38:27.166593 kubelet[3064]: I0120 02:38:27.166289 3064 scope.go:117] "RemoveContainer" containerID="88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3" Jan 20 02:38:27.167379 kubelet[3064]: I0120 02:38:27.167059 3064 scope.go:117] "RemoveContainer" containerID="fc9ad64bfc7d734c41f45bdbed34210b3a5a0891e7db16d5755e85f588d7cdf5" Jan 20 02:38:27.167694 kubelet[3064]: E0120 02:38:27.167361 3064 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(73f4d0ebfe2f50199eb060021cc3bcbf)\"" pod="kube-system/kube-controller-manager-localhost" podUID="73f4d0ebfe2f50199eb060021cc3bcbf" Jan 20 02:38:27.247996 containerd[1593]: time="2026-01-20T02:38:27.232316401Z" level=info msg="RemoveContainer for \"88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3\"" Jan 20 02:38:27.248993 kubelet[3064]: I0120 02:38:27.239816 3064 scope.go:117] "RemoveContainer" containerID="eb9982a01a2df58d7513b9bbb5203836b6138ac629d68d9af8997330ff2e5ec8" Jan 20 02:38:27.272240 containerd[1593]: time="2026-01-20T02:38:27.251372760Z" level=info msg="CreateContainer within sandbox \"a7502094893993327a43b85fe17eccf60d4c4cb5aaada5cade43e8ee2601f406\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 20 02:38:27.390938 containerd[1593]: time="2026-01-20T02:38:27.350834966Z" level=info msg="RemoveContainer for \"88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3\" returns successfully" Jan 20 02:38:27.447395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3613717593.mount: Deactivated successfully. Jan 20 02:38:27.618280 containerd[1593]: time="2026-01-20T02:38:27.618010363Z" level=info msg="Container 3ff202166c8bde64482b5e91e22474d49141c60dab82e5196d415713af05b971: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:38:27.872007 containerd[1593]: time="2026-01-20T02:38:27.871949514Z" level=info msg="CreateContainer within sandbox \"a7502094893993327a43b85fe17eccf60d4c4cb5aaada5cade43e8ee2601f406\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3ff202166c8bde64482b5e91e22474d49141c60dab82e5196d415713af05b971\"" Jan 20 02:38:27.891184 containerd[1593]: time="2026-01-20T02:38:27.891134361Z" level=info msg="StartContainer for \"3ff202166c8bde64482b5e91e22474d49141c60dab82e5196d415713af05b971\"" Jan 20 02:38:27.923709 containerd[1593]: time="2026-01-20T02:38:27.923314034Z" level=info msg="connecting to shim 3ff202166c8bde64482b5e91e22474d49141c60dab82e5196d415713af05b971" address="unix:///run/containerd/s/cf37a2070286bfbbfcc7c63fe29a7aa6bc535ad25ea209e8ed2853964177c2fc" protocol=ttrpc version=3 Jan 20 02:38:28.403369 systemd[1]: Started cri-containerd-3ff202166c8bde64482b5e91e22474d49141c60dab82e5196d415713af05b971.scope - libcontainer container 3ff202166c8bde64482b5e91e22474d49141c60dab82e5196d415713af05b971. 
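The CrashLoopBackOff message quotes "back-off 10s": the kubelet doubles the restart delay after each consecutive failure up to a cap. The 10s start is taken from the log; the 5m cap is kubelet's commonly cited maximum, assumed here rather than shown. Sketch:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute // cap is an assumption
	for failures := 1; failures <= 7; failures++ {
		fmt.Printf("failure %d: back-off %v\n", failures, delay)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}
```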
Jan 20 02:38:29.568209 containerd[1593]: time="2026-01-20T02:38:29.559384633Z" level=info msg="StartContainer for \"3ff202166c8bde64482b5e91e22474d49141c60dab82e5196d415713af05b971\" returns successfully" Jan 20 02:38:29.600652 update_engine[1574]: I20260120 02:38:29.600569 1574 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:38:29.601390 update_engine[1574]: I20260120 02:38:29.601351 1574 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:38:29.677163 update_engine[1574]: I20260120 02:38:29.677095 1574 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 02:38:29.704336 update_engine[1574]: E20260120 02:38:29.704265 1574 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 02:38:29.713663 update_engine[1574]: I20260120 02:38:29.705688 1574 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 20 02:38:34.037260 kubelet[3064]: I0120 02:38:34.034232 3064 scope.go:117] "RemoveContainer" containerID="fc9ad64bfc7d734c41f45bdbed34210b3a5a0891e7db16d5755e85f588d7cdf5" Jan 20 02:38:34.259371 containerd[1593]: time="2026-01-20T02:38:34.241003684Z" level=info msg="CreateContainer within sandbox \"398b9e53cfe873139de21a8277a92d40db7ece83982388d14525ab4c22b1f3d1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:3,}" Jan 20 02:38:35.299978 containerd[1593]: time="2026-01-20T02:38:35.299681705Z" level=info msg="Container fcab712b81c638720e11e82a0fab172f607bf1beab106d7dbd1ae6ba309b2480: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:38:35.579671 containerd[1593]: time="2026-01-20T02:38:35.579316747Z" level=info msg="CreateContainer within sandbox \"398b9e53cfe873139de21a8277a92d40db7ece83982388d14525ab4c22b1f3d1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:3,} returns container id \"fcab712b81c638720e11e82a0fab172f607bf1beab106d7dbd1ae6ba309b2480\"" Jan 20 02:38:35.672269 containerd[1593]: time="2026-01-20T02:38:35.634241937Z" level=info msg="StartContainer for \"fcab712b81c638720e11e82a0fab172f607bf1beab106d7dbd1ae6ba309b2480\"" Jan 20 02:38:35.729361 containerd[1593]: time="2026-01-20T02:38:35.729303368Z" level=info msg="connecting to shim fcab712b81c638720e11e82a0fab172f607bf1beab106d7dbd1ae6ba309b2480" address="unix:///run/containerd/s/98f14c115b348dcb074877b683a3cedb9d01dbbe6f5f5a9daeb8c0026a4ef212" protocol=ttrpc version=3 Jan 20 02:38:36.976337 systemd[1]: Started cri-containerd-fcab712b81c638720e11e82a0fab172f607bf1beab106d7dbd1ae6ba309b2480.scope - libcontainer container fcab712b81c638720e11e82a0fab172f607bf1beab106d7dbd1ae6ba309b2480. Jan 20 02:38:39.602851 update_engine[1574]: I20260120 02:38:39.600320 1574 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:38:39.602851 update_engine[1574]: I20260120 02:38:39.600833 1574 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:38:39.632538 update_engine[1574]: I20260120 02:38:39.614639 1574 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 02:38:39.671773 update_engine[1574]: E20260120 02:38:39.656200 1574 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 02:38:39.671773 update_engine[1574]: I20260120 02:38:39.659850 1574 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 20 02:38:40.069319 containerd[1593]: time="2026-01-20T02:38:40.069223814Z" level=info msg="StartContainer for \"fcab712b81c638720e11e82a0fab172f607bf1beab106d7dbd1ae6ba309b2480\" returns successfully" Jan 20 02:38:49.612041 update_engine[1574]: I20260120 02:38:49.611949 1574 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:38:49.616991 update_engine[1574]: I20260120 02:38:49.616950 1574 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:38:49.635346 update_engine[1574]: I20260120 02:38:49.635288 1574 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 02:38:49.665088 update_engine[1574]: E20260120 02:38:49.664925 1574 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 02:38:49.665606 update_engine[1574]: I20260120 02:38:49.665396 1574 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 02:38:49.665832 update_engine[1574]: I20260120 02:38:49.665804 1574 omaha_request_action.cc:617] Omaha request response: Jan 20 02:38:49.666005 update_engine[1574]: E20260120 02:38:49.665985 1574 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 20 02:38:49.666184 update_engine[1574]: I20260120 02:38:49.666163 1574 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 20 02:38:49.677076 update_engine[1574]: I20260120 02:38:49.666234 1574 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:38:49.677076 update_engine[1574]: I20260120 02:38:49.666247 1574 update_attempter.cc:306] Processing Done. Jan 20 02:38:49.677076 update_engine[1574]: E20260120 02:38:49.666550 1574 update_attempter.cc:619] Update failed. Jan 20 02:38:49.677387 update_engine[1574]: I20260120 02:38:49.677340 1574 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 20 02:38:49.688358 update_engine[1574]: I20260120 02:38:49.677635 1574 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 20 02:38:49.733215 update_engine[1574]: I20260120 02:38:49.698597 1574 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 20 02:38:49.743080 update_engine[1574]: I20260120 02:38:49.743009 1574 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 02:38:49.743880 update_engine[1574]: I20260120 02:38:49.743767 1574 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 02:38:49.773080 locksmithd[1632]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 20 02:38:49.775147 update_engine[1574]: I20260120 02:38:49.757125 1574 omaha_request_action.cc:272] Request: Jan 20 02:38:49.775147 update_engine[1574]: [Omaha request XML body stripped from this log capture] Jan 20 02:38:49.775147 update_engine[1574]: I20260120 02:38:49.757174 1574 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:38:49.775147 update_engine[1574]: I20260120 02:38:49.757233 1574 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:38:49.787132 update_engine[1574]: I20260120 02:38:49.787029 1574 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 02:38:49.835959 update_engine[1574]: E20260120 02:38:49.835879 1574 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 02:38:49.856761 update_engine[1574]: I20260120 02:38:49.856105 1574 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 02:38:49.856761 update_engine[1574]: I20260120 02:38:49.856161 1574 omaha_request_action.cc:617] Omaha request response: Jan 20 02:38:49.856761 update_engine[1574]: I20260120 02:38:49.856177 1574 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:38:49.856761 update_engine[1574]: I20260120 02:38:49.856185 1574 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:38:49.856761 update_engine[1574]: I20260120 02:38:49.856194 1574 update_attempter.cc:306] Processing Done. Jan 20 02:38:49.856761 update_engine[1574]: I20260120 02:38:49.856205 1574 update_attempter.cc:310] Error event sent.
Jan 20 02:38:49.856761 update_engine[1574]: I20260120 02:38:49.856315 1574 update_check_scheduler.cc:74] Next update check in 48m53s Jan 20 02:38:50.063234 locksmithd[1632]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 20 02:39:15.502243 containerd[1593]: time="2026-01-20T02:39:15.490817966Z" level=warning msg="container event discarded" container=8bab15c3510056f383bed5e179cae6831c9037f1ab171f52061572cdd4a567d3 type=CONTAINER_STOPPED_EVENT Jan 20 02:39:22.464011 containerd[1593]: time="2026-01-20T02:39:22.458886485Z" level=warning msg="container event discarded" container=88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3 type=CONTAINER_CREATED_EVENT Jan 20 02:39:42.214899 containerd[1593]: time="2026-01-20T02:39:42.209393055Z" level=warning msg="container event discarded" container=8bab15c3510056f383bed5e179cae6831c9037f1ab171f52061572cdd4a567d3 type=CONTAINER_DELETED_EVENT Jan 20 02:39:57.633964 containerd[1593]: time="2026-01-20T02:39:57.623704897Z" level=warning msg="container event discarded" container=fc9ad64bfc7d734c41f45bdbed34210b3a5a0891e7db16d5755e85f588d7cdf5 type=CONTAINER_CREATED_EVENT Jan 20 02:40:00.781292 containerd[1593]: time="2026-01-20T02:40:00.781212444Z" level=warning msg="container event discarded" container=fc9ad64bfc7d734c41f45bdbed34210b3a5a0891e7db16d5755e85f588d7cdf5 type=CONTAINER_STARTED_EVENT Jan 20 02:40:45.288994 containerd[1593]: time="2026-01-20T02:40:45.282528646Z" level=warning msg="container event discarded" container=a3d31a0acaa6242dc599dbfc702beaf61c31b75e7407aa8cb99008bb1ec062de type=CONTAINER_CREATED_EVENT Jan 20 02:40:45.288994 containerd[1593]: time="2026-01-20T02:40:45.284888654Z" level=warning msg="container event discarded" container=a3d31a0acaa6242dc599dbfc702beaf61c31b75e7407aa8cb99008bb1ec062de type=CONTAINER_STARTED_EVENT Jan 20 02:40:51.874716 containerd[1593]: time="2026-01-20T02:40:51.874378787Z" level=warning msg="container event discarded" container=369386a52b6265708c52b843e1820be269ddc3df559471ae7ba1ac7a5eb39a8e type=CONTAINER_CREATED_EVENT Jan 20 02:40:57.446675 containerd[1593]: time="2026-01-20T02:40:57.446307144Z" level=warning msg="container event discarded" container=fcb728ddf4b6274237750c4737d17bce6ce0c7fe0fbdacd7e3d15f330acee882 type=CONTAINER_CREATED_EVENT Jan 20 02:40:57.478919 containerd[1593]: time="2026-01-20T02:40:57.447092485Z" level=warning msg="container event discarded" container=fcb728ddf4b6274237750c4737d17bce6ce0c7fe0fbdacd7e3d15f330acee882 type=CONTAINER_STARTED_EVENT Jan 20 02:41:01.806330 containerd[1593]: time="2026-01-20T02:41:01.794172911Z" level=warning msg="container event discarded" container=369386a52b6265708c52b843e1820be269ddc3df559471ae7ba1ac7a5eb39a8e type=CONTAINER_STARTED_EVENT Jan 20 02:41:14.812353 containerd[1593]: time="2026-01-20T02:41:14.812246400Z" level=warning msg="container event discarded" container=bf7892c8db41649da9f82fd6ed49913aad424294d97acab283ba8e5f88a3b6e3 type=CONTAINER_CREATED_EVENT Jan 20 02:41:17.801886 containerd[1593]: time="2026-01-20T02:41:17.800965276Z" level=warning msg="container event discarded" container=bf7892c8db41649da9f82fd6ed49913aad424294d97acab283ba8e5f88a3b6e3 type=CONTAINER_STARTED_EVENT Jan 20 02:41:19.080856 containerd[1593]: time="2026-01-20T02:41:19.077964253Z" level=warning msg="container event discarded" container=bf7892c8db41649da9f82fd6ed49913aad424294d97acab283ba8e5f88a3b6e3 type=CONTAINER_STOPPED_EVENT Jan 20 02:41:29.319891 kubelet[3064]: E0120 02:41:29.319383 3064 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:41:30.347835 kubelet[3064]: E0120 02:41:30.341801 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:41:57.323587 kubelet[3064]: E0120 02:41:57.313354 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:42:00.379908 kubelet[3064]: E0120 02:42:00.331140 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:42:01.902744 containerd[1593]: time="2026-01-20T02:42:01.901973250Z" level=warning msg="container event discarded" container=d5ae3887f02dee592ab6bd115e3c46f5b67578a78f28c1518b26a0f8f977b69a type=CONTAINER_CREATED_EVENT Jan 20 02:42:03.834239 containerd[1593]: time="2026-01-20T02:42:03.833586970Z" level=warning msg="container event discarded" container=d5ae3887f02dee592ab6bd115e3c46f5b67578a78f28c1518b26a0f8f977b69a type=CONTAINER_STARTED_EVENT Jan 20 02:42:05.526825 containerd[1593]: time="2026-01-20T02:42:05.526626275Z" level=warning msg="container event discarded" container=d5ae3887f02dee592ab6bd115e3c46f5b67578a78f28c1518b26a0f8f977b69a type=CONTAINER_STOPPED_EVENT Jan 20 02:42:07.594227 containerd[1593]: time="2026-01-20T02:42:07.594153436Z" level=warning msg="container event discarded" container=e7f31c0b0c7fb45a703ac21091fa871ee040f636f5a843d7191826197811a61f type=CONTAINER_CREATED_EVENT Jan 20 02:42:10.342988 containerd[1593]: time="2026-01-20T02:42:10.296631823Z" level=warning msg="container event discarded" container=e7f31c0b0c7fb45a703ac21091fa871ee040f636f5a843d7191826197811a61f type=CONTAINER_STARTED_EVENT Jan 20 02:42:11.918018 kubelet[3064]: E0120 02:42:11.915998 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:42:27.323143 kubelet[3064]: E0120 02:42:27.323101 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:42:34.385852 kubelet[3064]: E0120 02:42:34.377912 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:42:36.343794 kubelet[3064]: E0120 02:42:36.341605 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:42:39.031927 containerd[1593]: time="2026-01-20T02:42:39.029747207Z" level=warning msg="container event discarded" container=e9172f0e3e4429035e1bae6ba4e86d796f8064643cb0d24e59b3b35900defb04 type=CONTAINER_CREATED_EVENT Jan 20 02:42:39.031927 containerd[1593]: time="2026-01-20T02:42:39.029832596Z" level=warning msg="container event discarded" container=e9172f0e3e4429035e1bae6ba4e86d796f8064643cb0d24e59b3b35900defb04 type=CONTAINER_STARTED_EVENT Jan 20 02:42:39.528880 containerd[1593]: time="2026-01-20T02:42:39.526052618Z" level=warning msg="container event discarded" 
container=f65c4528b50340a5dcf7892e7d70db2234d44c5e98917d70f97d8565f3610dc1 type=CONTAINER_CREATED_EVENT Jan 20 02:42:39.528880 containerd[1593]: time="2026-01-20T02:42:39.526120125Z" level=warning msg="container event discarded" container=f65c4528b50340a5dcf7892e7d70db2234d44c5e98917d70f97d8565f3610dc1 type=CONTAINER_STARTED_EVENT Jan 20 02:42:40.312363 containerd[1593]: time="2026-01-20T02:42:40.312278173Z" level=warning msg="container event discarded" container=c36b9c423a918cc215a0719f16bf6a9224cdca0ab5f85135cd6313cf4b9ee450 type=CONTAINER_CREATED_EVENT Jan 20 02:42:41.641917 containerd[1593]: time="2026-01-20T02:42:41.637181138Z" level=warning msg="container event discarded" container=86c660218f8f3bb4f69ae388ab95794f9fa9f8c7d98cda63deeecad743706837 type=CONTAINER_CREATED_EVENT Jan 20 02:42:46.598283 containerd[1593]: time="2026-01-20T02:42:46.598175321Z" level=warning msg="container event discarded" container=86c660218f8f3bb4f69ae388ab95794f9fa9f8c7d98cda63deeecad743706837 type=CONTAINER_STARTED_EVENT Jan 20 02:42:46.682984 containerd[1593]: time="2026-01-20T02:42:46.682907545Z" level=warning msg="container event discarded" container=c36b9c423a918cc215a0719f16bf6a9224cdca0ab5f85135cd6313cf4b9ee450 type=CONTAINER_STARTED_EVENT Jan 20 02:42:47.341917 kubelet[3064]: E0120 02:42:47.327324 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:07.738824 systemd[1]: Started sshd@9-10.0.0.101:22-10.0.0.1:48174.service - OpenSSH per-connection server daemon (10.0.0.1:48174). Jan 20 02:43:08.573834 sshd[5273]: Accepted publickey for core from 10.0.0.1 port 48174 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:43:08.601067 sshd-session[5273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:43:08.721037 systemd-logind[1567]: New session 10 of user core. Jan 20 02:43:08.844142 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 02:43:10.994628 sshd[5276]: Connection closed by 10.0.0.1 port 48174 Jan 20 02:43:10.994236 sshd-session[5273]: pam_unix(sshd:session): session closed for user core Jan 20 02:43:11.072028 systemd[1]: sshd@9-10.0.0.101:22-10.0.0.1:48174.service: Deactivated successfully. Jan 20 02:43:11.103233 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 02:43:11.125283 systemd-logind[1567]: Session 10 logged out. Waiting for processes to exit. Jan 20 02:43:11.194588 systemd-logind[1567]: Removed session 10. Jan 20 02:43:11.316045 kubelet[3064]: E0120 02:43:11.312078 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:16.468083 systemd[1]: Started sshd@10-10.0.0.101:22-10.0.0.1:37152.service - OpenSSH per-connection server daemon (10.0.0.1:37152). Jan 20 02:43:17.582286 sshd[5314]: Accepted publickey for core from 10.0.0.1 port 37152 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:43:17.617165 sshd-session[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:43:17.939592 systemd-logind[1567]: New session 11 of user core. Jan 20 02:43:18.140247 systemd[1]: Started session-11.scope - Session 11 of User core. 
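Editor's note: the dns.go:153 errors repeating through this stretch come from kubelet validating the node's resolv.conf. Conventional resolvers honor at most 3 nameservers, so kubelet applies the first three ("1.1.1.1 1.0.0.1 8.8.8.8" above) and warns that the rest were dropped. A minimal sketch of that parse-and-truncate step, not kubelet's exact code:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers is the classic resolver limit that triggers the
// "Nameserver limits exceeded" warning above.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	// Collect every "nameserver <addr>" entry in file order.
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}

	// Keep the first three and warn, mirroring the log message.
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded, applying: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}
```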
Jan 20 02:43:19.577322 sshd[5334]: Connection closed by 10.0.0.1 port 37152 Jan 20 02:43:19.575293 sshd-session[5314]: pam_unix(sshd:session): session closed for user core Jan 20 02:43:19.606137 systemd[1]: sshd@10-10.0.0.101:22-10.0.0.1:37152.service: Deactivated successfully. Jan 20 02:43:19.629226 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 02:43:19.671040 systemd-logind[1567]: Session 11 logged out. Waiting for processes to exit. Jan 20 02:43:19.734804 systemd-logind[1567]: Removed session 11. Jan 20 02:43:24.737575 systemd[1]: Started sshd@11-10.0.0.101:22-10.0.0.1:41790.service - OpenSSH per-connection server daemon (10.0.0.1:41790). Jan 20 02:43:25.666168 sshd[5373]: Accepted publickey for core from 10.0.0.1 port 41790 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:43:25.690095 sshd-session[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:43:25.811975 systemd-logind[1567]: New session 12 of user core. Jan 20 02:43:25.872368 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 02:43:26.156999 containerd[1593]: time="2026-01-20T02:43:26.156394617Z" level=warning msg="container event discarded" container=fc9ad64bfc7d734c41f45bdbed34210b3a5a0891e7db16d5755e85f588d7cdf5 type=CONTAINER_STOPPED_EVENT Jan 20 02:43:26.319155 kubelet[3064]: E0120 02:43:26.319107 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:26.730606 containerd[1593]: time="2026-01-20T02:43:26.730347869Z" level=warning msg="container event discarded" container=eb9982a01a2df58d7513b9bbb5203836b6138ac629d68d9af8997330ff2e5ec8 type=CONTAINER_STOPPED_EVENT Jan 20 02:43:27.391147 containerd[1593]: time="2026-01-20T02:43:27.391077987Z" level=warning msg="container event discarded" container=88ecc1ddc827fc38a83084dff51ec7f2f80365b7ef038a6bb4084f713cda36f3 type=CONTAINER_DELETED_EVENT Jan 20 02:43:27.890884 containerd[1593]: time="2026-01-20T02:43:27.862108603Z" level=warning msg="container event discarded" container=3ff202166c8bde64482b5e91e22474d49141c60dab82e5196d415713af05b971 type=CONTAINER_CREATED_EVENT Jan 20 02:43:28.018291 sshd[5376]: Connection closed by 10.0.0.1 port 41790 Jan 20 02:43:28.043230 sshd-session[5373]: pam_unix(sshd:session): session closed for user core Jan 20 02:43:28.202642 systemd[1]: sshd@11-10.0.0.101:22-10.0.0.1:41790.service: Deactivated successfully. Jan 20 02:43:28.327027 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 02:43:28.403042 systemd-logind[1567]: Session 12 logged out. Waiting for processes to exit. Jan 20 02:43:28.485311 systemd-logind[1567]: Removed session 12. Jan 20 02:43:29.450866 containerd[1593]: time="2026-01-20T02:43:29.383077911Z" level=warning msg="container event discarded" container=3ff202166c8bde64482b5e91e22474d49141c60dab82e5196d415713af05b971 type=CONTAINER_STARTED_EVENT Jan 20 02:43:33.325004 systemd[1]: Started sshd@12-10.0.0.101:22-10.0.0.1:41804.service - OpenSSH per-connection server daemon (10.0.0.1:41804). Jan 20 02:43:34.686690 sshd[5417]: Accepted publickey for core from 10.0.0.1 port 41804 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:43:34.745250 sshd-session[5417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:43:35.183262 systemd-logind[1567]: New session 13 of user core. 
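Editor's note: the "container event discarded" warnings scattered through this stretch indicate containerd dropped events it could not deliver to a subscriber in time. An external consumer can watch the same stream with the containerd Go client's Subscribe API; a minimal sketch, with the socket path and filter as assumptions:

```go
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Subscribe to task lifecycle events (create/start/exit), the same
	// family as the CONTAINER_CREATED/STARTED/STOPPED events above.
	ctx := context.Background()
	envelopes, errs := client.Subscribe(ctx, `topic~="/tasks/"`)
	for {
		select {
		case env := <-envelopes:
			fmt.Printf("%s %s %s\n", env.Timestamp, env.Namespace, env.Topic)
		case err := <-errs:
			fmt.Println("event stream error:", err)
			return
		}
	}
}
```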
Jan 20 02:43:35.341394 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 20 02:43:35.527044 containerd[1593]: time="2026-01-20T02:43:35.497365627Z" level=warning msg="container event discarded" container=fcab712b81c638720e11e82a0fab172f607bf1beab106d7dbd1ae6ba309b2480 type=CONTAINER_CREATED_EVENT Jan 20 02:43:37.336189 kubelet[3064]: E0120 02:43:37.332322 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:38.425022 sshd[5425]: Connection closed by 10.0.0.1 port 41804 Jan 20 02:43:38.512615 sshd-session[5417]: pam_unix(sshd:session): session closed for user core Jan 20 02:43:38.778002 systemd[1]: sshd@12-10.0.0.101:22-10.0.0.1:41804.service: Deactivated successfully. Jan 20 02:43:38.876076 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 02:43:38.953134 systemd-logind[1567]: Session 13 logged out. Waiting for processes to exit. Jan 20 02:43:38.975267 systemd-logind[1567]: Removed session 13. Jan 20 02:43:40.073624 containerd[1593]: time="2026-01-20T02:43:40.069926150Z" level=warning msg="container event discarded" container=fcab712b81c638720e11e82a0fab172f607bf1beab106d7dbd1ae6ba309b2480 type=CONTAINER_STARTED_EVENT Jan 20 02:43:43.550292 systemd[1]: Started sshd@13-10.0.0.101:22-10.0.0.1:33426.service - OpenSSH per-connection server daemon (10.0.0.1:33426). Jan 20 02:43:44.268598 sshd[5480]: Accepted publickey for core from 10.0.0.1 port 33426 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:43:44.298242 sshd-session[5480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:43:44.641233 systemd-logind[1567]: New session 14 of user core. Jan 20 02:43:44.811169 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 02:43:47.870130 sshd[5489]: Connection closed by 10.0.0.1 port 33426 Jan 20 02:43:47.875117 sshd-session[5480]: pam_unix(sshd:session): session closed for user core Jan 20 02:43:48.066101 systemd[1]: sshd@13-10.0.0.101:22-10.0.0.1:33426.service: Deactivated successfully. Jan 20 02:43:48.112295 systemd-logind[1567]: Session 14 logged out. Waiting for processes to exit. Jan 20 02:43:48.188194 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 02:43:48.277209 systemd-logind[1567]: Removed session 14. Jan 20 02:43:49.320669 kubelet[3064]: E0120 02:43:49.319235 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:50.354144 kubelet[3064]: E0120 02:43:50.341642 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:53.039271 systemd[1]: Started sshd@14-10.0.0.101:22-10.0.0.1:48730.service - OpenSSH per-connection server daemon (10.0.0.1:48730). Jan 20 02:43:53.552148 sshd[5524]: Accepted publickey for core from 10.0.0.1 port 48730 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:43:53.584714 sshd-session[5524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:43:53.718918 systemd-logind[1567]: New session 15 of user core. Jan 20 02:43:53.914109 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 20 02:43:56.328042 kubelet[3064]: E0120 02:43:56.313345 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:56.768756 sshd[5527]: Connection closed by 10.0.0.1 port 48730 Jan 20 02:43:56.861239 sshd-session[5524]: pam_unix(sshd:session): session closed for user core Jan 20 02:43:57.013293 systemd[1]: sshd@14-10.0.0.101:22-10.0.0.1:48730.service: Deactivated successfully. Jan 20 02:43:57.111056 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 02:43:57.133928 systemd-logind[1567]: Session 15 logged out. Waiting for processes to exit. Jan 20 02:43:57.206093 systemd-logind[1567]: Removed session 15. Jan 20 02:44:01.953315 systemd[1]: Started sshd@15-10.0.0.101:22-10.0.0.1:51368.service - OpenSSH per-connection server daemon (10.0.0.1:51368). Jan 20 02:44:02.560690 kubelet[3064]: E0120 02:44:02.560388 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:03.474950 sshd[5574]: Accepted publickey for core from 10.0.0.1 port 51368 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:44:03.484202 sshd-session[5574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:44:03.581973 systemd-logind[1567]: New session 16 of user core. Jan 20 02:44:03.636984 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 02:44:05.778770 sshd[5587]: Connection closed by 10.0.0.1 port 51368 Jan 20 02:44:05.785994 sshd-session[5574]: pam_unix(sshd:session): session closed for user core Jan 20 02:44:05.836138 systemd[1]: sshd@15-10.0.0.101:22-10.0.0.1:51368.service: Deactivated successfully. Jan 20 02:44:05.861245 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 02:44:05.919168 systemd-logind[1567]: Session 16 logged out. Waiting for processes to exit. Jan 20 02:44:05.935054 systemd-logind[1567]: Removed session 16. Jan 20 02:44:11.046658 systemd[1]: Started sshd@16-10.0.0.101:22-10.0.0.1:57600.service - OpenSSH per-connection server daemon (10.0.0.1:57600). Jan 20 02:44:13.348762 sshd[5624]: Accepted publickey for core from 10.0.0.1 port 57600 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:44:13.399770 sshd-session[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:44:13.532273 systemd-logind[1567]: New session 17 of user core. Jan 20 02:44:13.612761 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 02:44:15.439106 sshd[5633]: Connection closed by 10.0.0.1 port 57600 Jan 20 02:44:15.437320 sshd-session[5624]: pam_unix(sshd:session): session closed for user core Jan 20 02:44:15.539255 systemd[1]: sshd@16-10.0.0.101:22-10.0.0.1:57600.service: Deactivated successfully. Jan 20 02:44:15.595890 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 02:44:15.617945 systemd-logind[1567]: Session 17 logged out. Waiting for processes to exit. Jan 20 02:44:15.633612 systemd-logind[1567]: Removed session 17. Jan 20 02:44:20.697135 systemd[1]: Started sshd@17-10.0.0.101:22-10.0.0.1:49476.service - OpenSSH per-connection server daemon (10.0.0.1:49476). 
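Editor's note: the long run of SSH session open/close pairs above and below can be summarized offline by pairing systemd-logind's "New session N" / "Removed session N" lines. A small helper sketch for journal text in the short format shown here; the year is supplied manually since the short format omits it:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	newRe = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*New session (\d+) of user`)
	delRe = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*Removed session (\d+)\.`)
)

// parseTS parses the journal's "Jan 20 02:43:08.721037" short timestamp,
// appending an assumed year.
func parseTS(s string) (time.Time, error) {
	return time.Parse("Jan 2 15:04:05.000000 2006", s+" 2026")
}

func main() {
	opened := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if m := newRe.FindStringSubmatch(line); m != nil {
			if t, err := parseTS(m[1]); err == nil {
				opened[m[2]] = t // session N opened at t
			}
		} else if m := delRe.FindStringSubmatch(line); m != nil {
			if t, err := parseTS(m[1]); err == nil {
				if start, ok := opened[m[2]]; ok {
					fmt.Printf("session %s lasted %v\n", m[2], t.Sub(start))
					delete(opened, m[2])
				}
			}
		}
	}
}
```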
Jan 20 02:44:21.650566 sshd[5683]: Accepted publickey for core from 10.0.0.1 port 49476 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:44:21.680387 sshd-session[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:44:21.825678 systemd-logind[1567]: New session 18 of user core. Jan 20 02:44:21.910675 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 20 02:44:22.394638 kubelet[3064]: E0120 02:44:22.389146 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:33.700115 kubelet[3064]: E0120 02:44:33.699916 3064 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.393s" Jan 20 02:44:34.807913 kubelet[3064]: E0120 02:44:34.765049 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:34.845375 sshd[5686]: Connection closed by 10.0.0.1 port 49476 Jan 20 02:44:34.847350 sshd-session[5683]: pam_unix(sshd:session): session closed for user core Jan 20 02:44:35.013318 systemd[1]: sshd@17-10.0.0.101:22-10.0.0.1:49476.service: Deactivated successfully. Jan 20 02:44:35.082717 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 02:44:35.091390 systemd[1]: session-18.scope: Consumed 1.467s CPU time, 17.1M memory peak. Jan 20 02:44:35.181144 systemd-logind[1567]: Session 18 logged out. Waiting for processes to exit. Jan 20 02:44:35.285393 systemd-logind[1567]: Removed session 18. Jan 20 02:44:35.509796 systemd[1]: cri-containerd-3ff202166c8bde64482b5e91e22474d49141c60dab82e5196d415713af05b971.scope: Deactivated successfully. Jan 20 02:44:35.513621 systemd[1]: cri-containerd-3ff202166c8bde64482b5e91e22474d49141c60dab82e5196d415713af05b971.scope: Consumed 18.043s CPU time, 21.6M memory peak. Jan 20 02:44:35.567063 containerd[1593]: time="2026-01-20T02:44:35.565342148Z" level=info msg="received container exit event container_id:\"3ff202166c8bde64482b5e91e22474d49141c60dab82e5196d415713af05b971\" id:\"3ff202166c8bde64482b5e91e22474d49141c60dab82e5196d415713af05b971\" pid:4195 exit_status:1 exited_at:{seconds:1768877075 nanos:547601376}" Jan 20 02:44:36.381138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ff202166c8bde64482b5e91e22474d49141c60dab82e5196d415713af05b971-rootfs.mount: Deactivated successfully. 
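Editor's note: the "Consumed 18.043s CPU time, 21.6M memory peak" summaries come from systemd's cgroup accounting for the unit's scope. The same numbers are readable directly from the cgroup v2 filesystem; a sketch, where the scope path is taken from the log and memory.peak assumes a reasonably recent kernel (this node runs 6.12):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

func main() {
	// Scope path for the container whose exit is logged above (assumed layout).
	scope := "/sys/fs/cgroup/system.slice/cri-containerd-3ff202166c8bde64482b5e91e22474d49141c60dab82e5196d415713af05b971.scope"

	// cpu.stat's usage_usec backs the "Consumed ... CPU time" figure.
	if data, err := os.ReadFile(scope + "/cpu.stat"); err == nil {
		for _, line := range strings.Split(string(data), "\n") {
			if v, ok := strings.CutPrefix(line, "usage_usec "); ok {
				usec, _ := strconv.ParseInt(v, 10, 64)
				fmt.Printf("CPU time consumed: %.3fs\n", float64(usec)/1e6)
			}
		}
	}

	// memory.peak backs the "memory peak" figure.
	if peak, err := os.ReadFile(scope + "/memory.peak"); err == nil {
		fmt.Printf("memory peak: %s bytes\n", strings.TrimSpace(string(peak)))
	}
}
```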
Jan 20 02:44:37.214561 kubelet[3064]: E0120 02:44:37.205384 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:38.142193 kubelet[3064]: I0120 02:44:38.124687 3064 scope.go:117] "RemoveContainer" containerID="eb9982a01a2df58d7513b9bbb5203836b6138ac629d68d9af8997330ff2e5ec8" Jan 20 02:44:38.222165 kubelet[3064]: I0120 02:44:38.220622 3064 scope.go:117] "RemoveContainer" containerID="3ff202166c8bde64482b5e91e22474d49141c60dab82e5196d415713af05b971" Jan 20 02:44:38.234274 kubelet[3064]: E0120 02:44:38.230243 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:38.249184 kubelet[3064]: E0120 02:44:38.246564 3064 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(0b8273f45c576ca70f8db6fe540c065c)\"" pod="kube-system/kube-scheduler-localhost" podUID="0b8273f45c576ca70f8db6fe540c065c" Jan 20 02:44:38.368096 containerd[1593]: time="2026-01-20T02:44:38.367143895Z" level=info msg="RemoveContainer for \"eb9982a01a2df58d7513b9bbb5203836b6138ac629d68d9af8997330ff2e5ec8\"" Jan 20 02:44:38.581244 containerd[1593]: time="2026-01-20T02:44:38.572931018Z" level=info msg="RemoveContainer for \"eb9982a01a2df58d7513b9bbb5203836b6138ac629d68d9af8997330ff2e5ec8\" returns successfully" Jan 20 02:44:39.938630 systemd[1]: Started sshd@18-10.0.0.101:22-10.0.0.1:36122.service - OpenSSH per-connection server daemon (10.0.0.1:36122). Jan 20 02:44:40.595623 sshd[5743]: Accepted publickey for core from 10.0.0.1 port 36122 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:44:40.621338 sshd-session[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:44:40.704798 systemd-logind[1567]: New session 19 of user core. Jan 20 02:44:40.790807 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 02:44:43.117176 sshd[5750]: Connection closed by 10.0.0.1 port 36122 Jan 20 02:44:43.115386 sshd-session[5743]: pam_unix(sshd:session): session closed for user core Jan 20 02:44:43.182094 systemd[1]: sshd@18-10.0.0.101:22-10.0.0.1:36122.service: Deactivated successfully. Jan 20 02:44:43.214286 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 02:44:43.276350 systemd-logind[1567]: Session 19 logged out. Waiting for processes to exit. Jan 20 02:44:43.322315 systemd-logind[1567]: Removed session 19. 
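Editor's note: "back-off 10s restarting failed container" is the first step of kubelet's crash-loop backoff, which doubles the restart delay on each failure up to a cap (10s base and 5m cap are the long-standing kubelet defaults; treat them as assumptions here). A sketch of the schedule:

```go
package main

import (
	"fmt"
	"time"
)

// backoffSchedule returns the sequence of restart delays: double each time,
// capped at max. The "back-off 10s" in the log above is the first entry.
func backoffSchedule(base, max time.Duration, restarts int) []time.Duration {
	out := make([]time.Duration, 0, restarts)
	d := base
	for i := 0; i < restarts; i++ {
		out = append(out, d)
		d *= 2
		if d > max {
			d = max
		}
	}
	return out
}

func main() {
	// Prints [10s 20s 40s 1m20s 2m40s 5m0s 5m0s].
	fmt.Println(backoffSchedule(10*time.Second, 5*time.Minute, 7))
}
```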
Jan 20 02:44:44.031588 kubelet[3064]: I0120 02:44:44.031142 3064 scope.go:117] "RemoveContainer" containerID="3ff202166c8bde64482b5e91e22474d49141c60dab82e5196d415713af05b971" Jan 20 02:44:44.031588 kubelet[3064]: E0120 02:44:44.031255 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:44.031588 kubelet[3064]: E0120 02:44:44.031373 3064 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(0b8273f45c576ca70f8db6fe540c065c)\"" pod="kube-system/kube-scheduler-localhost" podUID="0b8273f45c576ca70f8db6fe540c065c" Jan 20 02:44:48.311666 systemd[1]: Started sshd@19-10.0.0.101:22-10.0.0.1:39902.service - OpenSSH per-connection server daemon (10.0.0.1:39902). Jan 20 02:44:49.115598 sshd[5799]: Accepted publickey for core from 10.0.0.1 port 39902 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:44:49.141797 sshd-session[5799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:44:49.247175 systemd-logind[1567]: New session 20 of user core. Jan 20 02:44:49.301351 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 02:44:51.048271 sshd[5802]: Connection closed by 10.0.0.1 port 39902 Jan 20 02:44:51.055674 sshd-session[5799]: pam_unix(sshd:session): session closed for user core Jan 20 02:44:51.133159 systemd[1]: sshd@19-10.0.0.101:22-10.0.0.1:39902.service: Deactivated successfully. Jan 20 02:44:51.191781 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 02:44:51.230572 systemd-logind[1567]: Session 20 logged out. Waiting for processes to exit. Jan 20 02:44:51.332230 systemd-logind[1567]: Removed session 20. Jan 20 02:44:55.310496 kubelet[3064]: I0120 02:44:55.308629 3064 scope.go:117] "RemoveContainer" containerID="3ff202166c8bde64482b5e91e22474d49141c60dab82e5196d415713af05b971" Jan 20 02:44:55.310496 kubelet[3064]: E0120 02:44:55.308796 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:55.319162 kubelet[3064]: E0120 02:44:55.314313 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:55.416674 containerd[1593]: time="2026-01-20T02:44:55.404826088Z" level=info msg="CreateContainer within sandbox \"a7502094893993327a43b85fe17eccf60d4c4cb5aaada5cade43e8ee2601f406\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}" Jan 20 02:44:55.811254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2734368078.mount: Deactivated successfully. Jan 20 02:44:55.848047 containerd[1593]: time="2026-01-20T02:44:55.817816192Z" level=info msg="Container 0abc3084789e127ba3995d46519fe5ba664721980b85502d535ebb6487151adc: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:44:56.228828 systemd[1]: Started sshd@20-10.0.0.101:22-10.0.0.1:44186.service - OpenSSH per-connection server daemon (10.0.0.1:44186). 
Jan 20 02:44:56.410772 containerd[1593]: time="2026-01-20T02:44:56.402791051Z" level=info msg="CreateContainer within sandbox \"a7502094893993327a43b85fe17eccf60d4c4cb5aaada5cade43e8ee2601f406\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"0abc3084789e127ba3995d46519fe5ba664721980b85502d535ebb6487151adc\"" Jan 20 02:44:56.433612 containerd[1593]: time="2026-01-20T02:44:56.431217611Z" level=info msg="StartContainer for \"0abc3084789e127ba3995d46519fe5ba664721980b85502d535ebb6487151adc\"" Jan 20 02:44:56.511241 containerd[1593]: time="2026-01-20T02:44:56.509298017Z" level=info msg="connecting to shim 0abc3084789e127ba3995d46519fe5ba664721980b85502d535ebb6487151adc" address="unix:///run/containerd/s/cf37a2070286bfbbfcc7c63fe29a7aa6bc535ad25ea209e8ed2853964177c2fc" protocol=ttrpc version=3 Jan 20 02:44:56.885998 systemd[1]: Started cri-containerd-0abc3084789e127ba3995d46519fe5ba664721980b85502d535ebb6487151adc.scope - libcontainer container 0abc3084789e127ba3995d46519fe5ba664721980b85502d535ebb6487151adc. Jan 20 02:44:57.204037 sshd[5844]: Accepted publickey for core from 10.0.0.1 port 44186 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:44:57.235693 sshd-session[5844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:44:57.338977 systemd-logind[1567]: New session 21 of user core. Jan 20 02:44:57.424267 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 02:44:58.600814 containerd[1593]: time="2026-01-20T02:44:58.589939571Z" level=info msg="StartContainer for \"0abc3084789e127ba3995d46519fe5ba664721980b85502d535ebb6487151adc\" returns successfully" Jan 20 02:44:59.099182 kubelet[3064]: E0120 02:44:59.085168 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:59.397687 sshd[5869]: Connection closed by 10.0.0.1 port 44186 Jan 20 02:44:59.427597 sshd-session[5844]: pam_unix(sshd:session): session closed for user core Jan 20 02:44:59.553759 systemd[1]: sshd@20-10.0.0.101:22-10.0.0.1:44186.service: Deactivated successfully. Jan 20 02:44:59.600815 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 02:44:59.647705 systemd-logind[1567]: Session 21 logged out. Waiting for processes to exit. Jan 20 02:44:59.683193 systemd-logind[1567]: Removed session 21. Jan 20 02:45:00.111094 kubelet[3064]: E0120 02:45:00.110025 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:45:00.407815 kubelet[3064]: E0120 02:45:00.391138 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:45:04.024141 kubelet[3064]: E0120 02:45:04.023767 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:45:04.604254 systemd[1]: Started sshd@21-10.0.0.101:22-10.0.0.1:40358.service - OpenSSH per-connection server daemon (10.0.0.1:40358). 
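Editor's note: the "connecting to shim ... protocol=ttrpc version=3" lines show containerd talking to its per-container shim over ttrpc, a lightweight gRPC-like protocol on a unix socket. A minimal sketch of opening such a client with the ttrpc library; the socket path is the one from the log, and the shim service methods one would actually call are omitted:

```go
package main

import (
	"fmt"
	"net"
	"time"

	"github.com/containerd/ttrpc"
)

func main() {
	// Dial the shim's unix socket (address taken from the log line above).
	conn, err := net.DialTimeout("unix",
		"/run/containerd/s/cf37a2070286bfbbfcc7c63fe29a7aa6bc535ad25ea209e8ed2853964177c2fc",
		2*time.Second)
	if err != nil {
		fmt.Println("dial:", err)
		return
	}

	// Wrap the connection in a ttrpc client; RPCs would go through client.Call.
	client := ttrpc.NewClient(conn)
	defer client.Close()
	fmt.Println("ttrpc client ready")
}
```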
Jan 20 02:45:05.172004 sshd[5933]: Accepted publickey for core from 10.0.0.1 port 40358 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:45:05.214672 sshd-session[5933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:45:05.327325 systemd-logind[1567]: New session 22 of user core. Jan 20 02:45:05.385729 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 20 02:45:06.140257 sshd[5936]: Connection closed by 10.0.0.1 port 40358 Jan 20 02:45:06.147716 sshd-session[5933]: pam_unix(sshd:session): session closed for user core Jan 20 02:45:06.208319 systemd[1]: sshd@21-10.0.0.101:22-10.0.0.1:40358.service: Deactivated successfully. Jan 20 02:45:06.235712 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 02:45:06.249711 systemd-logind[1567]: Session 22 logged out. Waiting for processes to exit. Jan 20 02:45:06.274691 systemd-logind[1567]: Removed session 22. Jan 20 02:45:11.257180 systemd[1]: Started sshd@22-10.0.0.101:22-10.0.0.1:40372.service - OpenSSH per-connection server daemon (10.0.0.1:40372). Jan 20 02:45:11.830146 sshd[5973]: Accepted publickey for core from 10.0.0.1 port 40372 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:45:11.853886 sshd-session[5973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:45:11.899089 systemd-logind[1567]: New session 23 of user core. Jan 20 02:45:11.925885 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 20 02:45:13.348724 sshd[5982]: Connection closed by 10.0.0.1 port 40372 Jan 20 02:45:13.359359 sshd-session[5973]: pam_unix(sshd:session): session closed for user core Jan 20 02:45:13.386912 systemd[1]: sshd@22-10.0.0.101:22-10.0.0.1:40372.service: Deactivated successfully. Jan 20 02:45:13.447659 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 02:45:13.479798 systemd-logind[1567]: Session 23 logged out. Waiting for processes to exit. Jan 20 02:45:13.572120 systemd-logind[1567]: Removed session 23. Jan 20 02:45:14.097557 kubelet[3064]: E0120 02:45:14.093852 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:45:14.849922 kubelet[3064]: E0120 02:45:14.844772 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:45:18.529323 systemd[1]: Started sshd@23-10.0.0.101:22-10.0.0.1:42574.service - OpenSSH per-connection server daemon (10.0.0.1:42574). Jan 20 02:45:19.320613 kubelet[3064]: E0120 02:45:19.319829 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:45:19.390332 sshd[6018]: Accepted publickey for core from 10.0.0.1 port 42574 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:45:19.412634 sshd-session[6018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:45:19.578654 systemd-logind[1567]: New session 24 of user core. Jan 20 02:45:19.629950 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 20 02:45:20.342252 kubelet[3064]: E0120 02:45:20.337718 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:45:21.269660 sshd[6021]: Connection closed by 10.0.0.1 port 42574 Jan 20 02:45:21.263841 sshd-session[6018]: pam_unix(sshd:session): session closed for user core Jan 20 02:45:21.401037 systemd[1]: sshd@23-10.0.0.101:22-10.0.0.1:42574.service: Deactivated successfully. Jan 20 02:45:21.459607 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 02:45:21.553799 systemd-logind[1567]: Session 24 logged out. Waiting for processes to exit. Jan 20 02:45:21.601214 systemd-logind[1567]: Removed session 24. Jan 20 02:45:26.257036 systemd[1]: Started sshd@24-10.0.0.101:22-10.0.0.1:50474.service - OpenSSH per-connection server daemon (10.0.0.1:50474). Jan 20 02:45:26.956713 sshd[6057]: Accepted publickey for core from 10.0.0.1 port 50474 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:45:26.969123 sshd-session[6057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:45:27.019385 systemd-logind[1567]: New session 25 of user core. Jan 20 02:45:27.047385 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 20 02:45:28.971655 sshd[6075]: Connection closed by 10.0.0.1 port 50474 Jan 20 02:45:28.973594 sshd-session[6057]: pam_unix(sshd:session): session closed for user core Jan 20 02:45:29.006976 systemd[1]: sshd@24-10.0.0.101:22-10.0.0.1:50474.service: Deactivated successfully. Jan 20 02:45:29.025152 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 02:45:29.050833 systemd-logind[1567]: Session 25 logged out. Waiting for processes to exit. Jan 20 02:45:29.077080 systemd-logind[1567]: Removed session 25. Jan 20 02:45:34.174238 systemd[1]: Started sshd@25-10.0.0.101:22-10.0.0.1:50490.service - OpenSSH per-connection server daemon (10.0.0.1:50490). Jan 20 02:45:34.920867 sshd[6117]: Accepted publickey for core from 10.0.0.1 port 50490 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:45:34.956140 sshd-session[6117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:45:35.164198 systemd-logind[1567]: New session 26 of user core. Jan 20 02:45:35.317189 kubelet[3064]: E0120 02:45:35.317118 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:45:35.317912 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 20 02:45:38.286879 sshd[6120]: Connection closed by 10.0.0.1 port 50490 Jan 20 02:45:38.303118 sshd-session[6117]: pam_unix(sshd:session): session closed for user core Jan 20 02:45:38.417952 systemd[1]: sshd@25-10.0.0.101:22-10.0.0.1:50490.service: Deactivated successfully. Jan 20 02:45:38.561024 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 02:45:38.667745 systemd-logind[1567]: Session 26 logged out. Waiting for processes to exit. Jan 20 02:45:38.686839 systemd[1]: Started sshd@26-10.0.0.101:22-10.0.0.1:59924.service - OpenSSH per-connection server daemon (10.0.0.1:59924). Jan 20 02:45:38.893939 systemd-logind[1567]: Removed session 26. 
Jan 20 02:45:39.599085 sshd[6153]: Accepted publickey for core from 10.0.0.1 port 59924 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:45:39.633698 sshd-session[6153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:45:39.788092 systemd-logind[1567]: New session 27 of user core. Jan 20 02:45:39.873970 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 20 02:45:41.323621 kubelet[3064]: E0120 02:45:41.310974 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:45:42.993226 sshd[6160]: Connection closed by 10.0.0.1 port 59924 Jan 20 02:45:43.014149 sshd-session[6153]: pam_unix(sshd:session): session closed for user core Jan 20 02:45:43.122348 systemd[1]: sshd@26-10.0.0.101:22-10.0.0.1:59924.service: Deactivated successfully. Jan 20 02:45:43.208365 systemd[1]: session-27.scope: Deactivated successfully. Jan 20 02:45:43.243920 systemd-logind[1567]: Session 27 logged out. Waiting for processes to exit. Jan 20 02:45:43.305248 systemd[1]: Started sshd@27-10.0.0.101:22-10.0.0.1:59926.service - OpenSSH per-connection server daemon (10.0.0.1:59926). Jan 20 02:45:43.361197 systemd-logind[1567]: Removed session 27. Jan 20 02:45:44.520029 sshd[6177]: Accepted publickey for core from 10.0.0.1 port 59926 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:45:44.643247 sshd-session[6177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:45:45.020037 systemd-logind[1567]: New session 28 of user core. Jan 20 02:45:45.086001 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 20 02:45:46.529820 sshd[6186]: Connection closed by 10.0.0.1 port 59926 Jan 20 02:45:46.532239 sshd-session[6177]: pam_unix(sshd:session): session closed for user core Jan 20 02:45:46.624983 systemd[1]: sshd@27-10.0.0.101:22-10.0.0.1:59926.service: Deactivated successfully. Jan 20 02:45:46.689104 systemd[1]: session-28.scope: Deactivated successfully. Jan 20 02:45:46.757036 systemd-logind[1567]: Session 28 logged out. Waiting for processes to exit. Jan 20 02:45:46.827366 systemd-logind[1567]: Removed session 28. Jan 20 02:45:54.502294 systemd[1]: Started sshd@28-10.0.0.101:22-10.0.0.1:39422.service - OpenSSH per-connection server daemon (10.0.0.1:39422). Jan 20 02:45:55.508352 sshd[6236]: Accepted publickey for core from 10.0.0.1 port 39422 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:45:55.533188 sshd-session[6236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:45:57.873073 systemd-logind[1567]: New session 29 of user core. Jan 20 02:45:58.088390 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 20 02:46:00.860104 sshd[6242]: Connection closed by 10.0.0.1 port 39422 Jan 20 02:46:00.851128 sshd-session[6236]: pam_unix(sshd:session): session closed for user core Jan 20 02:46:01.023366 systemd-logind[1567]: Session 29 logged out. Waiting for processes to exit. Jan 20 02:46:01.088299 systemd[1]: sshd@28-10.0.0.101:22-10.0.0.1:39422.service: Deactivated successfully. Jan 20 02:46:01.235338 systemd[1]: session-29.scope: Deactivated successfully. Jan 20 02:46:01.265677 systemd-logind[1567]: Removed session 29. Jan 20 02:46:06.003996 systemd[1]: Started sshd@29-10.0.0.101:22-10.0.0.1:48996.service - OpenSSH per-connection server daemon (10.0.0.1:48996). 
Jan 20 02:46:06.813639 sshd[6288]: Accepted publickey for core from 10.0.0.1 port 48996 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:46:06.866328 sshd-session[6288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:46:06.976285 systemd-logind[1567]: New session 30 of user core. Jan 20 02:46:07.055344 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 20 02:46:10.003749 sshd[6297]: Connection closed by 10.0.0.1 port 48996 Jan 20 02:46:09.984972 sshd-session[6288]: pam_unix(sshd:session): session closed for user core Jan 20 02:46:10.089653 systemd[1]: sshd@29-10.0.0.101:22-10.0.0.1:48996.service: Deactivated successfully. Jan 20 02:46:10.167322 systemd[1]: session-30.scope: Deactivated successfully. Jan 20 02:46:10.295225 systemd-logind[1567]: Session 30 logged out. Waiting for processes to exit. Jan 20 02:46:10.385720 systemd-logind[1567]: Removed session 30. Jan 20 02:46:15.051763 systemd[1]: Started sshd@30-10.0.0.101:22-10.0.0.1:52732.service - OpenSSH per-connection server daemon (10.0.0.1:52732). Jan 20 02:46:15.993127 sshd[6333]: Accepted publickey for core from 10.0.0.1 port 52732 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:46:16.012323 sshd-session[6333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:46:16.112655 systemd-logind[1567]: New session 31 of user core. Jan 20 02:46:16.192679 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 20 02:46:17.697146 sshd[6342]: Connection closed by 10.0.0.1 port 52732 Jan 20 02:46:17.697559 sshd-session[6333]: pam_unix(sshd:session): session closed for user core Jan 20 02:46:17.773248 systemd[1]: sshd@30-10.0.0.101:22-10.0.0.1:52732.service: Deactivated successfully. Jan 20 02:46:17.799189 systemd-logind[1567]: Session 31 logged out. Waiting for processes to exit. Jan 20 02:46:17.881929 systemd[1]: session-31.scope: Deactivated successfully. Jan 20 02:46:18.000642 systemd-logind[1567]: Removed session 31. Jan 20 02:46:18.341267 kubelet[3064]: E0120 02:46:18.328746 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:20.329282 kubelet[3064]: E0120 02:46:20.325158 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:22.845961 systemd[1]: Started sshd@31-10.0.0.101:22-10.0.0.1:52740.service - OpenSSH per-connection server daemon (10.0.0.1:52740). Jan 20 02:46:23.392194 sshd[6376]: Accepted publickey for core from 10.0.0.1 port 52740 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:46:23.430012 sshd-session[6376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:46:23.580556 systemd-logind[1567]: New session 32 of user core. Jan 20 02:46:23.630648 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 20 02:46:25.184761 sshd[6394]: Connection closed by 10.0.0.1 port 52740 Jan 20 02:46:25.196651 sshd-session[6376]: pam_unix(sshd:session): session closed for user core Jan 20 02:46:25.262997 systemd[1]: sshd@31-10.0.0.101:22-10.0.0.1:52740.service: Deactivated successfully. Jan 20 02:46:25.296242 systemd[1]: session-32.scope: Deactivated successfully. Jan 20 02:46:25.308995 systemd-logind[1567]: Session 32 logged out. Waiting for processes to exit. 
Jan 20 02:46:25.334854 systemd-logind[1567]: Removed session 32. Jan 20 02:46:30.295637 systemd[1]: Started sshd@32-10.0.0.101:22-10.0.0.1:41666.service - OpenSSH per-connection server daemon (10.0.0.1:41666). Jan 20 02:46:30.666108 sshd[6429]: Accepted publickey for core from 10.0.0.1 port 41666 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:46:30.675702 sshd-session[6429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:46:30.757600 systemd-logind[1567]: New session 33 of user core. Jan 20 02:46:30.823685 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 20 02:46:31.785303 sshd[6432]: Connection closed by 10.0.0.1 port 41666 Jan 20 02:46:31.788921 sshd-session[6429]: pam_unix(sshd:session): session closed for user core Jan 20 02:46:31.902332 systemd[1]: sshd@32-10.0.0.101:22-10.0.0.1:41666.service: Deactivated successfully. Jan 20 02:46:31.946284 systemd[1]: session-33.scope: Deactivated successfully. Jan 20 02:46:31.996151 systemd-logind[1567]: Session 33 logged out. Waiting for processes to exit. Jan 20 02:46:32.040376 systemd-logind[1567]: Removed session 33. Jan 20 02:46:32.334665 kubelet[3064]: E0120 02:46:32.333915 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:36.321174 kubelet[3064]: E0120 02:46:36.318788 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:36.900672 systemd[1]: Started sshd@33-10.0.0.101:22-10.0.0.1:39700.service - OpenSSH per-connection server daemon (10.0.0.1:39700). Jan 20 02:46:37.390636 sshd[6472]: Accepted publickey for core from 10.0.0.1 port 39700 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:46:37.387364 sshd-session[6472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:46:37.459273 systemd-logind[1567]: New session 34 of user core. Jan 20 02:46:37.494635 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 20 02:46:38.363888 sshd[6476]: Connection closed by 10.0.0.1 port 39700 Jan 20 02:46:38.370758 sshd-session[6472]: pam_unix(sshd:session): session closed for user core Jan 20 02:46:38.423759 systemd[1]: sshd@33-10.0.0.101:22-10.0.0.1:39700.service: Deactivated successfully. Jan 20 02:46:38.456633 systemd[1]: session-34.scope: Deactivated successfully. Jan 20 02:46:38.481679 systemd-logind[1567]: Session 34 logged out. Waiting for processes to exit. Jan 20 02:46:38.537675 systemd-logind[1567]: Removed session 34. Jan 20 02:46:42.329838 kubelet[3064]: E0120 02:46:42.324887 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:43.513741 systemd[1]: Started sshd@34-10.0.0.101:22-10.0.0.1:39710.service - OpenSSH per-connection server daemon (10.0.0.1:39710). Jan 20 02:46:44.016252 sshd[6513]: Accepted publickey for core from 10.0.0.1 port 39710 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:46:44.034579 sshd-session[6513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:46:44.111579 systemd-logind[1567]: New session 35 of user core. Jan 20 02:46:44.207703 systemd[1]: Started session-35.scope - Session 35 of User core. 
Jan 20 02:46:46.116666 sshd[6516]: Connection closed by 10.0.0.1 port 39710 Jan 20 02:46:46.128265 sshd-session[6513]: pam_unix(sshd:session): session closed for user core Jan 20 02:46:46.200034 systemd[1]: sshd@34-10.0.0.101:22-10.0.0.1:39710.service: Deactivated successfully. Jan 20 02:46:46.237165 systemd[1]: session-35.scope: Deactivated successfully. Jan 20 02:46:46.294888 systemd-logind[1567]: Session 35 logged out. Waiting for processes to exit. Jan 20 02:46:46.327905 systemd-logind[1567]: Removed session 35. Jan 20 02:46:47.325660 kubelet[3064]: E0120 02:46:47.321635 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:51.250999 systemd[1]: Started sshd@35-10.0.0.101:22-10.0.0.1:40030.service - OpenSSH per-connection server daemon (10.0.0.1:40030). Jan 20 02:46:52.670270 sshd[6561]: Accepted publickey for core from 10.0.0.1 port 40030 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:46:52.703018 sshd-session[6561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:46:52.809917 systemd-logind[1567]: New session 36 of user core. Jan 20 02:46:52.875833 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 20 02:46:54.184732 sshd[6576]: Connection closed by 10.0.0.1 port 40030 Jan 20 02:46:54.175838 sshd-session[6561]: pam_unix(sshd:session): session closed for user core Jan 20 02:46:54.233870 systemd[1]: sshd@35-10.0.0.101:22-10.0.0.1:40030.service: Deactivated successfully. Jan 20 02:46:54.263901 systemd[1]: session-36.scope: Deactivated successfully. Jan 20 02:46:54.379787 systemd-logind[1567]: Session 36 logged out. Waiting for processes to exit. Jan 20 02:46:54.404633 systemd-logind[1567]: Removed session 36. Jan 20 02:46:59.280343 systemd[1]: Started sshd@36-10.0.0.101:22-10.0.0.1:35460.service - OpenSSH per-connection server daemon (10.0.0.1:35460). Jan 20 02:47:00.211537 sshd[6611]: Accepted publickey for core from 10.0.0.1 port 35460 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:47:00.200165 sshd-session[6611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:47:00.326128 systemd-logind[1567]: New session 37 of user core. Jan 20 02:47:00.356577 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 20 02:47:01.747818 sshd[6614]: Connection closed by 10.0.0.1 port 35460 Jan 20 02:47:01.745186 sshd-session[6611]: pam_unix(sshd:session): session closed for user core Jan 20 02:47:01.862948 systemd[1]: sshd@36-10.0.0.101:22-10.0.0.1:35460.service: Deactivated successfully. Jan 20 02:47:01.911571 systemd[1]: session-37.scope: Deactivated successfully. Jan 20 02:47:01.972604 systemd-logind[1567]: Session 37 logged out. Waiting for processes to exit. Jan 20 02:47:02.006944 systemd-logind[1567]: Removed session 37. Jan 20 02:47:06.798067 systemd[1]: Started sshd@37-10.0.0.101:22-10.0.0.1:52466.service - OpenSSH per-connection server daemon (10.0.0.1:52466). Jan 20 02:47:07.226245 sshd[6648]: Accepted publickey for core from 10.0.0.1 port 52466 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:47:07.242136 sshd-session[6648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:47:07.300894 systemd-logind[1567]: New session 38 of user core. Jan 20 02:47:07.343957 systemd[1]: Started session-38.scope - Session 38 of User core. 
Jan 20 02:47:08.961287 sshd[6653]: Connection closed by 10.0.0.1 port 52466 Jan 20 02:47:08.992013 sshd-session[6648]: pam_unix(sshd:session): session closed for user core Jan 20 02:47:09.108841 systemd[1]: sshd@37-10.0.0.101:22-10.0.0.1:52466.service: Deactivated successfully. Jan 20 02:47:09.109839 systemd-logind[1567]: Session 38 logged out. Waiting for processes to exit. Jan 20 02:47:09.167062 systemd[1]: session-38.scope: Deactivated successfully. Jan 20 02:47:09.250196 systemd-logind[1567]: Removed session 38. Jan 20 02:47:10.363893 kubelet[3064]: E0120 02:47:10.350290 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:47:13.616949 kubelet[3064]: E0120 02:47:13.608037 3064 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.033s" Jan 20 02:47:14.172222 systemd[1]: Started sshd@38-10.0.0.101:22-10.0.0.1:52480.service - OpenSSH per-connection server daemon (10.0.0.1:52480). Jan 20 02:47:15.108192 sshd[6695]: Accepted publickey for core from 10.0.0.1 port 52480 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:47:15.141186 sshd-session[6695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:47:15.263094 systemd-logind[1567]: New session 39 of user core. Jan 20 02:47:15.293134 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 20 02:47:16.998257 sshd[6713]: Connection closed by 10.0.0.1 port 52480 Jan 20 02:47:17.010869 sshd-session[6695]: pam_unix(sshd:session): session closed for user core Jan 20 02:47:17.077008 systemd[1]: sshd@38-10.0.0.101:22-10.0.0.1:52480.service: Deactivated successfully. Jan 20 02:47:17.105655 systemd[1]: session-39.scope: Deactivated successfully. Jan 20 02:47:17.142694 systemd-logind[1567]: Session 39 logged out. Waiting for processes to exit. Jan 20 02:47:17.188808 systemd-logind[1567]: Removed session 39. Jan 20 02:47:22.115142 systemd[1]: Started sshd@39-10.0.0.101:22-10.0.0.1:59110.service - OpenSSH per-connection server daemon (10.0.0.1:59110). Jan 20 02:47:22.312091 kubelet[3064]: E0120 02:47:22.311385 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:47:22.483796 sshd[6747]: Accepted publickey for core from 10.0.0.1 port 59110 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:47:22.500753 sshd-session[6747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:47:22.569699 systemd-logind[1567]: New session 40 of user core. Jan 20 02:47:22.635012 systemd[1]: Started session-40.scope - Session 40 of User core. Jan 20 02:47:23.570225 sshd[6750]: Connection closed by 10.0.0.1 port 59110 Jan 20 02:47:23.576759 sshd-session[6747]: pam_unix(sshd:session): session closed for user core Jan 20 02:47:23.754149 systemd[1]: sshd@39-10.0.0.101:22-10.0.0.1:59110.service: Deactivated successfully. Jan 20 02:47:23.795203 systemd[1]: session-40.scope: Deactivated successfully. Jan 20 02:47:23.826663 systemd-logind[1567]: Session 40 logged out. Waiting for processes to exit. Jan 20 02:47:23.914169 systemd[1]: Started sshd@40-10.0.0.101:22-10.0.0.1:59116.service - OpenSSH per-connection server daemon (10.0.0.1:59116). Jan 20 02:47:23.939747 systemd-logind[1567]: Removed session 40. 
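Editor's note: "Housekeeping took longer than expected" (expected="1s", actual="1.033s" below) is kubelet warning that one iteration of its periodic pod housekeeping overran its interval. A sketch of the general pattern behind that warning, with the work simulated; this is an illustration, not kubelet's code:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := time.Second // matches expected="1s" in the log
	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	// Stand-in for the real housekeeping work; sleeps just past the interval
	// to reproduce the overrun seen in the log.
	doWork := func() { time.Sleep(1033 * time.Millisecond) }

	for i := 0; i < 2; i++ {
		<-ticker.C
		start := time.Now()
		doWork()
		if took := time.Since(start); took > interval {
			fmt.Printf("Housekeeping took longer than expected expected=%v actual=%v\n",
				interval, took)
		}
	}
}
```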
Jan 20 02:47:24.493353 sshd[6764]: Accepted publickey for core from 10.0.0.1 port 59116 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:47:24.486994 sshd-session[6764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:47:24.723838 systemd-logind[1567]: New session 41 of user core.
Jan 20 02:47:24.810062 systemd[1]: Started session-41.scope - Session 41 of User core.
Jan 20 02:47:29.427082 sshd[6773]: Connection closed by 10.0.0.1 port 59116
Jan 20 02:47:29.438930 sshd-session[6764]: pam_unix(sshd:session): session closed for user core
Jan 20 02:47:29.541166 systemd[1]: sshd@40-10.0.0.101:22-10.0.0.1:59116.service: Deactivated successfully.
Jan 20 02:47:29.577246 systemd[1]: session-41.scope: Deactivated successfully.
Jan 20 02:47:29.578143 systemd[1]: session-41.scope: Consumed 1.006s CPU time, 33.9M memory peak.
Jan 20 02:47:29.604370 systemd-logind[1567]: Session 41 logged out. Waiting for processes to exit.
Jan 20 02:47:29.670180 systemd[1]: Started sshd@41-10.0.0.101:22-10.0.0.1:46150.service - OpenSSH per-connection server daemon (10.0.0.1:46150).
Jan 20 02:47:29.707373 systemd-logind[1567]: Removed session 41.
Jan 20 02:47:30.605941 sshd[6800]: Accepted publickey for core from 10.0.0.1 port 46150 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:47:30.614079 sshd-session[6800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:47:30.825279 systemd-logind[1567]: New session 42 of user core.
Jan 20 02:47:30.882094 systemd[1]: Started session-42.scope - Session 42 of User core.
Jan 20 02:47:32.517854 kubelet[3064]: E0120 02:47:32.499991 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:47:40.549737 sshd[6809]: Connection closed by 10.0.0.1 port 46150
Jan 20 02:47:40.562336 sshd-session[6800]: pam_unix(sshd:session): session closed for user core
Jan 20 02:47:40.630864 systemd[1]: sshd@41-10.0.0.101:22-10.0.0.1:46150.service: Deactivated successfully.
Jan 20 02:47:40.710946 systemd[1]: session-42.scope: Deactivated successfully.
Jan 20 02:47:40.718850 systemd[1]: session-42.scope: Consumed 1.923s CPU time, 38.8M memory peak.
Jan 20 02:47:40.737765 systemd-logind[1567]: Session 42 logged out. Waiting for processes to exit.
Jan 20 02:47:40.786955 systemd[1]: Started sshd@42-10.0.0.101:22-10.0.0.1:52710.service - OpenSSH per-connection server daemon (10.0.0.1:52710).
Jan 20 02:47:40.793086 systemd-logind[1567]: Removed session 42.
Jan 20 02:47:41.273364 sshd[6873]: Accepted publickey for core from 10.0.0.1 port 52710 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:47:41.319960 sshd-session[6873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:47:41.475698 systemd-logind[1567]: New session 43 of user core.
Jan 20 02:47:41.518148 systemd[1]: Started session-43.scope - Session 43 of User core.
Jan 20 02:47:45.153377 sshd[6877]: Connection closed by 10.0.0.1 port 52710
Jan 20 02:47:45.168323 sshd-session[6873]: pam_unix(sshd:session): session closed for user core
Jan 20 02:47:45.276040 systemd[1]: sshd@42-10.0.0.101:22-10.0.0.1:52710.service: Deactivated successfully.
Jan 20 02:47:45.295736 systemd[1]: session-43.scope: Deactivated successfully.
Jan 20 02:47:45.325252 systemd-logind[1567]: Session 43 logged out. Waiting for processes to exit.
Jan 20 02:47:45.340271 systemd-logind[1567]: Removed session 43.
Jan 20 02:47:45.382267 systemd[1]: Started sshd@43-10.0.0.101:22-10.0.0.1:41814.service - OpenSSH per-connection server daemon (10.0.0.1:41814).
Jan 20 02:47:46.091290 sshd[6906]: Accepted publickey for core from 10.0.0.1 port 41814 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:47:46.189824 sshd-session[6906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:47:46.349707 systemd-logind[1567]: New session 44 of user core.
Jan 20 02:47:46.424185 systemd[1]: Started session-44.scope - Session 44 of User core.
Jan 20 02:47:48.117039 sshd[6915]: Connection closed by 10.0.0.1 port 41814
Jan 20 02:47:48.127819 sshd-session[6906]: pam_unix(sshd:session): session closed for user core
Jan 20 02:47:48.256020 systemd[1]: sshd@43-10.0.0.101:22-10.0.0.1:41814.service: Deactivated successfully.
Jan 20 02:47:48.296282 systemd[1]: session-44.scope: Deactivated successfully.
Jan 20 02:47:48.353382 systemd-logind[1567]: Session 44 logged out. Waiting for processes to exit.
Jan 20 02:47:48.399685 systemd-logind[1567]: Removed session 44.
Jan 20 02:47:49.315879 kubelet[3064]: E0120 02:47:49.315375 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:47:50.362116 kubelet[3064]: E0120 02:47:50.357968 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:47:52.318871 kubelet[3064]: E0120 02:47:52.318777 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:47:53.246845 systemd[1]: Started sshd@44-10.0.0.101:22-10.0.0.1:41818.service - OpenSSH per-connection server daemon (10.0.0.1:41818).
Jan 20 02:47:53.755014 sshd[6949]: Accepted publickey for core from 10.0.0.1 port 41818 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:47:53.812289 sshd-session[6949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:47:53.937149 systemd-logind[1567]: New session 45 of user core.
Jan 20 02:47:53.972273 systemd[1]: Started session-45.scope - Session 45 of User core.
Jan 20 02:47:55.722337 sshd[6952]: Connection closed by 10.0.0.1 port 41818
Jan 20 02:47:55.727214 sshd-session[6949]: pam_unix(sshd:session): session closed for user core
Jan 20 02:47:55.866912 systemd[1]: sshd@44-10.0.0.101:22-10.0.0.1:41818.service: Deactivated successfully.
Jan 20 02:47:55.955066 systemd[1]: session-45.scope: Deactivated successfully.
Jan 20 02:47:55.985042 systemd-logind[1567]: Session 45 logged out. Waiting for processes to exit.
Jan 20 02:47:56.049978 systemd-logind[1567]: Removed session 45.
Jan 20 02:48:00.824094 systemd[1]: Started sshd@45-10.0.0.101:22-10.0.0.1:39146.service - OpenSSH per-connection server daemon (10.0.0.1:39146).
Jan 20 02:48:01.898034 sshd[6988]: Accepted publickey for core from 10.0.0.1 port 39146 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:48:01.933041 sshd-session[6988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:48:01.993652 systemd-logind[1567]: New session 46 of user core.
Jan 20 02:48:02.038194 systemd[1]: Started session-46.scope - Session 46 of User core.
Jan 20 02:48:03.345371 kubelet[3064]: E0120 02:48:03.335204 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:48:03.637217 sshd[7008]: Connection closed by 10.0.0.1 port 39146
Jan 20 02:48:03.657801 sshd-session[6988]: pam_unix(sshd:session): session closed for user core
Jan 20 02:48:03.733936 systemd[1]: sshd@45-10.0.0.101:22-10.0.0.1:39146.service: Deactivated successfully.
Jan 20 02:48:03.786667 systemd[1]: session-46.scope: Deactivated successfully.
Jan 20 02:48:03.842925 systemd-logind[1567]: Session 46 logged out. Waiting for processes to exit.
Jan 20 02:48:03.920083 systemd-logind[1567]: Removed session 46.
Jan 20 02:48:08.970396 systemd[1]: Started sshd@46-10.0.0.101:22-10.0.0.1:53374.service - OpenSSH per-connection server daemon (10.0.0.1:53374).
Jan 20 02:48:09.426631 sshd[7047]: Accepted publickey for core from 10.0.0.1 port 53374 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:48:09.467714 sshd-session[7047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:48:09.573628 systemd-logind[1567]: New session 47 of user core.
Jan 20 02:48:09.599721 systemd[1]: Started session-47.scope - Session 47 of User core.
Jan 20 02:48:11.012020 sshd[7051]: Connection closed by 10.0.0.1 port 53374
Jan 20 02:48:11.015671 sshd-session[7047]: pam_unix(sshd:session): session closed for user core
Jan 20 02:48:11.083022 systemd[1]: sshd@46-10.0.0.101:22-10.0.0.1:53374.service: Deactivated successfully.
Jan 20 02:48:11.114342 systemd[1]: session-47.scope: Deactivated successfully.
Jan 20 02:48:11.194387 systemd-logind[1567]: Session 47 logged out. Waiting for processes to exit.
Jan 20 02:48:11.251997 systemd-logind[1567]: Removed session 47.
Jan 20 02:48:16.051217 systemd[1]: Started sshd@47-10.0.0.101:22-10.0.0.1:54688.service - OpenSSH per-connection server daemon (10.0.0.1:54688).
Jan 20 02:48:16.317337 sshd[7085]: Accepted publickey for core from 10.0.0.1 port 54688 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:48:16.335089 sshd-session[7085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:48:16.383369 systemd-logind[1567]: New session 48 of user core.
Jan 20 02:48:16.435017 systemd[1]: Started session-48.scope - Session 48 of User core.
Jan 20 02:48:17.488243 sshd[7088]: Connection closed by 10.0.0.1 port 54688
Jan 20 02:48:17.489268 sshd-session[7085]: pam_unix(sshd:session): session closed for user core
Jan 20 02:48:17.521064 systemd[1]: sshd@47-10.0.0.101:22-10.0.0.1:54688.service: Deactivated successfully.
Jan 20 02:48:17.547311 systemd[1]: session-48.scope: Deactivated successfully.
Jan 20 02:48:17.572294 systemd-logind[1567]: Session 48 logged out. Waiting for processes to exit.
Jan 20 02:48:17.609707 systemd-logind[1567]: Removed session 48.
Jan 20 02:48:22.583579 systemd[1]: Started sshd@48-10.0.0.101:22-10.0.0.1:54704.service - OpenSSH per-connection server daemon (10.0.0.1:54704).
Jan 20 02:48:23.160669 sshd[7123]: Accepted publickey for core from 10.0.0.1 port 54704 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:48:23.168233 sshd-session[7123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:48:23.235904 systemd-logind[1567]: New session 49 of user core.
Jan 20 02:48:23.246975 systemd[1]: Started session-49.scope - Session 49 of User core.
Jan 20 02:48:23.330909 kubelet[3064]: E0120 02:48:23.322725 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:48:24.247116 sshd[7132]: Connection closed by 10.0.0.1 port 54704
Jan 20 02:48:24.250719 sshd-session[7123]: pam_unix(sshd:session): session closed for user core
Jan 20 02:48:24.305952 systemd[1]: sshd@48-10.0.0.101:22-10.0.0.1:54704.service: Deactivated successfully.
Jan 20 02:48:24.323615 systemd[1]: session-49.scope: Deactivated successfully.
Jan 20 02:48:24.343941 systemd-logind[1567]: Session 49 logged out. Waiting for processes to exit.
Jan 20 02:48:24.374558 systemd-logind[1567]: Removed session 49.
Jan 20 02:48:29.343728 systemd[1]: Started sshd@49-10.0.0.101:22-10.0.0.1:39474.service - OpenSSH per-connection server daemon (10.0.0.1:39474).
Jan 20 02:48:29.972093 sshd[7167]: Accepted publickey for core from 10.0.0.1 port 39474 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:48:29.994046 sshd-session[7167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:48:30.061245 systemd-logind[1567]: New session 50 of user core.
Jan 20 02:48:30.092565 systemd[1]: Started session-50.scope - Session 50 of User core.
Jan 20 02:48:31.286681 sshd[7171]: Connection closed by 10.0.0.1 port 39474
Jan 20 02:48:31.333157 sshd-session[7167]: pam_unix(sshd:session): session closed for user core
Jan 20 02:48:31.392786 systemd[1]: sshd@49-10.0.0.101:22-10.0.0.1:39474.service: Deactivated successfully.
Jan 20 02:48:31.432034 systemd[1]: session-50.scope: Deactivated successfully.
Jan 20 02:48:31.464762 systemd-logind[1567]: Session 50 logged out. Waiting for processes to exit.
Jan 20 02:48:31.493340 systemd-logind[1567]: Removed session 50.
Jan 20 02:48:39.680750 systemd[1]: Started sshd@50-10.0.0.101:22-10.0.0.1:60158.service - OpenSSH per-connection server daemon (10.0.0.1:60158).
Jan 20 02:48:39.731655 kubelet[3064]: E0120 02:48:39.699300 3064 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.778s"
Jan 20 02:48:40.371005 kubelet[3064]: E0120 02:48:40.367176 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:48:40.626185 sshd[7201]: Accepted publickey for core from 10.0.0.1 port 60158 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:48:40.634861 sshd-session[7201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:48:40.683157 systemd-logind[1567]: New session 51 of user core.
Jan 20 02:48:40.706584 systemd[1]: Started session-51.scope - Session 51 of User core.
Jan 20 02:48:41.692097 sshd[7225]: Connection closed by 10.0.0.1 port 60158
Jan 20 02:48:41.708664 sshd-session[7201]: pam_unix(sshd:session): session closed for user core
Jan 20 02:48:41.751197 systemd[1]: sshd@50-10.0.0.101:22-10.0.0.1:60158.service: Deactivated successfully.
Jan 20 02:48:41.782726 systemd[1]: session-51.scope: Deactivated successfully.
Jan 20 02:48:42.074723 systemd-logind[1567]: Session 51 logged out. Waiting for processes to exit.
Jan 20 02:48:42.136171 systemd-logind[1567]: Removed session 51.
Jan 20 02:48:46.823687 systemd[1]: Started sshd@51-10.0.0.101:22-10.0.0.1:56406.service - OpenSSH per-connection server daemon (10.0.0.1:56406).
Jan 20 02:48:47.238951 sshd[7263]: Accepted publickey for core from 10.0.0.1 port 56406 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:48:47.243565 sshd-session[7263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:48:47.285724 systemd-logind[1567]: New session 52 of user core.
Jan 20 02:48:47.332951 systemd[1]: Started session-52.scope - Session 52 of User core.
Jan 20 02:48:48.625576 sshd[7266]: Connection closed by 10.0.0.1 port 56406
Jan 20 02:48:48.627328 sshd-session[7263]: pam_unix(sshd:session): session closed for user core
Jan 20 02:48:48.677270 systemd[1]: sshd@51-10.0.0.101:22-10.0.0.1:56406.service: Deactivated successfully.
Jan 20 02:48:48.692955 systemd[1]: session-52.scope: Deactivated successfully.
Jan 20 02:48:48.730827 systemd-logind[1567]: Session 52 logged out. Waiting for processes to exit.
Jan 20 02:48:48.737788 systemd-logind[1567]: Removed session 52.
Jan 20 02:48:50.383129 kubelet[3064]: E0120 02:48:50.374822 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:48:53.791980 systemd[1]: Started sshd@52-10.0.0.101:22-10.0.0.1:56414.service - OpenSSH per-connection server daemon (10.0.0.1:56414).
Jan 20 02:48:54.444867 sshd[7301]: Accepted publickey for core from 10.0.0.1 port 56414 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:48:54.486992 sshd-session[7301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:48:54.579000 systemd-logind[1567]: New session 53 of user core.
Jan 20 02:48:54.659222 systemd[1]: Started session-53.scope - Session 53 of User core.
Jan 20 02:48:55.460180 kubelet[3064]: E0120 02:48:55.422755 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:48:56.303209 sshd[7304]: Connection closed by 10.0.0.1 port 56414
Jan 20 02:48:56.305804 sshd-session[7301]: pam_unix(sshd:session): session closed for user core
Jan 20 02:48:56.336279 systemd[1]: sshd@52-10.0.0.101:22-10.0.0.1:56414.service: Deactivated successfully.
Jan 20 02:48:56.357348 systemd[1]: session-53.scope: Deactivated successfully.
Jan 20 02:48:56.371250 systemd-logind[1567]: Session 53 logged out. Waiting for processes to exit.
Jan 20 02:48:56.415640 systemd-logind[1567]: Removed session 53.
Jan 20 02:49:00.367958 kubelet[3064]: E0120 02:49:00.367901 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:49:01.645763 systemd[1]: Started sshd@53-10.0.0.101:22-10.0.0.1:48394.service - OpenSSH per-connection server daemon (10.0.0.1:48394).
Jan 20 02:49:02.490041 sshd[7344]: Accepted publickey for core from 10.0.0.1 port 48394 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:49:02.546309 sshd-session[7344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:49:02.696021 systemd-logind[1567]: New session 54 of user core.
Jan 20 02:49:02.779716 systemd[1]: Started session-54.scope - Session 54 of User core.
Jan 20 02:49:04.556290 sshd[7361]: Connection closed by 10.0.0.1 port 48394
Jan 20 02:49:04.578017 sshd-session[7344]: pam_unix(sshd:session): session closed for user core
Jan 20 02:49:04.714391 systemd[1]: sshd@53-10.0.0.101:22-10.0.0.1:48394.service: Deactivated successfully.
Jan 20 02:49:04.754713 systemd[1]: session-54.scope: Deactivated successfully.
Jan 20 02:49:04.818010 systemd-logind[1567]: Session 54 logged out. Waiting for processes to exit.
Jan 20 02:49:04.872061 systemd-logind[1567]: Removed session 54.
Jan 20 02:49:08.434859 kubelet[3064]: E0120 02:49:08.415897 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:49:09.748377 systemd[1]: Started sshd@54-10.0.0.101:22-10.0.0.1:45048.service - OpenSSH per-connection server daemon (10.0.0.1:45048).
Jan 20 02:49:10.673585 sshd[7398]: Accepted publickey for core from 10.0.0.1 port 45048 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:49:10.710764 sshd-session[7398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:49:10.868846 systemd-logind[1567]: New session 55 of user core.
Jan 20 02:49:10.913640 systemd[1]: Started session-55.scope - Session 55 of User core.
Jan 20 02:49:12.519537 sshd[7401]: Connection closed by 10.0.0.1 port 45048
Jan 20 02:49:12.568391 sshd-session[7398]: pam_unix(sshd:session): session closed for user core
Jan 20 02:49:12.662779 systemd[1]: sshd@54-10.0.0.101:22-10.0.0.1:45048.service: Deactivated successfully.
Jan 20 02:49:12.730017 systemd[1]: session-55.scope: Deactivated successfully.
Jan 20 02:49:12.785829 systemd-logind[1567]: Session 55 logged out. Waiting for processes to exit.
Jan 20 02:49:12.829751 systemd-logind[1567]: Removed session 55.
Jan 20 02:49:17.663393 systemd[1]: Started sshd@55-10.0.0.101:22-10.0.0.1:32892.service - OpenSSH per-connection server daemon (10.0.0.1:32892).
Jan 20 02:49:18.372692 sshd[7442]: Accepted publickey for core from 10.0.0.1 port 32892 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:49:18.402907 sshd-session[7442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:49:18.612365 systemd-logind[1567]: New session 56 of user core.
Jan 20 02:49:18.706052 systemd[1]: Started session-56.scope - Session 56 of User core.
Jan 20 02:49:21.199712 sshd[7445]: Connection closed by 10.0.0.1 port 32892
Jan 20 02:49:21.205975 sshd-session[7442]: pam_unix(sshd:session): session closed for user core
Jan 20 02:49:21.288103 systemd[1]: sshd@55-10.0.0.101:22-10.0.0.1:32892.service: Deactivated successfully.
Jan 20 02:49:21.315872 systemd[1]: session-56.scope: Deactivated successfully.
Jan 20 02:49:21.349878 systemd-logind[1567]: Session 56 logged out. Waiting for processes to exit.
Jan 20 02:49:21.434704 systemd-logind[1567]: Removed session 56.
Jan 20 02:49:23.316530 kubelet[3064]: E0120 02:49:23.313181 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:49:26.394990 systemd[1]: Started sshd@56-10.0.0.101:22-10.0.0.1:43592.service - OpenSSH per-connection server daemon (10.0.0.1:43592).
Jan 20 02:49:26.999875 sshd[7494]: Accepted publickey for core from 10.0.0.1 port 43592 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:49:27.020924 sshd-session[7494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:49:27.102036 systemd-logind[1567]: New session 57 of user core.
Jan 20 02:49:27.184852 systemd[1]: Started session-57.scope - Session 57 of User core.
Jan 20 02:49:28.238740 sshd[7497]: Connection closed by 10.0.0.1 port 43592
Jan 20 02:49:28.250852 sshd-session[7494]: pam_unix(sshd:session): session closed for user core
Jan 20 02:49:28.290784 systemd[1]: sshd@56-10.0.0.101:22-10.0.0.1:43592.service: Deactivated successfully.
Jan 20 02:49:28.330378 systemd[1]: session-57.scope: Deactivated successfully.
Jan 20 02:49:28.389650 systemd-logind[1567]: Session 57 logged out. Waiting for processes to exit.
Jan 20 02:49:28.435191 systemd-logind[1567]: Removed session 57.
Jan 20 02:49:33.402023 systemd[1]: Started sshd@57-10.0.0.101:22-10.0.0.1:43596.service - OpenSSH per-connection server daemon (10.0.0.1:43596).
Jan 20 02:49:34.102189 sshd[7537]: Accepted publickey for core from 10.0.0.1 port 43596 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:49:34.129358 sshd-session[7537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:49:34.226828 systemd-logind[1567]: New session 58 of user core.
Jan 20 02:49:34.265657 systemd[1]: Started session-58.scope - Session 58 of User core.
Jan 20 02:49:35.398648 sshd[7540]: Connection closed by 10.0.0.1 port 43596
Jan 20 02:49:35.404064 sshd-session[7537]: pam_unix(sshd:session): session closed for user core
Jan 20 02:49:35.592118 systemd[1]: sshd@57-10.0.0.101:22-10.0.0.1:43596.service: Deactivated successfully.
Jan 20 02:49:35.677044 systemd[1]: session-58.scope: Deactivated successfully.
Jan 20 02:49:35.782074 systemd-logind[1567]: Session 58 logged out. Waiting for processes to exit.
Jan 20 02:49:35.858744 systemd-logind[1567]: Removed session 58.
Jan 20 02:49:36.688102 containerd[1593]: time="2026-01-20T02:49:36.686351333Z" level=warning msg="container event discarded" container=3ff202166c8bde64482b5e91e22474d49141c60dab82e5196d415713af05b971 type=CONTAINER_STOPPED_EVENT
Jan 20 02:49:38.589245 containerd[1593]: time="2026-01-20T02:49:38.589172149Z" level=warning msg="container event discarded" container=eb9982a01a2df58d7513b9bbb5203836b6138ac629d68d9af8997330ff2e5ec8 type=CONTAINER_DELETED_EVENT
Jan 20 02:49:40.620098 systemd[1]: Started sshd@58-10.0.0.101:22-10.0.0.1:48938.service - OpenSSH per-connection server daemon (10.0.0.1:48938).
Jan 20 02:49:41.491909 sshd[7577]: Accepted publickey for core from 10.0.0.1 port 48938 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:49:41.519699 sshd-session[7577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:49:41.616223 systemd-logind[1567]: New session 59 of user core.
Jan 20 02:49:41.680234 systemd[1]: Started session-59.scope - Session 59 of User core.
Jan 20 02:49:43.641706 sshd[7580]: Connection closed by 10.0.0.1 port 48938
Jan 20 02:49:43.650055 sshd-session[7577]: pam_unix(sshd:session): session closed for user core
Jan 20 02:49:43.667580 systemd[1]: sshd@58-10.0.0.101:22-10.0.0.1:48938.service: Deactivated successfully.
Jan 20 02:49:43.673213 systemd[1]: session-59.scope: Deactivated successfully.
Jan 20 02:49:43.696236 systemd-logind[1567]: Session 59 logged out. Waiting for processes to exit.
Jan 20 02:49:43.703959 systemd-logind[1567]: Removed session 59.
Jan 20 02:49:44.354852 kubelet[3064]: E0120 02:49:44.354042 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:49:47.325147 kubelet[3064]: E0120 02:49:47.316276 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:49:48.840224 systemd[1]: Started sshd@59-10.0.0.101:22-10.0.0.1:52704.service - OpenSSH per-connection server daemon (10.0.0.1:52704).
Jan 20 02:49:49.846951 sshd[7625]: Accepted publickey for core from 10.0.0.1 port 52704 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:49:49.864949 sshd-session[7625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:49:50.055751 systemd-logind[1567]: New session 60 of user core.
Jan 20 02:49:50.127895 systemd[1]: Started session-60.scope - Session 60 of User core.
Jan 20 02:49:52.114735 sshd[7642]: Connection closed by 10.0.0.1 port 52704
Jan 20 02:49:52.116898 sshd-session[7625]: pam_unix(sshd:session): session closed for user core
Jan 20 02:49:52.178635 systemd[1]: sshd@59-10.0.0.101:22-10.0.0.1:52704.service: Deactivated successfully.
Jan 20 02:49:52.247645 systemd[1]: session-60.scope: Deactivated successfully.
Jan 20 02:49:52.298260 systemd-logind[1567]: Session 60 logged out. Waiting for processes to exit.
Jan 20 02:49:52.315982 systemd-logind[1567]: Removed session 60.
Jan 20 02:49:56.331858 containerd[1593]: time="2026-01-20T02:49:56.331777340Z" level=warning msg="container event discarded" container=0abc3084789e127ba3995d46519fe5ba664721980b85502d535ebb6487151adc type=CONTAINER_CREATED_EVENT
Jan 20 02:49:57.214910 systemd[1]: Started sshd@60-10.0.0.101:22-10.0.0.1:42670.service - OpenSSH per-connection server daemon (10.0.0.1:42670).
Jan 20 02:49:57.893074 sshd[7678]: Accepted publickey for core from 10.0.0.1 port 42670 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:49:57.914135 sshd-session[7678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:49:57.988700 systemd-logind[1567]: New session 61 of user core.
Jan 20 02:49:58.023203 systemd[1]: Started session-61.scope - Session 61 of User core.
Jan 20 02:49:58.523176 containerd[1593]: time="2026-01-20T02:49:58.514286538Z" level=warning msg="container event discarded" container=0abc3084789e127ba3995d46519fe5ba664721980b85502d535ebb6487151adc type=CONTAINER_STARTED_EVENT
Jan 20 02:49:59.526084 sshd[7681]: Connection closed by 10.0.0.1 port 42670
Jan 20 02:49:59.527890 sshd-session[7678]: pam_unix(sshd:session): session closed for user core
Jan 20 02:49:59.660892 systemd-logind[1567]: Session 61 logged out. Waiting for processes to exit.
Jan 20 02:49:59.679311 systemd[1]: sshd@60-10.0.0.101:22-10.0.0.1:42670.service: Deactivated successfully.
Jan 20 02:49:59.719858 systemd[1]: session-61.scope: Deactivated successfully.
Jan 20 02:49:59.748932 systemd-logind[1567]: Removed session 61.
Jan 20 02:50:04.643181 systemd[1]: Started sshd@61-10.0.0.101:22-10.0.0.1:38560.service - OpenSSH per-connection server daemon (10.0.0.1:38560).
Jan 20 02:50:05.305787 sshd[7721]: Accepted publickey for core from 10.0.0.1 port 38560 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:50:05.365243 sshd-session[7721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:50:05.433765 systemd-logind[1567]: New session 62 of user core.
Jan 20 02:50:05.515062 systemd[1]: Started session-62.scope - Session 62 of User core.
Jan 20 02:50:07.712756 sshd[7724]: Connection closed by 10.0.0.1 port 38560
Jan 20 02:50:07.710198 sshd-session[7721]: pam_unix(sshd:session): session closed for user core
Jan 20 02:50:07.796006 systemd[1]: sshd@61-10.0.0.101:22-10.0.0.1:38560.service: Deactivated successfully.
Jan 20 02:50:07.841321 systemd[1]: session-62.scope: Deactivated successfully.
Jan 20 02:50:07.885049 systemd-logind[1567]: Session 62 logged out. Waiting for processes to exit.
Jan 20 02:50:07.915382 systemd-logind[1567]: Removed session 62.
Jan 20 02:50:08.350896 kubelet[3064]: E0120 02:50:08.349986 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:50:09.343296 kubelet[3064]: E0120 02:50:09.334192 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:50:10.385804 kubelet[3064]: E0120 02:50:10.364375 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:50:12.893956 systemd[1]: Started sshd@62-10.0.0.101:22-10.0.0.1:38576.service - OpenSSH per-connection server daemon (10.0.0.1:38576).
Jan 20 02:50:13.980676 sshd[7775]: Accepted publickey for core from 10.0.0.1 port 38576 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:50:13.986717 sshd-session[7775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:50:14.019769 systemd-logind[1567]: New session 63 of user core.
Jan 20 02:50:14.047911 systemd[1]: Started session-63.scope - Session 63 of User core.
Jan 20 02:50:14.794924 sshd[7778]: Connection closed by 10.0.0.1 port 38576
Jan 20 02:50:14.798221 sshd-session[7775]: pam_unix(sshd:session): session closed for user core
Jan 20 02:50:14.838893 systemd[1]: sshd@62-10.0.0.101:22-10.0.0.1:38576.service: Deactivated successfully.
Jan 20 02:50:14.860803 systemd[1]: session-63.scope: Deactivated successfully.
Jan 20 02:50:14.879150 systemd-logind[1567]: Session 63 logged out. Waiting for processes to exit.
Jan 20 02:50:14.911369 systemd-logind[1567]: Removed session 63.
Jan 20 02:50:19.918979 systemd[1]: Started sshd@63-10.0.0.101:22-10.0.0.1:44894.service - OpenSSH per-connection server daemon (10.0.0.1:44894).
Jan 20 02:50:20.700242 sshd[7811]: Accepted publickey for core from 10.0.0.1 port 44894 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:50:20.706163 sshd-session[7811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:50:20.776835 systemd-logind[1567]: New session 64 of user core.
Jan 20 02:50:20.808965 systemd[1]: Started session-64.scope - Session 64 of User core.
Jan 20 02:50:21.972746 sshd[7820]: Connection closed by 10.0.0.1 port 44894
Jan 20 02:50:21.975014 sshd-session[7811]: pam_unix(sshd:session): session closed for user core
Jan 20 02:50:22.023953 systemd[1]: sshd@63-10.0.0.101:22-10.0.0.1:44894.service: Deactivated successfully.
Jan 20 02:50:22.064712 systemd[1]: session-64.scope: Deactivated successfully.
Jan 20 02:50:22.093197 systemd-logind[1567]: Session 64 logged out. Waiting for processes to exit.
Jan 20 02:50:22.113091 systemd-logind[1567]: Removed session 64.
Jan 20 02:50:23.310690 kubelet[3064]: E0120 02:50:23.310114 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:50:26.358042 kubelet[3064]: E0120 02:50:26.355237 3064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:50:27.030320 systemd[1]: Started sshd@64-10.0.0.101:22-10.0.0.1:44784.service - OpenSSH per-connection server daemon (10.0.0.1:44784).
Jan 20 02:50:27.559953 sshd[7854]: Accepted publickey for core from 10.0.0.1 port 44784 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:50:27.581775 sshd-session[7854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:50:27.629207 systemd-logind[1567]: New session 65 of user core.
Jan 20 02:50:27.673830 systemd[1]: Started session-65.scope - Session 65 of User core.
Jan 20 02:50:29.629358 sshd[7857]: Connection closed by 10.0.0.1 port 44784
Jan 20 02:50:29.638187 sshd-session[7854]: pam_unix(sshd:session): session closed for user core
Jan 20 02:50:30.001352 systemd[1]: sshd@64-10.0.0.101:22-10.0.0.1:44784.service: Deactivated successfully.
Jan 20 02:50:30.371667 systemd-logind[1567]: Session 65 logged out. Waiting for processes to exit.
Jan 20 02:50:30.508964 systemd[1]: session-65.scope: Deactivated successfully.
Jan 20 02:50:30.605263 systemd-logind[1567]: Removed session 65.
Jan 20 02:50:35.011662 systemd[1]: Started sshd@65-10.0.0.101:22-10.0.0.1:34892.service - OpenSSH per-connection server daemon (10.0.0.1:34892).
Jan 20 02:50:36.605864 sshd[7899]: Accepted publickey for core from 10.0.0.1 port 34892 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:50:36.612339 sshd-session[7899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:50:36.798755 systemd-logind[1567]: New session 66 of user core.
Jan 20 02:50:36.816913 systemd[1]: Started session-66.scope - Session 66 of User core.
Jan 20 02:50:38.768389 sshd[7915]: Connection closed by 10.0.0.1 port 34892
Jan 20 02:50:38.773145 sshd-session[7899]: pam_unix(sshd:session): session closed for user core
Jan 20 02:50:38.834207 systemd[1]: sshd@65-10.0.0.101:22-10.0.0.1:34892.service: Deactivated successfully.
Jan 20 02:50:38.871834 systemd[1]: session-66.scope: Deactivated successfully.
Jan 20 02:50:38.909222 systemd-logind[1567]: Session 66 logged out. Waiting for processes to exit.
Jan 20 02:50:38.962041 systemd-logind[1567]: Removed session 66.
Jan 20 02:50:43.853910 systemd[1]: Started sshd@66-10.0.0.101:22-10.0.0.1:34894.service - OpenSSH per-connection server daemon (10.0.0.1:34894).
Jan 20 02:50:44.434068 sshd[7954]: Accepted publickey for core from 10.0.0.1 port 34894 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I
Jan 20 02:50:44.442819 sshd-session[7954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:50:44.529191 systemd-logind[1567]: New session 67 of user core.
Jan 20 02:50:44.604783 systemd[1]: Started session-67.scope - Session 67 of User core.
Jan 20 02:50:45.971949 sshd[7957]: Connection closed by 10.0.0.1 port 34894
Jan 20 02:50:45.974017 sshd-session[7954]: pam_unix(sshd:session): session closed for user core
Jan 20 02:50:46.042286 systemd[1]: sshd@66-10.0.0.101:22-10.0.0.1:34894.service: Deactivated successfully.
Jan 20 02:50:46.076384 systemd[1]: session-67.scope: Deactivated successfully.
Jan 20 02:50:46.112909 systemd-logind[1567]: Session 67 logged out. Waiting for processes to exit.
Jan 20 02:50:46.125028 systemd-logind[1567]: Removed session 67.