Dec 16 03:30:22.405093 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Dec 16 00:18:19 -00 2025 Dec 16 03:30:22.405127 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 03:30:22.405140 kernel: BIOS-provided physical RAM map: Dec 16 03:30:22.405150 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable Dec 16 03:30:22.405159 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved Dec 16 03:30:22.405172 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable Dec 16 03:30:22.405183 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved Dec 16 03:30:22.405193 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable Dec 16 03:30:22.405207 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Dec 16 03:30:22.405217 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Dec 16 03:30:22.405238 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Dec 16 03:30:22.405248 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Dec 16 03:30:22.405258 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Dec 16 03:30:22.405267 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Dec 16 03:30:22.405282 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable Dec 16 03:30:22.405293 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved Dec 16 03:30:22.405306 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Dec 16 03:30:22.405316 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 16 03:30:22.405329 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 16 03:30:22.405338 kernel: NX (Execute Disable) protection: active Dec 16 03:30:22.405348 kernel: APIC: Static calls initialized Dec 16 03:30:22.405358 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable Dec 16 03:30:22.405367 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable Dec 16 03:30:22.405377 kernel: extended physical RAM map: Dec 16 03:30:22.405406 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable Dec 16 03:30:22.405416 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved Dec 16 03:30:22.405426 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable Dec 16 03:30:22.405437 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved Dec 16 03:30:22.405448 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable Dec 16 03:30:22.405462 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable Dec 16 03:30:22.405473 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable Dec 16 03:30:22.405483 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable Dec 16 03:30:22.405493 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable Dec 16 03:30:22.405504 kernel: reserve setup_data: [mem 
0x000000009b8ed000-0x000000009bb6cfff] reserved Dec 16 03:30:22.405514 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Dec 16 03:30:22.405524 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Dec 16 03:30:22.405534 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Dec 16 03:30:22.405545 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Dec 16 03:30:22.405555 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Dec 16 03:30:22.405569 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable Dec 16 03:30:22.405585 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved Dec 16 03:30:22.405596 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Dec 16 03:30:22.405607 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 16 03:30:22.405619 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 16 03:30:22.405632 kernel: efi: EFI v2.7 by EDK II Dec 16 03:30:22.405644 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018 Dec 16 03:30:22.405655 kernel: random: crng init done Dec 16 03:30:22.405666 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Dec 16 03:30:22.405677 kernel: secureboot: Secure boot enabled Dec 16 03:30:22.405688 kernel: SMBIOS 2.8 present. Dec 16 03:30:22.405699 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Dec 16 03:30:22.405710 kernel: DMI: Memory slots populated: 1/1 Dec 16 03:30:22.405721 kernel: Hypervisor detected: KVM Dec 16 03:30:22.405735 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Dec 16 03:30:22.405746 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 16 03:30:22.405758 kernel: kvm-clock: using sched offset of 6470474017 cycles Dec 16 03:30:22.405770 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 16 03:30:22.405782 kernel: tsc: Detected 2794.748 MHz processor Dec 16 03:30:22.405794 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 16 03:30:22.405806 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 16 03:30:22.405818 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Dec 16 03:30:22.405840 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Dec 16 03:30:22.405858 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 16 03:30:22.405875 kernel: Using GB pages for direct mapping Dec 16 03:30:22.405890 kernel: ACPI: Early table checksum verification disabled Dec 16 03:30:22.405904 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) Dec 16 03:30:22.405919 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Dec 16 03:30:22.405934 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 03:30:22.405948 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 03:30:22.405966 kernel: ACPI: FACS 0x000000009BBDD000 000040 Dec 16 03:30:22.405981 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 03:30:22.405995 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 03:30:22.406010 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 
00000001 BXPC 00000001) Dec 16 03:30:22.406023 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 03:30:22.406034 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) Dec 16 03:30:22.406046 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] Dec 16 03:30:22.406060 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236] Dec 16 03:30:22.406072 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] Dec 16 03:30:22.406084 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] Dec 16 03:30:22.406095 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] Dec 16 03:30:22.406107 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] Dec 16 03:30:22.406235 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] Dec 16 03:30:22.406247 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] Dec 16 03:30:22.406261 kernel: No NUMA configuration found Dec 16 03:30:22.406273 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] Dec 16 03:30:22.406285 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff] Dec 16 03:30:22.406296 kernel: Zone ranges: Dec 16 03:30:22.406308 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 16 03:30:22.406320 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] Dec 16 03:30:22.406331 kernel: Normal empty Dec 16 03:30:22.406345 kernel: Device empty Dec 16 03:30:22.406357 kernel: Movable zone start for each node Dec 16 03:30:22.406368 kernel: Early memory node ranges Dec 16 03:30:22.406380 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] Dec 16 03:30:22.406409 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] Dec 16 03:30:22.406421 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] Dec 16 03:30:22.406432 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] Dec 16 03:30:22.406444 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] Dec 16 03:30:22.406459 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] Dec 16 03:30:22.406471 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 16 03:30:22.406483 kernel: On node 0, zone DMA: 32 pages in unavailable ranges Dec 16 03:30:22.406494 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 16 03:30:22.406505 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 16 03:30:22.406517 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Dec 16 03:30:22.406529 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges Dec 16 03:30:22.406544 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 16 03:30:22.406556 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 16 03:30:22.406568 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 16 03:30:22.406579 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 16 03:30:22.406595 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 16 03:30:22.406606 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 16 03:30:22.406619 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 16 03:30:22.406633 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 16 03:30:22.406645 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 16 03:30:22.406657 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 16 03:30:22.406668 kernel: TSC 
deadline timer available Dec 16 03:30:22.406680 kernel: CPU topo: Max. logical packages: 1 Dec 16 03:30:22.406692 kernel: CPU topo: Max. logical dies: 1 Dec 16 03:30:22.406715 kernel: CPU topo: Max. dies per package: 1 Dec 16 03:30:22.406729 kernel: CPU topo: Max. threads per core: 1 Dec 16 03:30:22.406744 kernel: CPU topo: Num. cores per package: 4 Dec 16 03:30:22.406759 kernel: CPU topo: Num. threads per package: 4 Dec 16 03:30:22.406780 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Dec 16 03:30:22.406796 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 16 03:30:22.406811 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 16 03:30:22.406826 kernel: kvm-guest: setup PV sched yield Dec 16 03:30:22.406845 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Dec 16 03:30:22.406859 kernel: Booting paravirtualized kernel on KVM Dec 16 03:30:22.406873 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 16 03:30:22.406886 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Dec 16 03:30:22.406898 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Dec 16 03:30:22.406910 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Dec 16 03:30:22.406922 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 16 03:30:22.406937 kernel: kvm-guest: PV spinlocks enabled Dec 16 03:30:22.406949 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 16 03:30:22.406962 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 03:30:22.406975 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 03:30:22.406988 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 03:30:22.407000 kernel: Fallback order for Node 0: 0 Dec 16 03:30:22.407012 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054 Dec 16 03:30:22.407027 kernel: Policy zone: DMA32 Dec 16 03:30:22.407039 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 03:30:22.407052 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 16 03:30:22.407064 kernel: ftrace: allocating 40103 entries in 157 pages Dec 16 03:30:22.407076 kernel: ftrace: allocated 157 pages with 5 groups Dec 16 03:30:22.407088 kernel: Dynamic Preempt: voluntary Dec 16 03:30:22.407100 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 03:30:22.407115 kernel: rcu: RCU event tracing is enabled. Dec 16 03:30:22.407128 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 16 03:30:22.407141 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 03:30:22.407153 kernel: Rude variant of Tasks RCU enabled. Dec 16 03:30:22.407165 kernel: Tracing variant of Tasks RCU enabled. Dec 16 03:30:22.407177 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 03:30:22.407188 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 16 03:30:22.407203 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
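A quick arithmetic check of the BIOS-e820 map near the top of this log: summing the five ranges marked "usable" (each end address is inclusive) and subtracting the 4 KiB page 0 that the kernel re-marks as reserved ("e820: update [mem 0x00000000-0x00000fff] usable ==> reserved") reproduces the 2552216K total that appears later in the "Memory: 2425600K/2552216K available" line. A minimal sketch, with the range boundaries copied from the log:

    # Sum the "usable" BIOS-e820 ranges logged above; end addresses are inclusive,
    # so each range covers end - start + 1 bytes. Page 0 (4 KiB) is subtracted
    # because the kernel re-marks it reserved right after printing the map.
    usable = [
        (0x0000000000000000, 0x000000000002ffff),
        (0x0000000000050000, 0x000000000009efff),
        (0x0000000000100000, 0x000000009b8ecfff),
        (0x000000009bbff000, 0x000000009bfb0fff),
        (0x000000009bfb7000, 0x000000009bffffff),
    ]

    total = sum(end - start + 1 for start, end in usable) - 0x1000  # minus page 0
    print(total // 1024, "KiB")   # -> 2552216 KiB, matching the later "Memory:" line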
Dec 16 03:30:22.407215 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 16 03:30:22.407240 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 16 03:30:22.407252 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 16 03:30:22.407265 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 03:30:22.407277 kernel: Console: colour dummy device 80x25 Dec 16 03:30:22.407289 kernel: printk: legacy console [ttyS0] enabled Dec 16 03:30:22.407304 kernel: ACPI: Core revision 20240827 Dec 16 03:30:22.407316 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 16 03:30:22.407328 kernel: APIC: Switch to symmetric I/O mode setup Dec 16 03:30:22.407340 kernel: x2apic enabled Dec 16 03:30:22.407352 kernel: APIC: Switched APIC routing to: physical x2apic Dec 16 03:30:22.407364 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Dec 16 03:30:22.407377 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Dec 16 03:30:22.407406 kernel: kvm-guest: setup PV IPIs Dec 16 03:30:22.407422 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 16 03:30:22.407434 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Dec 16 03:30:22.407447 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Dec 16 03:30:22.407458 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 16 03:30:22.407471 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 16 03:30:22.407483 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 16 03:30:22.407495 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 16 03:30:22.407513 kernel: Spectre V2 : Mitigation: Retpolines Dec 16 03:30:22.407526 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 16 03:30:22.407538 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 16 03:30:22.407550 kernel: active return thunk: retbleed_return_thunk Dec 16 03:30:22.407562 kernel: RETBleed: Mitigation: untrained return thunk Dec 16 03:30:22.407574 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 16 03:30:22.407586 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 16 03:30:22.407602 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Dec 16 03:30:22.407615 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Dec 16 03:30:22.407627 kernel: active return thunk: srso_return_thunk Dec 16 03:30:22.407639 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Dec 16 03:30:22.407652 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 16 03:30:22.407664 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 16 03:30:22.407679 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 16 03:30:22.407691 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 16 03:30:22.407703 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
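The Spectre V1/V2, RETBleed and Speculative Return Stack Overflow decisions logged above are also summarized by the kernel under sysfs, which is usually the easier place to check mitigation state after boot. A minimal sketch, assuming a kernel recent enough to expose /sys/devices/system/cpu/vulnerabilities (present since 4.15):

    # List the mitigation status the kernel exposes for each known CPU vulnerability.
    # These files report the same decisions printed during boot (Retpolines, Safe RET,
    # untrained return thunk, ...).
    from pathlib import Path

    VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

    for entry in sorted(VULN_DIR.iterdir()):
        print(f"{entry.name:32s} {entry.read_text().strip()}")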
Dec 16 03:30:22.407715 kernel: Freeing SMP alternatives memory: 32K Dec 16 03:30:22.407727 kernel: pid_max: default: 32768 minimum: 301 Dec 16 03:30:22.407739 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 03:30:22.407752 kernel: landlock: Up and running. Dec 16 03:30:22.407766 kernel: SELinux: Initializing. Dec 16 03:30:22.407778 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 03:30:22.407790 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 03:30:22.407806 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 16 03:30:22.407821 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 16 03:30:22.407836 kernel: ... version: 0 Dec 16 03:30:22.407855 kernel: ... bit width: 48 Dec 16 03:30:22.407873 kernel: ... generic registers: 6 Dec 16 03:30:22.407888 kernel: ... value mask: 0000ffffffffffff Dec 16 03:30:22.407902 kernel: ... max period: 00007fffffffffff Dec 16 03:30:22.407915 kernel: ... fixed-purpose events: 0 Dec 16 03:30:22.407929 kernel: ... event mask: 000000000000003f Dec 16 03:30:22.407943 kernel: signal: max sigframe size: 1776 Dec 16 03:30:22.407959 kernel: rcu: Hierarchical SRCU implementation. Dec 16 03:30:22.407976 kernel: rcu: Max phase no-delay instances is 400. Dec 16 03:30:22.407999 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 03:30:22.408017 kernel: smp: Bringing up secondary CPUs ... Dec 16 03:30:22.408032 kernel: smpboot: x86: Booting SMP configuration: Dec 16 03:30:22.408045 kernel: .... node #0, CPUs: #1 #2 #3 Dec 16 03:30:22.408057 kernel: smp: Brought up 1 node, 4 CPUs Dec 16 03:30:22.408068 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 16 03:30:22.408081 kernel: Memory: 2425600K/2552216K available (14336K kernel code, 2444K rwdata, 31636K rodata, 15556K init, 2484K bss, 120680K reserved, 0K cma-reserved) Dec 16 03:30:22.408097 kernel: devtmpfs: initialized Dec 16 03:30:22.408109 kernel: x86/mm: Memory block size: 128MB Dec 16 03:30:22.408121 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) Dec 16 03:30:22.408134 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) Dec 16 03:30:22.408146 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 03:30:22.408159 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 16 03:30:22.408171 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 03:30:22.408185 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 03:30:22.408198 kernel: audit: initializing netlink subsys (disabled) Dec 16 03:30:22.408210 kernel: audit: type=2000 audit(1765855819.047:1): state=initialized audit_enabled=0 res=1 Dec 16 03:30:22.408233 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 03:30:22.408246 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 16 03:30:22.408258 kernel: cpuidle: using governor menu Dec 16 03:30:22.408270 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 03:30:22.408285 kernel: dca service started, version 1.12.1 Dec 16 03:30:22.408298 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Dec 16 03:30:22.408310 kernel: PCI: Using configuration type 1 for base access Dec 16 03:30:22.408323 kernel: kprobes: kprobe jump-optimization 
is enabled. All kprobes are optimized if possible. Dec 16 03:30:22.408335 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 03:30:22.408347 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 03:30:22.408359 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 03:30:22.408374 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 03:30:22.408431 kernel: ACPI: Added _OSI(Module Device) Dec 16 03:30:22.408444 kernel: ACPI: Added _OSI(Processor Device) Dec 16 03:30:22.408456 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 03:30:22.408468 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 03:30:22.408480 kernel: ACPI: Interpreter enabled Dec 16 03:30:22.408492 kernel: ACPI: PM: (supports S0 S5) Dec 16 03:30:22.408508 kernel: ACPI: Using IOAPIC for interrupt routing Dec 16 03:30:22.408520 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 16 03:30:22.408533 kernel: PCI: Using E820 reservations for host bridge windows Dec 16 03:30:22.408545 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 16 03:30:22.408557 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 16 03:30:22.408897 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 16 03:30:22.409119 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 16 03:30:22.409344 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 16 03:30:22.409360 kernel: PCI host bridge to bus 0000:00 Dec 16 03:30:22.409607 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 16 03:30:22.409806 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 16 03:30:22.409997 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 16 03:30:22.410191 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Dec 16 03:30:22.410410 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Dec 16 03:30:22.410620 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Dec 16 03:30:22.410815 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 16 03:30:22.411087 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Dec 16 03:30:22.411336 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Dec 16 03:30:22.411584 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Dec 16 03:30:22.411809 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Dec 16 03:30:22.412029 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Dec 16 03:30:22.412248 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 16 03:30:22.412490 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Dec 16 03:30:22.412708 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Dec 16 03:30:22.412946 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Dec 16 03:30:22.413169 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Dec 16 03:30:22.413417 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 16 03:30:22.413642 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Dec 16 03:30:22.413883 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Dec 16 03:30:22.414103 
kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Dec 16 03:30:22.414359 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 16 03:30:22.414607 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Dec 16 03:30:22.414817 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Dec 16 03:30:22.415023 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Dec 16 03:30:22.415242 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Dec 16 03:30:22.415485 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Dec 16 03:30:22.415696 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 16 03:30:22.415943 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Dec 16 03:30:22.416172 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Dec 16 03:30:22.416410 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Dec 16 03:30:22.416650 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Dec 16 03:30:22.416884 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Dec 16 03:30:22.416899 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 16 03:30:22.416911 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 16 03:30:22.416923 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 16 03:30:22.416934 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 16 03:30:22.416949 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 16 03:30:22.416961 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 16 03:30:22.416972 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 16 03:30:22.416984 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 16 03:30:22.416995 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 16 03:30:22.417007 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 16 03:30:22.417018 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 16 03:30:22.417032 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 16 03:30:22.417043 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 16 03:30:22.417055 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 16 03:30:22.417066 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 16 03:30:22.417077 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 16 03:30:22.417089 kernel: iommu: Default domain type: Translated Dec 16 03:30:22.417100 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 16 03:30:22.417111 kernel: efivars: Registered efivars operations Dec 16 03:30:22.417125 kernel: PCI: Using ACPI for IRQ routing Dec 16 03:30:22.417137 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 16 03:30:22.417148 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Dec 16 03:30:22.417159 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff] Dec 16 03:30:22.417170 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff] Dec 16 03:30:22.417181 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Dec 16 03:30:22.417193 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Dec 16 03:30:22.417427 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 16 03:30:22.417634 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 16 03:30:22.417835 kernel: pci 0000:00:01.0: vgaarb: VGA 
device added: decodes=io+mem,owns=io+mem,locks=none Dec 16 03:30:22.417849 kernel: vgaarb: loaded Dec 16 03:30:22.417861 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 16 03:30:22.417875 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 16 03:30:22.417887 kernel: clocksource: Switched to clocksource kvm-clock Dec 16 03:30:22.417905 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 03:30:22.417919 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 03:30:22.417931 kernel: pnp: PnP ACPI init Dec 16 03:30:22.418156 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Dec 16 03:30:22.418172 kernel: pnp: PnP ACPI: found 6 devices Dec 16 03:30:22.418184 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 16 03:30:22.418198 kernel: NET: Registered PF_INET protocol family Dec 16 03:30:22.418210 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 03:30:22.418232 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 16 03:30:22.418245 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 03:30:22.418256 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 03:30:22.418268 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 16 03:30:22.418280 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 16 03:30:22.418294 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 03:30:22.418306 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 03:30:22.418317 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 03:30:22.418329 kernel: NET: Registered PF_XDP protocol family Dec 16 03:30:22.418563 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Dec 16 03:30:22.418786 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Dec 16 03:30:22.418997 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 16 03:30:22.419189 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 16 03:30:22.419412 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 16 03:30:22.419616 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Dec 16 03:30:22.419807 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Dec 16 03:30:22.419993 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Dec 16 03:30:22.420008 kernel: PCI: CLS 0 bytes, default 64 Dec 16 03:30:22.420024 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Dec 16 03:30:22.420036 kernel: Initialise system trusted keyrings Dec 16 03:30:22.420047 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 16 03:30:22.420059 kernel: Key type asymmetric registered Dec 16 03:30:22.420070 kernel: Asymmetric key parser 'x509' registered Dec 16 03:30:22.420098 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 16 03:30:22.420112 kernel: io scheduler mq-deadline registered Dec 16 03:30:22.420126 kernel: io scheduler kyber registered Dec 16 03:30:22.420138 kernel: io scheduler bfq registered Dec 16 03:30:22.420150 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 03:30:22.420162 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 16 03:30:22.420174 
kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 16 03:30:22.420186 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 16 03:30:22.420198 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 03:30:22.420212 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 03:30:22.420234 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 16 03:30:22.420246 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 16 03:30:22.420258 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 16 03:30:22.420487 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 16 03:30:22.420505 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 16 03:30:22.420699 kernel: rtc_cmos 00:04: registered as rtc0 Dec 16 03:30:22.420904 kernel: rtc_cmos 00:04: setting system clock to 2025-12-16T03:30:20 UTC (1765855820) Dec 16 03:30:22.421102 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Dec 16 03:30:22.421132 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 16 03:30:22.421144 kernel: efifb: probing for efifb Dec 16 03:30:22.421155 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Dec 16 03:30:22.421166 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Dec 16 03:30:22.421177 kernel: efifb: scrolling: redraw Dec 16 03:30:22.421203 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 16 03:30:22.421216 kernel: Console: switching to colour frame buffer device 160x50 Dec 16 03:30:22.421241 kernel: fb0: EFI VGA frame buffer device Dec 16 03:30:22.421254 kernel: pstore: Using crash dump compression: deflate Dec 16 03:30:22.421278 kernel: pstore: Registered efi_pstore as persistent store backend Dec 16 03:30:22.421290 kernel: NET: Registered PF_INET6 protocol family Dec 16 03:30:22.421311 kernel: Segment Routing with IPv6 Dec 16 03:30:22.421322 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 03:30:22.421334 kernel: NET: Registered PF_PACKET protocol family Dec 16 03:30:22.421345 kernel: Key type dns_resolver registered Dec 16 03:30:22.421356 kernel: IPI shorthand broadcast: enabled Dec 16 03:30:22.421372 kernel: sched_clock: Marking stable (2123003527, 378349579)->(2577478883, -76125777) Dec 16 03:30:22.421400 kernel: registered taskstats version 1 Dec 16 03:30:22.421413 kernel: Loading compiled-in X.509 certificates Dec 16 03:30:22.421426 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: aafd1eb27ea805b8231c3bede9210239fae84df8' Dec 16 03:30:22.421439 kernel: Demotion targets for Node 0: null Dec 16 03:30:22.421452 kernel: Key type .fscrypt registered Dec 16 03:30:22.421465 kernel: Key type fscrypt-provisioning registered Dec 16 03:30:22.421478 kernel: ima: No TPM chip found, activating TPM-bypass! 
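The rtc_cmos entry above reports both a human-readable RTC time and, in parentheses, the Unix epoch second the kernel derived from it. A two-line check that the two values agree:

    # Convert the epoch second from the rtc_cmos line back to a UTC timestamp.
    from datetime import datetime, timezone

    epoch = 1765855820
    print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
    # -> 2025-12-16T03:30:20+00:00, matching "setting system clock to 2025-12-16T03:30:20 UTC"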
Dec 16 03:30:22.421494 kernel: ima: Allocated hash algorithm: sha1 Dec 16 03:30:22.421507 kernel: ima: No architecture policies found Dec 16 03:30:22.421520 kernel: clk: Disabling unused clocks Dec 16 03:30:22.421533 kernel: Freeing unused kernel image (initmem) memory: 15556K Dec 16 03:30:22.421546 kernel: Write protecting the kernel read-only data: 47104k Dec 16 03:30:22.421559 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Dec 16 03:30:22.421575 kernel: Run /init as init process Dec 16 03:30:22.421587 kernel: with arguments: Dec 16 03:30:22.421600 kernel: /init Dec 16 03:30:22.421614 kernel: with environment: Dec 16 03:30:22.421626 kernel: HOME=/ Dec 16 03:30:22.421641 kernel: TERM=linux Dec 16 03:30:22.421654 kernel: SCSI subsystem initialized Dec 16 03:30:22.421668 kernel: libata version 3.00 loaded. Dec 16 03:30:22.421897 kernel: ahci 0000:00:1f.2: version 3.0 Dec 16 03:30:22.421915 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 16 03:30:22.422128 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 16 03:30:22.422357 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 16 03:30:22.422600 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 16 03:30:22.422930 kernel: scsi host0: ahci Dec 16 03:30:22.423256 kernel: scsi host1: ahci Dec 16 03:30:22.423518 kernel: scsi host2: ahci Dec 16 03:30:22.423749 kernel: scsi host3: ahci Dec 16 03:30:22.423976 kernel: scsi host4: ahci Dec 16 03:30:22.424214 kernel: scsi host5: ahci Dec 16 03:30:22.424248 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Dec 16 03:30:22.424262 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Dec 16 03:30:22.424275 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Dec 16 03:30:22.424288 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Dec 16 03:30:22.424301 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Dec 16 03:30:22.424314 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Dec 16 03:30:22.424327 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 16 03:30:22.424342 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 16 03:30:22.424355 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 16 03:30:22.424368 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 16 03:30:22.424381 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 16 03:30:22.424409 kernel: ata3.00: LPM support broken, forcing max_power Dec 16 03:30:22.424423 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 16 03:30:22.424435 kernel: ata3.00: applying bridge limits Dec 16 03:30:22.424451 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 16 03:30:22.424464 kernel: ata3.00: LPM support broken, forcing max_power Dec 16 03:30:22.424476 kernel: ata3.00: configured for UDMA/100 Dec 16 03:30:22.424732 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 16 03:30:22.424968 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 16 03:30:22.425183 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Dec 16 03:30:22.425204 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 03:30:22.425217 kernel: GPT:16515071 != 27000831 Dec 16 03:30:22.425242 kernel: GPT:Alternate GPT header not at the end of the disk. 
Dec 16 03:30:22.425255 kernel: GPT:16515071 != 27000831 Dec 16 03:30:22.425267 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 03:30:22.425280 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 03:30:22.425543 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 16 03:30:22.425566 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 03:30:22.425802 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 16 03:30:22.425820 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 03:30:22.425835 kernel: device-mapper: uevent: version 1.0.3 Dec 16 03:30:22.425848 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 03:30:22.425861 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 16 03:30:22.425878 kernel: raid6: avx2x4 gen() 22460 MB/s Dec 16 03:30:22.425890 kernel: raid6: avx2x2 gen() 26201 MB/s Dec 16 03:30:22.425903 kernel: raid6: avx2x1 gen() 23225 MB/s Dec 16 03:30:22.425916 kernel: raid6: using algorithm avx2x2 gen() 26201 MB/s Dec 16 03:30:22.425929 kernel: raid6: .... xor() 17102 MB/s, rmw enabled Dec 16 03:30:22.425942 kernel: raid6: using avx2x2 recovery algorithm Dec 16 03:30:22.425954 kernel: xor: automatically using best checksumming function avx Dec 16 03:30:22.425967 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 03:30:22.425983 kernel: BTRFS: device fsid 57a8262f-2900-48ba-a17e-aafbd70d59c7 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (181) Dec 16 03:30:22.425997 kernel: BTRFS info (device dm-0): first mount of filesystem 57a8262f-2900-48ba-a17e-aafbd70d59c7 Dec 16 03:30:22.426010 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:30:22.426022 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 03:30:22.426035 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 03:30:22.426048 kernel: loop: module loaded Dec 16 03:30:22.426060 kernel: loop0: detected capacity change from 0 to 100528 Dec 16 03:30:22.426075 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 03:30:22.426090 systemd[1]: Successfully made /usr/ read-only. Dec 16 03:30:22.426107 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 03:30:22.426121 systemd[1]: Detected virtualization kvm. Dec 16 03:30:22.426134 systemd[1]: Detected architecture x86-64. Dec 16 03:30:22.426149 systemd[1]: Running in initrd. Dec 16 03:30:22.426163 systemd[1]: No hostname configured, using default hostname. Dec 16 03:30:22.426177 systemd[1]: Hostname set to . Dec 16 03:30:22.426190 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 03:30:22.426204 systemd[1]: Queued start job for default target initrd.target. Dec 16 03:30:22.426217 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 03:30:22.426240 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 03:30:22.426256 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
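The GPT warnings a little further up ("GPT:16515071 != 27000831", "Alternate GPT header not at the end of the disk") compare two last-usable sector numbers: the primary header expects the backup header at LBA 16515071, while the end of this virtio disk is LBA 27000831 (27000832 sectors minus one). A small arithmetic sketch of the two disk sizes involved; it is consistent with an image written for a smaller disk that was later enlarged, which is what the "Use GNU Parted to correct GPT errors" hint and the later disk-uuid.service run ("Primary Header is updated") address:

    # Reproduce the disk sizes behind the GPT mismatch logged above.
    SECTOR = 512

    old_sectors = 16515071 + 1          # where the primary header expects the backup header
    new_sectors = 27000832              # size reported by virtio_blk for /dev/vda

    for name, sectors in (("old", old_sectors), ("new", new_sectors)):
        size = sectors * SECTOR
        print(f"{name}: {sectors} sectors = {size / 1e9:.1f} GB = {size / 2**30:.1f} GiB")
    # new: 27000832 sectors = 13.8 GB = 12.9 GiB, matching the virtio_blk line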
Dec 16 03:30:22.426271 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 03:30:22.426285 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 03:30:22.426299 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 03:30:22.426313 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 03:30:22.426329 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 03:30:22.426343 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 03:30:22.426356 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 03:30:22.426369 systemd[1]: Reached target paths.target - Path Units. Dec 16 03:30:22.426398 systemd[1]: Reached target slices.target - Slice Units. Dec 16 03:30:22.426414 systemd[1]: Reached target swap.target - Swaps. Dec 16 03:30:22.426427 systemd[1]: Reached target timers.target - Timer Units. Dec 16 03:30:22.426444 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 03:30:22.426457 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 03:30:22.426471 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 03:30:22.426485 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 03:30:22.426498 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 03:30:22.426512 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:30:22.426526 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 03:30:22.426542 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 03:30:22.426555 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 03:30:22.426569 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 03:30:22.426583 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 03:30:22.426596 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 03:30:22.426610 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 03:30:22.426625 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 03:30:22.426641 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 03:30:22.426654 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 03:30:22.426668 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 03:30:22.426683 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:30:22.426699 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 03:30:22.426712 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:30:22.426726 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 03:30:22.426740 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 03:30:22.426784 systemd-journald[315]: Collecting audit messages is enabled. 
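journald reports above that collecting audit messages is enabled, and the SERVICE_START/SERVICE_STOP records that follow all share the same framing: a record type, a timestamp:serial pair, and space-separated key=value fields with a nested msg='...' payload. A minimal parsing sketch over one record copied from this log (illustrative only, not the audit userspace parser):

    # Split one audit record into top-level fields, then split the quoted msg payload.
    import re

    sample = ("audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 "
              "subj=kernel msg='unit=systemd-journald comm=\"systemd\" "
              "exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? terminal=? res=success'")

    outer = dict(re.findall(r"(\w+)=('[^']*'|\"[^\"]*\"|\S+)", sample))
    inner = dict(re.findall(r"(\w+)=(\"[^\"]*\"|\S+)", outer["msg"].strip("'")))
    print(outer["pid"], inner["unit"], inner["res"])   # -> 1 systemd-journald success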
Dec 16 03:30:22.426816 systemd-journald[315]: Journal started Dec 16 03:30:22.426841 systemd-journald[315]: Runtime Journal (/run/log/journal/098e61f9c0eb4034b047e393d30d3b43) is 5.9M, max 47.8M, 41.8M free. Dec 16 03:30:22.430424 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 03:30:22.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:22.435408 kernel: audit: type=1130 audit(1765855822.429:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:22.435683 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 03:30:22.439115 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 03:30:22.442925 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 03:30:22.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:22.448418 kernel: audit: type=1130 audit(1765855822.442:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:22.454084 systemd-modules-load[318]: Inserted module 'br_netfilter' Dec 16 03:30:22.456304 kernel: Bridge firewalling registered Dec 16 03:30:22.462158 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 03:30:22.465897 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 03:30:22.470616 systemd-tmpfiles[330]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 03:30:22.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:22.479417 kernel: audit: type=1130 audit(1765855822.471:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:22.479357 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:30:22.487945 kernel: audit: type=1130 audit(1765855822.479:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:22.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:22.483014 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 03:30:22.489204 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 03:30:22.505851 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
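The lines above show systemd-modules-load inserting br_netfilter and the kernel registering bridge firewalling, in line with the earlier kernel note that bridged traffic is no longer passed to arp/ip/ip6tables unless that module is loaded. A minimal sketch for checking the same thing on a running system via procfs; the sysctl directory only exists once the module is in:

    # Confirm br_netfilter is loaded and show the bridge-netfilter sysctls it provides.
    from pathlib import Path

    loaded = any(line.split()[0] == "br_netfilter"
                 for line in Path("/proc/modules").read_text().splitlines())
    print("br_netfilter loaded:", loaded)

    bridge_dir = Path("/proc/sys/net/bridge")
    if bridge_dir.is_dir():
        for knob in sorted(bridge_dir.glob("bridge-nf-call-*")):
            print(knob.name, "=", knob.read_text().strip())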
Dec 16 03:30:22.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:22.515172 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:30:22.517837 kernel: audit: type=1130 audit(1765855822.505:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:22.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:22.524432 kernel: audit: type=1130 audit(1765855822.519:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:22.528965 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 03:30:22.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:22.531074 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 03:30:22.543055 kernel: audit: type=1130 audit(1765855822.528:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:22.543089 kernel: audit: type=1334 audit(1765855822.529:9): prog-id=6 op=LOAD Dec 16 03:30:22.543105 kernel: audit: type=1130 audit(1765855822.542:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:22.529000 audit: BPF prog-id=6 op=LOAD Dec 16 03:30:22.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:22.536999 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 03:30:22.550033 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 03:30:22.594928 dracut-cmdline[360]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 03:30:22.603353 systemd-resolved[356]: Positive Trust Anchors: Dec 16 03:30:22.603365 systemd-resolved[356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 03:30:22.603370 systemd-resolved[356]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 03:30:22.603419 systemd-resolved[356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 03:30:22.627359 systemd-resolved[356]: Defaulting to hostname 'linux'. Dec 16 03:30:22.632935 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 03:30:22.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:22.635726 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 03:30:22.740933 kernel: Loading iSCSI transport class v2.0-870. Dec 16 03:30:22.764421 kernel: iscsi: registered transport (tcp) Dec 16 03:30:22.791875 kernel: iscsi: registered transport (qla4xxx) Dec 16 03:30:22.791955 kernel: QLogic iSCSI HBA Driver Dec 16 03:30:22.819627 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 03:30:22.855459 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 03:30:22.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:22.857082 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 03:30:22.916995 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 03:30:22.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:22.918740 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 03:30:22.924611 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 03:30:22.975960 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 03:30:22.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:22.976000 audit: BPF prog-id=7 op=LOAD Dec 16 03:30:22.976000 audit: BPF prog-id=8 op=LOAD Dec 16 03:30:22.977981 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 03:30:23.027855 systemd-udevd[597]: Using default interface naming scheme 'v257'. Dec 16 03:30:23.044726 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 03:30:23.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:30:23.046282 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 03:30:23.076798 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 03:30:23.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:23.082000 audit: BPF prog-id=9 op=LOAD Dec 16 03:30:23.083516 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 03:30:23.091214 dracut-pre-trigger[663]: rd.md=0: removing MD RAID activation Dec 16 03:30:23.121318 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 03:30:23.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:23.125515 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 03:30:23.146562 systemd-networkd[702]: lo: Link UP Dec 16 03:30:23.146575 systemd-networkd[702]: lo: Gained carrier Dec 16 03:30:23.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:23.147265 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 03:30:23.149797 systemd[1]: Reached target network.target - Network. Dec 16 03:30:23.225727 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 03:30:23.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:23.233093 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 03:30:23.312870 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 03:30:23.331412 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 03:30:23.334033 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 03:30:23.348409 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 16 03:30:23.358474 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 03:30:23.370299 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 03:30:23.400463 kernel: AES CTR mode by8 optimization enabled Dec 16 03:30:23.371042 systemd-networkd[702]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:30:23.371063 systemd-networkd[702]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 03:30:23.371658 systemd-networkd[702]: eth0: Link UP Dec 16 03:30:23.394025 systemd-networkd[702]: eth0: Gained carrier Dec 16 03:30:23.394047 systemd-networkd[702]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:30:23.424501 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 03:30:23.428698 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 16 03:30:23.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:23.428904 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:30:23.430927 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:30:23.466313 systemd-networkd[702]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 03:30:23.468941 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:30:23.490230 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 03:30:23.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:23.525469 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 03:30:23.525844 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:30:23.526149 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 03:30:23.528105 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 03:30:23.553654 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:30:23.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:23.568295 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 03:30:23.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:24.124668 disk-uuid[837]: Primary Header is updated. Dec 16 03:30:24.124668 disk-uuid[837]: Secondary Entries is updated. Dec 16 03:30:24.124668 disk-uuid[837]: Secondary Header is updated. Dec 16 03:30:24.902752 systemd-networkd[702]: eth0: Gained IPv6LL Dec 16 03:30:25.179822 disk-uuid[853]: Warning: The kernel is still using the old partition table. Dec 16 03:30:25.179822 disk-uuid[853]: The new table will be used at the next reboot or after you Dec 16 03:30:25.179822 disk-uuid[853]: run partprobe(8) or kpartx(8) Dec 16 03:30:25.179822 disk-uuid[853]: The operation has completed successfully. Dec 16 03:30:25.189750 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 03:30:25.189925 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 03:30:25.206532 kernel: kauditd_printk_skb: 16 callbacks suppressed Dec 16 03:30:25.206561 kernel: audit: type=1130 audit(1765855825.193:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:25.206587 kernel: audit: type=1131 audit(1765855825.193:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:25.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:30:25.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:25.198336 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 03:30:25.244751 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (863) Dec 16 03:30:25.244831 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:30:25.244859 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:30:25.250582 kernel: BTRFS info (device vda6): turning on async discard Dec 16 03:30:25.250611 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 03:30:25.259682 kernel: BTRFS info (device vda6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:30:25.262569 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 03:30:25.272414 kernel: audit: type=1130 audit(1765855825.264:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:25.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:25.266003 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 03:30:25.397248 ignition[882]: Ignition 2.24.0 Dec 16 03:30:25.397263 ignition[882]: Stage: fetch-offline Dec 16 03:30:25.397328 ignition[882]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:30:25.397343 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 03:30:25.397484 ignition[882]: parsed url from cmdline: "" Dec 16 03:30:25.397489 ignition[882]: no config URL provided Dec 16 03:30:25.397552 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 03:30:25.397567 ignition[882]: no config at "/usr/lib/ignition/user.ign" Dec 16 03:30:25.405326 ignition[882]: op(1): [started] loading QEMU firmware config module Dec 16 03:30:25.405337 ignition[882]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 16 03:30:25.415013 ignition[882]: op(1): [finished] loading QEMU firmware config module Dec 16 03:30:25.415042 ignition[882]: QEMU firmware config was not found. Ignoring... Dec 16 03:30:25.446763 ignition[882]: parsing config with SHA512: cc52fa5ff0f5c53b1eb859508d2ac9ef7f5954767207373c328f4e4c778a0b0c5703577b65e0801b97f91ee1c4ee11f165f73954f28ce7539b143825634e7aa4 Dec 16 03:30:25.450565 unknown[882]: fetched base config from "system" Dec 16 03:30:25.450587 unknown[882]: fetched user config from "qemu" Dec 16 03:30:25.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:25.450951 ignition[882]: fetch-offline: fetch-offline passed Dec 16 03:30:25.464046 kernel: audit: type=1130 audit(1765855825.456:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:25.453653 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Dec 16 03:30:25.451012 ignition[882]: Ignition finished successfully Dec 16 03:30:25.457204 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 16 03:30:25.458363 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 03:30:25.498570 ignition[892]: Ignition 2.24.0 Dec 16 03:30:25.498583 ignition[892]: Stage: kargs Dec 16 03:30:25.498771 ignition[892]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:30:25.498785 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 03:30:25.499722 ignition[892]: kargs: kargs passed Dec 16 03:30:25.499772 ignition[892]: Ignition finished successfully Dec 16 03:30:25.505942 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 03:30:25.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:25.509857 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 03:30:25.520893 kernel: audit: type=1130 audit(1765855825.508:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:25.545074 ignition[899]: Ignition 2.24.0 Dec 16 03:30:25.545088 ignition[899]: Stage: disks Dec 16 03:30:25.545271 ignition[899]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:30:25.545283 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 03:30:25.546103 ignition[899]: disks: disks passed Dec 16 03:30:25.546157 ignition[899]: Ignition finished successfully Dec 16 03:30:25.558468 kernel: audit: type=1130 audit(1765855825.553:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:25.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:25.550878 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 03:30:25.558615 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 03:30:25.562522 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 03:30:25.566684 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 03:30:25.572434 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 03:30:25.572533 systemd[1]: Reached target basic.target - Basic System. Dec 16 03:30:25.608878 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 03:30:25.646827 systemd-fsck[908]: ROOT: clean, 15/456736 files, 38230/456704 blocks Dec 16 03:30:25.821083 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 03:30:25.829373 kernel: audit: type=1130 audit(1765855825.820:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:25.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 03:30:25.822642 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 03:30:25.961433 kernel: EXT4-fs (vda9): mounted filesystem 1314c107-11a5-486b-9d52-be9f57b6bf1b r/w with ordered data mode. Quota mode: none. Dec 16 03:30:25.962576 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 03:30:25.963343 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 03:30:25.969085 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 03:30:25.972051 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 03:30:25.974256 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 03:30:25.974295 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 03:30:25.974320 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 03:30:25.990656 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (916) Dec 16 03:30:25.983559 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 03:30:26.000924 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:30:26.000952 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:30:26.000966 kernel: BTRFS info (device vda6): turning on async discard Dec 16 03:30:26.000978 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 03:30:25.987658 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 03:30:26.004229 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 03:30:26.184090 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 03:30:26.193313 kernel: audit: type=1130 audit(1765855826.185:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:26.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:26.188059 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 03:30:26.206109 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 03:30:26.218480 kernel: BTRFS info (device vda6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:30:26.229584 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 03:30:26.241409 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 03:30:26.249499 kernel: audit: type=1130 audit(1765855826.242:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:26.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:30:26.255189 ignition[1014]: INFO : Ignition 2.24.0 Dec 16 03:30:26.255189 ignition[1014]: INFO : Stage: mount Dec 16 03:30:26.258160 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 03:30:26.258160 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 03:30:26.258160 ignition[1014]: INFO : mount: mount passed Dec 16 03:30:26.258160 ignition[1014]: INFO : Ignition finished successfully Dec 16 03:30:26.265508 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 03:30:26.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:26.270579 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 03:30:26.275144 kernel: audit: type=1130 audit(1765855826.268:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:26.300921 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 03:30:26.339248 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1025) Dec 16 03:30:26.339303 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:30:26.339322 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:30:26.344869 kernel: BTRFS info (device vda6): turning on async discard Dec 16 03:30:26.344902 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 03:30:26.347530 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 03:30:26.391096 ignition[1042]: INFO : Ignition 2.24.0 Dec 16 03:30:26.391096 ignition[1042]: INFO : Stage: files Dec 16 03:30:26.394109 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 03:30:26.394109 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 03:30:26.394109 ignition[1042]: DEBUG : files: compiled without relabeling support, skipping Dec 16 03:30:26.394109 ignition[1042]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 03:30:26.394109 ignition[1042]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 03:30:26.405410 ignition[1042]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 03:30:26.408778 ignition[1042]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 03:30:26.412020 unknown[1042]: wrote ssh authorized keys file for user: core Dec 16 03:30:26.414158 ignition[1042]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 03:30:26.418073 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 16 03:30:26.421620 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Dec 16 03:30:26.464459 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 03:30:26.519242 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 16 03:30:26.519242 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
[started] writing file "/sysroot/home/core/install.sh" Dec 16 03:30:26.526631 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 03:30:26.526631 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 03:30:26.526631 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 03:30:26.526631 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 03:30:26.526631 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 03:30:26.526631 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 03:30:26.526631 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 03:30:26.646377 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 03:30:26.650182 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 03:30:26.650182 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 03:30:26.745324 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 03:30:26.745324 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 03:30:26.753742 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Dec 16 03:30:27.038065 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 03:30:27.423009 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 03:30:27.423009 ignition[1042]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 03:30:27.508707 ignition[1042]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 03:30:27.801732 ignition[1042]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 03:30:27.801732 ignition[1042]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 03:30:27.801732 ignition[1042]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 16 03:30:27.801732 ignition[1042]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 16 03:30:27.822209 ignition[1042]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Dec 16 03:30:27.822209 ignition[1042]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 16 03:30:27.822209 ignition[1042]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 16 03:30:27.855247 ignition[1042]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 16 03:30:27.863267 ignition[1042]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 16 03:30:27.866108 ignition[1042]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 16 03:30:27.866108 ignition[1042]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 16 03:30:27.866108 ignition[1042]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 03:30:27.866108 ignition[1042]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 03:30:27.866108 ignition[1042]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 03:30:27.866108 ignition[1042]: INFO : files: files passed Dec 16 03:30:27.866108 ignition[1042]: INFO : Ignition finished successfully Dec 16 03:30:27.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:27.875288 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 03:30:27.883704 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 03:30:27.888584 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 03:30:27.903253 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 03:30:27.903483 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 03:30:27.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:27.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:27.908926 initrd-setup-root-after-ignition[1073]: grep: /sysroot/oem/oem-release: No such file or directory Dec 16 03:30:27.911332 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 03:30:27.911332 initrd-setup-root-after-ignition[1075]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 03:30:27.917557 initrd-setup-root-after-ignition[1078]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 03:30:27.922436 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 03:30:27.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:27.923416 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Dec 16 03:30:27.930721 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 03:30:27.994898 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 03:30:27.995031 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 03:30:28.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.003272 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 03:30:28.006610 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 03:30:28.010206 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 03:30:28.011338 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 03:30:28.194458 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 03:30:28.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.196491 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 03:30:28.221972 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 03:30:28.224535 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 03:30:28.228889 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:30:28.233410 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 03:30:28.236896 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 03:30:28.238819 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 03:30:28.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.243643 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 03:30:28.247990 systemd[1]: Stopped target basic.target - Basic System. Dec 16 03:30:28.251419 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 03:30:28.255546 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 03:30:28.259841 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 03:30:28.264380 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 03:30:28.268104 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 03:30:28.271534 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 03:30:28.275740 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 03:30:28.279197 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 03:30:28.282569 systemd[1]: Stopped target swap.target - Swaps. Dec 16 03:30:28.285278 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Dec 16 03:30:28.286992 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 03:30:28.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.290719 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 03:30:28.294310 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 03:30:28.298196 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 03:30:28.299744 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 03:30:28.304283 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 03:30:28.306014 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 03:30:28.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.309910 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 03:30:28.311778 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 03:30:28.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.315923 systemd[1]: Stopped target paths.target - Path Units. Dec 16 03:30:28.318824 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 03:30:28.322462 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 03:30:28.324773 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 03:30:28.327311 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 03:30:28.330224 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 03:30:28.330347 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 03:30:28.333220 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 03:30:28.333330 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 03:30:28.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.336173 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 03:30:28.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.336278 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 03:30:28.339247 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 03:30:28.339426 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 03:30:28.342237 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 03:30:28.342401 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 03:30:28.348201 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Dec 16 03:30:28.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.355523 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 03:30:28.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.358461 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 03:30:28.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.358616 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 03:30:28.360306 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 03:30:28.360459 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 03:30:28.365637 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 03:30:28.365773 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 03:30:28.380837 ignition[1099]: INFO : Ignition 2.24.0 Dec 16 03:30:28.380837 ignition[1099]: INFO : Stage: umount Dec 16 03:30:28.380837 ignition[1099]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 03:30:28.380837 ignition[1099]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 03:30:28.385227 ignition[1099]: INFO : umount: umount passed Dec 16 03:30:28.385227 ignition[1099]: INFO : Ignition finished successfully Dec 16 03:30:28.384984 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 03:30:28.385165 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 03:30:28.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.385993 systemd[1]: Stopped target network.target - Network. Dec 16 03:30:28.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.392776 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 03:30:28.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.392856 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 03:30:28.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.395941 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 03:30:28.396007 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 03:30:28.399314 systemd[1]: ignition-setup.service: Deactivated successfully. 
Dec 16 03:30:28.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.399425 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 03:30:28.402315 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 03:30:28.402369 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 03:30:28.409827 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 03:30:28.413435 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 03:30:28.415913 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 03:30:28.416572 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 03:30:28.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.416690 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 03:30:28.431050 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 03:30:28.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.431232 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 03:30:28.436994 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 03:30:28.437125 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 03:30:28.446000 audit: BPF prog-id=6 op=UNLOAD Dec 16 03:30:28.446932 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 03:30:28.447000 audit: BPF prog-id=9 op=UNLOAD Dec 16 03:30:28.448903 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 03:30:28.448949 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:30:28.456233 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 03:30:28.456333 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 03:30:28.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.456424 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 03:30:28.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.501632 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 03:30:28.501705 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 03:30:28.504884 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 03:30:28.504953 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Dec 16 03:30:28.507057 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 03:30:28.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.526909 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 03:30:28.527049 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 03:30:28.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.530369 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 03:30:28.530463 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 03:30:28.534098 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 03:30:28.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.534310 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 03:30:28.540646 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 03:30:28.540735 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 03:30:28.542994 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 03:30:28.543072 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 03:30:28.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.546581 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 03:30:28.546660 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 03:30:28.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.558049 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 03:30:28.558125 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 03:30:28.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.562827 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 03:30:28.562886 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 03:30:28.568793 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 03:30:28.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:30:28.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.571341 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 03:30:28.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.571417 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 03:30:28.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.575725 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 03:30:28.575776 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:30:28.575879 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 03:30:28.575931 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 03:30:28.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.576471 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 03:30:28.576518 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:30:28.583358 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 03:30:28.583429 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:30:28.587513 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 03:30:28.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:28.593885 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 03:30:28.606823 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 03:30:28.606939 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 03:30:28.610353 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 03:30:28.614152 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 03:30:28.648664 systemd[1]: Switching root. Dec 16 03:30:28.689843 systemd-journald[315]: Journal stopped Dec 16 03:30:31.355859 systemd-journald[315]: Received SIGTERM from PID 1 (systemd). 
Dec 16 03:30:31.355931 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 03:30:31.355947 kernel: SELinux: policy capability open_perms=1 Dec 16 03:30:31.355959 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 03:30:31.355975 kernel: SELinux: policy capability always_check_network=0 Dec 16 03:30:31.355999 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 03:30:31.356017 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 03:30:31.356030 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 03:30:31.356042 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 03:30:31.356054 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 03:30:31.356066 kernel: kauditd_printk_skb: 45 callbacks suppressed Dec 16 03:30:31.356082 kernel: audit: type=1403 audit(1765855830.195:82): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 16 03:30:31.356099 systemd[1]: Successfully loaded SELinux policy in 70.416ms. Dec 16 03:30:31.356115 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.584ms. Dec 16 03:30:31.356129 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 03:30:31.356143 systemd[1]: Detected virtualization kvm. Dec 16 03:30:31.356159 systemd[1]: Detected architecture x86-64. Dec 16 03:30:31.356175 systemd[1]: Detected first boot. Dec 16 03:30:31.356190 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 03:30:31.356207 kernel: audit: type=1334 audit(1765855830.283:83): prog-id=10 op=LOAD Dec 16 03:30:31.356219 kernel: audit: type=1334 audit(1765855830.283:84): prog-id=10 op=UNLOAD Dec 16 03:30:31.356236 kernel: audit: type=1334 audit(1765855830.283:85): prog-id=11 op=LOAD Dec 16 03:30:31.356248 kernel: audit: type=1334 audit(1765855830.283:86): prog-id=11 op=UNLOAD Dec 16 03:30:31.356260 zram_generator::config[1145]: No configuration found. Dec 16 03:30:31.356274 kernel: Guest personality initialized and is inactive Dec 16 03:30:31.356288 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 03:30:31.356301 kernel: Initialized host personality Dec 16 03:30:31.356313 kernel: NET: Registered PF_VSOCK protocol family Dec 16 03:30:31.356326 systemd[1]: Populated /etc with preset unit settings. Dec 16 03:30:31.356339 kernel: audit: type=1334 audit(1765855830.958:87): prog-id=12 op=LOAD Dec 16 03:30:31.356352 kernel: audit: type=1334 audit(1765855830.958:88): prog-id=3 op=UNLOAD Dec 16 03:30:31.356364 kernel: audit: type=1334 audit(1765855830.958:89): prog-id=13 op=LOAD Dec 16 03:30:31.356379 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 03:30:31.356489 kernel: audit: type=1334 audit(1765855830.958:90): prog-id=14 op=LOAD Dec 16 03:30:31.356508 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 03:30:31.356525 kernel: audit: type=1334 audit(1765855830.958:91): prog-id=4 op=UNLOAD Dec 16 03:30:31.356539 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 03:30:31.356557 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 03:30:31.356571 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Dec 16 03:30:31.356587 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 03:30:31.356600 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 03:30:31.356613 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 03:30:31.356628 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 03:30:31.356641 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 03:30:31.356654 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 03:30:31.356670 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 03:30:31.356693 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 03:30:31.356715 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 03:30:31.356730 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 03:30:31.356743 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 03:30:31.356756 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 03:30:31.356769 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 03:30:31.356789 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 03:30:31.356804 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 03:30:31.356819 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 03:30:31.356834 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 03:30:31.356847 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 03:30:31.356860 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 03:30:31.356874 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:30:31.356889 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 03:30:31.356902 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 03:30:31.356916 systemd[1]: Reached target slices.target - Slice Units. Dec 16 03:30:31.356928 systemd[1]: Reached target swap.target - Swaps. Dec 16 03:30:31.356943 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 03:30:31.356956 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 03:30:31.356969 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 03:30:31.356992 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 03:30:31.357006 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 03:30:31.357019 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:30:31.357033 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 03:30:31.357046 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 03:30:31.357066 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 03:30:31.357081 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 16 03:30:31.357094 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 03:30:31.357107 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 03:30:31.357121 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 03:30:31.357134 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 03:30:31.357152 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:30:31.357168 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 03:30:31.357181 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 03:30:31.357194 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 03:30:31.357208 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 03:30:31.357221 systemd[1]: Reached target machines.target - Containers. Dec 16 03:30:31.357235 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 03:30:31.357251 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:30:31.357264 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 03:30:31.357278 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 03:30:31.357291 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 03:30:31.357305 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 03:30:31.357318 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 03:30:31.357332 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 03:30:31.357347 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 03:30:31.357361 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 03:30:31.357374 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 03:30:31.357401 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 03:30:31.357415 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 03:30:31.357428 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 03:30:31.357443 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:30:31.357456 kernel: fuse: init (API version 7.41) Dec 16 03:30:31.357470 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 03:30:31.357483 kernel: ACPI: bus type drm_connector registered Dec 16 03:30:31.357498 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 03:30:31.357512 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 03:30:31.357546 systemd-journald[1207]: Collecting audit messages is enabled. Dec 16 03:30:31.357570 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Dec 16 03:30:31.357585 systemd-journald[1207]: Journal started Dec 16 03:30:31.357610 systemd-journald[1207]: Runtime Journal (/run/log/journal/098e61f9c0eb4034b047e393d30d3b43) is 5.9M, max 47.8M, 41.8M free. Dec 16 03:30:31.134000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 03:30:31.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:31.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:31.284000 audit: BPF prog-id=14 op=UNLOAD Dec 16 03:30:31.284000 audit: BPF prog-id=13 op=UNLOAD Dec 16 03:30:31.285000 audit: BPF prog-id=15 op=LOAD Dec 16 03:30:31.286000 audit: BPF prog-id=16 op=LOAD Dec 16 03:30:31.286000 audit: BPF prog-id=17 op=LOAD Dec 16 03:30:31.353000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 03:30:31.353000 audit[1207]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffefa560b30 a2=4000 a3=0 items=0 ppid=1 pid=1207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:31.353000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 03:30:30.937770 systemd[1]: Queued start job for default target multi-user.target. Dec 16 03:30:30.959207 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 03:30:30.959970 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 03:30:31.370632 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 03:30:31.387157 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 03:30:31.387247 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:30:31.395670 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 03:30:31.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:31.399715 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 03:30:31.401922 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 03:30:31.404264 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 03:30:31.406416 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 03:30:31.408637 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 03:30:31.410877 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 03:30:31.413174 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Dec 16 03:30:31.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:31.415971 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 03:30:31.416223 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 03:30:31.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:31.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:31.418773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 03:30:31.419007 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 03:30:31.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:31.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:31.421367 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 03:30:31.421714 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 03:30:31.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:31.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:31.424028 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 03:30:31.424248 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 03:30:31.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:31.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:31.426887 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 03:30:31.427125 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 03:30:31.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:31.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:30:31.429533 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 03:30:31.429752 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 03:30:31.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:31.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:31.432261 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 03:30:31.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:31.434980 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 03:30:31.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:31.438694 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 03:30:31.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:31.441749 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 03:30:31.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:31.496513 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 03:30:31.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:31.512020 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 03:30:31.514621 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 03:30:31.518336 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 03:30:31.521622 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 03:30:31.523835 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 03:30:31.523872 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 03:30:31.526900 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 03:30:31.529736 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:30:31.529864 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Dec 16 03:30:31.536570 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 03:30:31.540339 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 03:30:31.542772 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 03:30:31.544003 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 03:30:31.546274 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 03:30:31.547726 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 03:30:31.550861 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 03:30:31.554560 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 03:30:31.560938 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 03:30:31.563541 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 03:30:31.564280 systemd-journald[1207]: Time spent on flushing to /var/log/journal/098e61f9c0eb4034b047e393d30d3b43 is 23.508ms for 1170 entries. Dec 16 03:30:31.564280 systemd-journald[1207]: System Journal (/var/log/journal/098e61f9c0eb4034b047e393d30d3b43) is 8M, max 163.5M, 155.5M free. Dec 16 03:30:31.892473 systemd-journald[1207]: Received client request to flush runtime journal. Dec 16 03:30:31.892515 kernel: loop1: detected capacity change from 0 to 111560 Dec 16 03:30:31.892530 kernel: loop2: detected capacity change from 0 to 224512 Dec 16 03:30:31.892545 kernel: loop3: detected capacity change from 0 to 50784 Dec 16 03:30:31.892564 kernel: loop4: detected capacity change from 0 to 111560 Dec 16 03:30:31.892578 kernel: loop5: detected capacity change from 0 to 224512 Dec 16 03:30:31.892592 kernel: loop6: detected capacity change from 0 to 50784 Dec 16 03:30:31.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:31.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:31.590826 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 03:30:31.604977 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Dec 16 03:30:31.605001 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Dec 16 03:30:31.610280 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 03:30:31.819222 (sd-merge)[1267]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Dec 16 03:30:31.823591 (sd-merge)[1267]: Merged extensions into '/usr'. Dec 16 03:30:31.840119 systemd[1]: Reload requested from client PID 1247 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 03:30:31.840137 systemd[1]: Reloading... Dec 16 03:30:31.917434 zram_generator::config[1306]: No configuration found. Dec 16 03:30:32.143203 systemd[1]: Reloading finished in 302 ms. 
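Note: the (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, after which systemd reloads its unit set. A minimal sketch for inspecting that state from Python, assuming the stock systemd-sysext search directories (the exact location of Flatcar's .raw images may differ):

    import subprocess
    from pathlib import Path

    # Standard systemd-sysext search paths; Flatcar may ship its .raw images elsewhere.
    SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    for d in SEARCH_DIRS:
        p = Path(d)
        if not p.is_dir():
            continue
        for img in sorted(p.glob("*.raw")):
            print("candidate extension image:", img)

    # 'systemd-sysext status' reports which hierarchies (e.g. /usr) are currently
    # merged and from which extension images.
    subprocess.run(["systemd-sysext", "status"], check=False)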
Dec 16 03:30:32.171673 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 03:30:32.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.191686 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 03:30:32.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.194195 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 03:30:32.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.196689 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 03:30:32.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.205487 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 03:30:32.219510 systemd[1]: Starting ensure-sysext.service... Dec 16 03:30:32.222680 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 03:30:32.226692 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 03:30:32.231000 audit: BPF prog-id=18 op=LOAD Dec 16 03:30:32.231000 audit: BPF prog-id=15 op=UNLOAD Dec 16 03:30:32.231000 audit: BPF prog-id=19 op=LOAD Dec 16 03:30:32.231000 audit: BPF prog-id=20 op=LOAD Dec 16 03:30:32.232000 audit: BPF prog-id=16 op=UNLOAD Dec 16 03:30:32.232000 audit: BPF prog-id=17 op=UNLOAD Dec 16 03:30:32.238447 systemd[1]: Reload requested from client PID 1348 ('systemctl') (unit ensure-sysext.service)... Dec 16 03:30:32.238616 systemd[1]: Reloading... Dec 16 03:30:32.290455 zram_generator::config[1379]: No configuration found. Dec 16 03:30:32.493601 systemd[1]: Reloading finished in 254 ms. Dec 16 03:30:32.514000 audit: BPF prog-id=21 op=LOAD Dec 16 03:30:32.514000 audit: BPF prog-id=18 op=UNLOAD Dec 16 03:30:32.514000 audit: BPF prog-id=22 op=LOAD Dec 16 03:30:32.514000 audit: BPF prog-id=23 op=LOAD Dec 16 03:30:32.514000 audit: BPF prog-id=19 op=UNLOAD Dec 16 03:30:32.514000 audit: BPF prog-id=20 op=UNLOAD Dec 16 03:30:32.543113 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:30:32.543332 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:30:32.545172 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 03:30:32.548445 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 03:30:32.562732 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 03:30:32.564992 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 16 03:30:32.565255 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:30:32.565371 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:30:32.565525 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:30:32.567019 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 03:30:32.567269 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 03:30:32.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.569981 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 03:30:32.570255 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 03:30:32.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.572900 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 03:30:32.573131 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 03:30:32.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.580351 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:30:32.580659 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:30:32.582207 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 03:30:32.585173 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 03:30:32.597662 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 03:30:32.599781 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:30:32.600106 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Dec 16 03:30:32.600254 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:30:32.600442 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:30:32.602058 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 03:30:32.602364 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 03:30:32.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.620523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 03:30:32.620877 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 03:30:32.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.623641 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 03:30:32.623979 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 03:30:32.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.633630 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:30:32.633970 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:30:32.635907 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 03:30:32.639363 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 03:30:32.653079 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 03:30:32.658000 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 03:30:32.659805 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:30:32.660070 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Dec 16 03:30:32.660215 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:30:32.660488 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:30:32.662224 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 03:30:32.662601 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 03:30:32.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.672992 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 03:30:32.673773 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 03:30:32.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.676817 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 03:30:32.677080 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 03:30:32.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.680048 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 03:30:32.680352 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 03:30:32.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.698040 systemd[1]: Finished ensure-sysext.service. Dec 16 03:30:32.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:30:32.704263 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 03:30:32.704372 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 03:30:32.817853 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 03:30:32.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.855000 audit: BPF prog-id=24 op=LOAD Dec 16 03:30:32.855000 audit: BPF prog-id=25 op=LOAD Dec 16 03:30:32.855000 audit: BPF prog-id=26 op=LOAD Dec 16 03:30:32.856905 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 03:30:32.860000 audit: BPF prog-id=27 op=LOAD Dec 16 03:30:32.862089 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 03:30:32.864000 audit: BPF prog-id=28 op=LOAD Dec 16 03:30:32.865999 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 03:30:32.869315 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 03:30:32.872748 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 03:30:32.876000 audit: BPF prog-id=29 op=LOAD Dec 16 03:30:32.889000 audit: BPF prog-id=30 op=LOAD Dec 16 03:30:32.889000 audit: BPF prog-id=31 op=LOAD Dec 16 03:30:32.890789 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 03:30:32.893000 audit: BPF prog-id=32 op=LOAD Dec 16 03:30:32.894000 audit: BPF prog-id=33 op=LOAD Dec 16 03:30:32.894000 audit: BPF prog-id=34 op=LOAD Dec 16 03:30:32.896091 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 03:30:32.905902 systemd-tmpfiles[1442]: ACLs are not supported, ignoring. Dec 16 03:30:32.905925 systemd-tmpfiles[1442]: ACLs are not supported, ignoring. Dec 16 03:30:32.912215 systemd-tmpfiles[1443]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 03:30:32.912258 systemd-tmpfiles[1443]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 03:30:32.912583 systemd-tmpfiles[1443]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 03:30:32.914147 systemd-tmpfiles[1443]: ACLs are not supported, ignoring. Dec 16 03:30:32.914249 systemd-tmpfiles[1443]: ACLs are not supported, ignoring. Dec 16 03:30:32.916440 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:30:32.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.926977 systemd-tmpfiles[1443]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 03:30:32.926990 systemd-tmpfiles[1443]: Skipping /boot Dec 16 03:30:32.940339 systemd-tmpfiles[1443]: Detected autofs mount point /boot during canonicalization of boot. 
Dec 16 03:30:32.940355 systemd-tmpfiles[1443]: Skipping /boot Dec 16 03:30:32.958109 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 03:30:32.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:32.969496 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 03:30:33.057102 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 03:30:33.059698 systemd-nsresourced[1444]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 03:30:33.065046 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 03:30:33.074081 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 03:30:33.109352 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 03:30:33.126631 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 03:30:33.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:33.147699 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 03:30:33.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:33.175922 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 03:30:33.175000 audit[1475]: SYSTEM_BOOT pid=1475 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 03:30:33.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:33.197118 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 03:30:33.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:33.308950 systemd-oomd[1439]: No swap; memory pressure usage will be degraded Dec 16 03:30:33.311428 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 03:30:33.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:33.318964 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 03:30:33.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
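Note: the "Duplicate line for path" warnings above come from more than one tmpfiles.d fragment declaring the same path (here /var/lib/nfs/sm, /var/lib/nfs/sm.bak and /root). A small sketch, assuming the usual tmpfiles.d directories, that reports which fragments collide; systemd's real precedence rules (same-named fragments shadowing each other) are more involved than this:

    from collections import defaultdict
    from pathlib import Path

    seen = defaultdict(list)
    for d in ("/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d"):
        p = Path(d)
        if not p.is_dir():
            continue
        for frag in sorted(p.glob("*.conf")):
            for line in frag.read_text().splitlines():
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                fields = line.split()
                if len(fields) >= 2:
                    # tmpfiles.d(5) syntax: type path mode uid gid age argument
                    seen[fields[1]].append(frag.name)

    for path, frags in sorted(seen.items()):
        if len(frags) > 1:
            print(path, "declared in:", ", ".join(frags))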
res=success' Dec 16 03:30:33.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:33.347000 audit: BPF prog-id=8 op=UNLOAD Dec 16 03:30:33.347000 audit: BPF prog-id=7 op=UNLOAD Dec 16 03:30:33.348000 audit: BPF prog-id=35 op=LOAD Dec 16 03:30:33.348000 audit: BPF prog-id=36 op=LOAD Dec 16 03:30:33.321859 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 03:30:33.345302 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 03:30:33.349266 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 03:30:33.400370 systemd-udevd[1491]: Using default interface naming scheme 'v257'. Dec 16 03:30:33.515468 systemd-resolved[1440]: Positive Trust Anchors: Dec 16 03:30:33.515489 systemd-resolved[1440]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 03:30:33.515495 systemd-resolved[1440]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 03:30:33.515538 systemd-resolved[1440]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 03:30:33.526426 systemd-resolved[1440]: Defaulting to hostname 'linux'. Dec 16 03:30:33.527995 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 03:30:33.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:33.530925 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 03:30:33.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:30:33.534234 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 03:30:33.538000 audit: BPF prog-id=37 op=LOAD Dec 16 03:30:33.541587 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 03:30:33.541000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 03:30:33.541000 audit[1500]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd49aaf7f0 a2=420 a3=0 items=0 ppid=1450 pid=1500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.541000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 03:30:33.542730 augenrules[1500]: No rules Dec 16 03:30:33.545231 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
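Note: the audit PROCTITLE record above carries the command line as a hex string with NUL-separated arguments; decoded, it is the auditctl invocation that loaded /etc/audit/audit.rules (for which augenrules then reported no rules). A short decoding sketch:

    # Audit PROCTITLE payloads are hex-encoded argv with NUL separators.
    proctitle_hex = (
        "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    )
    argv = [a.decode() for a in bytes.fromhex(proctitle_hex).split(b"\x00")]
    print(argv)  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']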
Dec 16 03:30:33.547170 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 03:30:33.550211 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 03:30:33.550612 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 03:30:33.561949 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 03:30:33.566878 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 03:30:33.636370 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 03:30:33.680423 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 03:30:33.694954 systemd-networkd[1505]: lo: Link UP Dec 16 03:30:33.694965 systemd-networkd[1505]: lo: Gained carrier Dec 16 03:30:33.698968 systemd-networkd[1505]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:30:33.699160 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 03:30:33.699654 systemd-networkd[1505]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 03:30:33.701781 systemd[1]: Reached target network.target - Network. Dec 16 03:30:33.702833 systemd-networkd[1505]: eth0: Link UP Dec 16 03:30:33.704085 systemd-networkd[1505]: eth0: Gained carrier Dec 16 03:30:33.705214 systemd-networkd[1505]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:30:33.706692 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 03:30:34.020082 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 03:30:34.021416 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 16 03:30:34.037449 kernel: ACPI: button: Power Button [PWRF] Dec 16 03:30:34.038901 systemd-networkd[1505]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 03:30:34.042053 systemd-timesyncd[1441]: Network configuration changed, trying to establish connection. Dec 16 03:30:34.900904 systemd-timesyncd[1441]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 16 03:30:34.901073 systemd-timesyncd[1441]: Initial clock synchronization to Tue 2025-12-16 03:30:34.900701 UTC. Dec 16 03:30:34.901111 systemd-resolved[1440]: Clock change detected. Flushing caches. Dec 16 03:30:34.908932 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 03:30:34.919079 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 03:30:34.945513 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 03:30:34.957032 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 03:30:34.959272 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Dec 16 03:30:34.959901 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 16 03:30:34.965880 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 16 03:30:35.155819 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
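Note: at this point systemd-networkd has matched eth0 against zz-default.network and acquired 10.0.0.144/16 with gateway 10.0.0.1 over DHCPv4, and systemd-timesyncd has synchronized against 10.0.0.1. A quick way to re-check that link state from Python, assuming networkctl is on the path:

    import subprocess
    from pathlib import Path

    # Kernel's view of the carrier/link state for eth0.
    operstate = Path("/sys/class/net/eth0/operstate")
    if operstate.exists():
        print("eth0 operstate:", operstate.read_text().strip())

    # systemd-networkd's summary: addresses, gateway, DNS, and the matching .network file.
    subprocess.run(["networkctl", "status", "eth0"], check=False)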
Dec 16 03:30:35.255515 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 03:30:35.275238 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:30:35.280967 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:30:35.441773 kernel: kvm_amd: TSC scaling supported Dec 16 03:30:35.441930 kernel: kvm_amd: Nested Virtualization enabled Dec 16 03:30:35.441981 kernel: kvm_amd: Nested Paging enabled Dec 16 03:30:35.442852 kernel: kvm_amd: LBR virtualization supported Dec 16 03:30:35.443671 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 16 03:30:35.444845 kernel: kvm_amd: Virtual GIF supported Dec 16 03:30:35.469141 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:30:35.493022 kernel: EDAC MC: Ver: 3.0.0 Dec 16 03:30:35.648066 ldconfig[1462]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 03:30:35.986717 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 03:30:35.990565 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 03:30:36.037778 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 03:30:36.040368 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 03:30:36.042548 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 03:30:36.045018 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 03:30:36.047515 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 03:30:36.049848 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 03:30:36.051968 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 03:30:36.054181 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 03:30:36.056542 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 03:30:36.058507 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 03:30:36.060659 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 03:30:36.060703 systemd[1]: Reached target paths.target - Path Units. Dec 16 03:30:36.062315 systemd[1]: Reached target timers.target - Timer Units. Dec 16 03:30:36.065063 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 03:30:36.069129 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 03:30:36.074126 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 03:30:36.076382 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 03:30:36.078476 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 03:30:36.083821 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 03:30:36.086020 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 03:30:36.088728 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 03:30:36.091408 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 03:30:36.093074 systemd[1]: Reached target basic.target - Basic System. 
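Note: the ldconfig complaint above ("/usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start") is ldconfig scanning a library directory and skipping a file that does not begin with the 4-byte ELF magic 0x7f 'E' 'L' 'F'; ld.so.conf is plain text, so the check fails. A tiny sketch of the same magic-byte check:

    from pathlib import Path

    # ELF objects start with b"\x7fELF"; configuration files do not.
    path = Path("/usr/lib/ld.so.conf")
    magic = path.read_bytes()[:4] if path.is_file() else b""
    print(path, "looks like ELF:", magic == b"\x7fELF", "| first bytes:", magic)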
Dec 16 03:30:36.094737 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 03:30:36.094770 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 03:30:36.096031 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 03:30:36.098999 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 03:30:36.101552 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 03:30:36.102756 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 03:30:36.123092 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 03:30:36.149914 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 03:30:36.150147 jq[1572]: false Dec 16 03:30:36.151585 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 03:30:36.154843 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 03:30:36.159040 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 03:30:36.164096 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 03:30:36.165807 extend-filesystems[1573]: Found /dev/vda6 Dec 16 03:30:36.168534 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 03:30:36.171645 extend-filesystems[1573]: Found /dev/vda9 Dec 16 03:30:36.175090 extend-filesystems[1573]: Checking size of /dev/vda9 Dec 16 03:30:36.182189 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Refreshing passwd entry cache Dec 16 03:30:36.181264 oslogin_cache_refresh[1574]: Refreshing passwd entry cache Dec 16 03:30:36.184601 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 03:30:36.186654 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 03:30:36.187637 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 03:30:36.188528 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 03:30:36.190554 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Failure getting users, quitting Dec 16 03:30:36.190554 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 03:30:36.190554 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Refreshing group entry cache Dec 16 03:30:36.190151 oslogin_cache_refresh[1574]: Failure getting users, quitting Dec 16 03:30:36.190167 oslogin_cache_refresh[1574]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 03:30:36.190214 oslogin_cache_refresh[1574]: Refreshing group entry cache Dec 16 03:30:36.192097 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 03:30:36.198524 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Failure getting groups, quitting Dec 16 03:30:36.198524 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Dec 16 03:30:36.198061 oslogin_cache_refresh[1574]: Failure getting groups, quitting Dec 16 03:30:36.198075 oslogin_cache_refresh[1574]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 03:30:36.200581 jq[1595]: true Dec 16 03:30:36.199692 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 03:30:36.202221 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 03:30:36.202719 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 03:30:36.203143 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 03:30:36.203663 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 03:30:36.206132 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 03:30:36.206504 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 03:30:36.210323 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 03:30:36.210632 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 03:30:36.211382 update_engine[1593]: I20251216 03:30:36.210779 1593 main.cc:92] Flatcar Update Engine starting Dec 16 03:30:36.229595 extend-filesystems[1573]: Resized partition /dev/vda9 Dec 16 03:30:36.234769 jq[1602]: true Dec 16 03:30:36.255117 systemd-networkd[1505]: eth0: Gained IPv6LL Dec 16 03:30:36.265547 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 03:30:36.314489 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 03:30:36.318091 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 16 03:30:36.366857 extend-filesystems[1637]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 03:30:36.370855 systemd-logind[1591]: Watching system buttons on /dev/input/event2 (Power Button) Dec 16 03:30:36.371288 systemd-logind[1591]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 03:30:36.371836 systemd-logind[1591]: New seat seat0. Dec 16 03:30:36.388676 dbus-daemon[1570]: [system] SELinux support is enabled Dec 16 03:30:36.393751 update_engine[1593]: I20251216 03:30:36.393689 1593 update_check_scheduler.cc:74] Next update check in 6m49s Dec 16 03:30:36.427165 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:30:36.432120 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 03:30:36.434962 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 03:30:36.437608 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 03:30:36.467006 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Dec 16 03:30:36.485756 dbus-daemon[1570]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 03:30:36.499699 tar[1599]: linux-amd64/LICENSE Dec 16 03:30:36.499699 tar[1599]: linux-amd64/helm Dec 16 03:30:36.496774 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 16 03:30:36.497401 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 16 03:30:36.515979 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Dec 16 03:30:36.511500 systemd[1]: Started update-engine.service - Update Engine. Dec 16 03:30:36.521459 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Dec 16 03:30:36.521755 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 03:30:36.521910 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 03:30:36.526153 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 03:30:36.526322 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 03:30:36.532504 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 03:30:36.553254 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 03:30:36.629657 bash[1639]: Updated "/home/core/.ssh/authorized_keys" Dec 16 03:30:36.631862 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 03:30:36.634401 extend-filesystems[1637]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 03:30:36.634401 extend-filesystems[1637]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 16 03:30:36.634401 extend-filesystems[1637]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Dec 16 03:30:36.644387 extend-filesystems[1573]: Resized filesystem in /dev/vda9 Dec 16 03:30:36.647276 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 03:30:36.647746 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 03:30:36.653455 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 03:30:36.666651 locksmithd[1658]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 03:30:36.859450 containerd[1605]: time="2025-12-16T03:30:36Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 03:30:36.860480 containerd[1605]: time="2025-12-16T03:30:36.860332232Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 03:30:36.869825 sshd_keygen[1596]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 03:30:36.876904 containerd[1605]: time="2025-12-16T03:30:36.876762465Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="19.326µs" Dec 16 03:30:36.876904 containerd[1605]: time="2025-12-16T03:30:36.876818480Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 03:30:36.876904 containerd[1605]: time="2025-12-16T03:30:36.876877350Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 03:30:36.876904 containerd[1605]: time="2025-12-16T03:30:36.876898981Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 03:30:36.879719 containerd[1605]: time="2025-12-16T03:30:36.879242275Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 03:30:36.879719 containerd[1605]: time="2025-12-16T03:30:36.879279996Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 03:30:36.879719 
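Note: the extend-filesystems output above records the root ext4 filesystem on /dev/vda9 growing from 456704 to 1784827 blocks at a 4k block size, roughly 1.7 GiB to 6.8 GiB, resized online because it is mounted on /. The arithmetic:

    # Block counts are taken from the resize2fs / EXT4-fs lines above.
    BLOCK_SIZE = 4096
    before, after = 456_704, 1_784_827
    for label, blocks in (("before", before), ("after", after)):
        print(f"{label}: {blocks} blocks = {blocks * BLOCK_SIZE / 2**30:.2f} GiB")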
containerd[1605]: time="2025-12-16T03:30:36.879361359Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 03:30:36.879719 containerd[1605]: time="2025-12-16T03:30:36.879373932Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 03:30:36.879719 containerd[1605]: time="2025-12-16T03:30:36.879677111Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 03:30:36.879874 containerd[1605]: time="2025-12-16T03:30:36.879720502Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 03:30:36.879874 containerd[1605]: time="2025-12-16T03:30:36.879738326Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 03:30:36.879874 containerd[1605]: time="2025-12-16T03:30:36.879751009Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 03:30:36.879979 containerd[1605]: time="2025-12-16T03:30:36.879959751Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 03:30:36.879979 containerd[1605]: time="2025-12-16T03:30:36.879977143Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 03:30:36.881167 containerd[1605]: time="2025-12-16T03:30:36.880981176Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 03:30:36.881459 containerd[1605]: time="2025-12-16T03:30:36.881308500Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 03:30:36.881459 containerd[1605]: time="2025-12-16T03:30:36.881363594Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 03:30:36.881459 containerd[1605]: time="2025-12-16T03:30:36.881377730Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 03:30:36.881459 containerd[1605]: time="2025-12-16T03:30:36.881448122Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 03:30:36.883912 containerd[1605]: time="2025-12-16T03:30:36.883885673Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 03:30:36.884265 containerd[1605]: time="2025-12-16T03:30:36.884247181Z" level=info msg="metadata content store policy set" policy=shared Dec 16 03:30:36.923874 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 03:30:36.937213 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 03:30:36.959712 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 03:30:36.960078 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Dec 16 03:30:36.965399 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 03:30:37.046521 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 03:30:37.051909 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 03:30:37.056345 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 03:30:37.058491 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 03:30:37.306866 tar[1599]: linux-amd64/README.md Dec 16 03:30:37.333526 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 03:30:37.373458 containerd[1605]: time="2025-12-16T03:30:37.373351386Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 03:30:37.373600 containerd[1605]: time="2025-12-16T03:30:37.373502740Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 03:30:37.373663 containerd[1605]: time="2025-12-16T03:30:37.373630410Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 03:30:37.373847 containerd[1605]: time="2025-12-16T03:30:37.373801631Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 03:30:37.373847 containerd[1605]: time="2025-12-16T03:30:37.373832048Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 03:30:37.373847 containerd[1605]: time="2025-12-16T03:30:37.373847056Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 03:30:37.373971 containerd[1605]: time="2025-12-16T03:30:37.373858207Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 03:30:37.373971 containerd[1605]: time="2025-12-16T03:30:37.373867965Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 03:30:37.373971 containerd[1605]: time="2025-12-16T03:30:37.373879366Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 03:30:37.373971 containerd[1605]: time="2025-12-16T03:30:37.373892010Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 03:30:37.373971 containerd[1605]: time="2025-12-16T03:30:37.373926785Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 03:30:37.373971 containerd[1605]: time="2025-12-16T03:30:37.373938678Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 03:30:37.373971 containerd[1605]: time="2025-12-16T03:30:37.373967101Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 03:30:37.374190 containerd[1605]: time="2025-12-16T03:30:37.373982480Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 03:30:37.374190 containerd[1605]: time="2025-12-16T03:30:37.374134545Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 03:30:37.374190 containerd[1605]: time="2025-12-16T03:30:37.374159482Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers 
type=io.containerd.grpc.v1 Dec 16 03:30:37.374190 containerd[1605]: time="2025-12-16T03:30:37.374174390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 03:30:37.374190 containerd[1605]: time="2025-12-16T03:30:37.374184509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 03:30:37.374190 containerd[1605]: time="2025-12-16T03:30:37.374194718Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 03:30:37.374373 containerd[1605]: time="2025-12-16T03:30:37.374205518Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 03:30:37.374373 containerd[1605]: time="2025-12-16T03:30:37.374222851Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 03:30:37.374373 containerd[1605]: time="2025-12-16T03:30:37.374237288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 03:30:37.374373 containerd[1605]: time="2025-12-16T03:30:37.374266553Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 03:30:37.374373 containerd[1605]: time="2025-12-16T03:30:37.374278285Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 03:30:37.374373 containerd[1605]: time="2025-12-16T03:30:37.374302069Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 03:30:37.374373 containerd[1605]: time="2025-12-16T03:30:37.374334730Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 03:30:37.374595 containerd[1605]: time="2025-12-16T03:30:37.374403950Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 03:30:37.374595 containerd[1605]: time="2025-12-16T03:30:37.374426783Z" level=info msg="Start snapshots syncer" Dec 16 03:30:37.374595 containerd[1605]: time="2025-12-16T03:30:37.374457741Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 03:30:37.374830 containerd[1605]: time="2025-12-16T03:30:37.374784694Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 03:30:37.375127 containerd[1605]: time="2025-12-16T03:30:37.374850588Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 03:30:37.375127 containerd[1605]: time="2025-12-16T03:30:37.374913816Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 03:30:37.375127 containerd[1605]: time="2025-12-16T03:30:37.375072945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 03:30:37.375127 containerd[1605]: time="2025-12-16T03:30:37.375093644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 03:30:37.375127 containerd[1605]: time="2025-12-16T03:30:37.375105857Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 03:30:37.375127 containerd[1605]: time="2025-12-16T03:30:37.375115495Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 03:30:37.375127 containerd[1605]: time="2025-12-16T03:30:37.375129461Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 03:30:37.375322 containerd[1605]: time="2025-12-16T03:30:37.375139199Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 03:30:37.375322 containerd[1605]: time="2025-12-16T03:30:37.375153316Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 03:30:37.375322 containerd[1605]: time="2025-12-16T03:30:37.375165168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 
03:30:37.375322 containerd[1605]: time="2025-12-16T03:30:37.375176409Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 03:30:37.375322 containerd[1605]: time="2025-12-16T03:30:37.375206215Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 03:30:37.375322 containerd[1605]: time="2025-12-16T03:30:37.375217446Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 03:30:37.375322 containerd[1605]: time="2025-12-16T03:30:37.375226643Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 03:30:37.375322 containerd[1605]: time="2025-12-16T03:30:37.375235450Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 03:30:37.375322 containerd[1605]: time="2025-12-16T03:30:37.375243074Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 03:30:37.375322 containerd[1605]: time="2025-12-16T03:30:37.375253503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 03:30:37.375322 containerd[1605]: time="2025-12-16T03:30:37.375263362Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 03:30:37.375322 containerd[1605]: time="2025-12-16T03:30:37.375320669Z" level=info msg="runtime interface created" Dec 16 03:30:37.375322 containerd[1605]: time="2025-12-16T03:30:37.375328895Z" level=info msg="created NRI interface" Dec 16 03:30:37.375665 containerd[1605]: time="2025-12-16T03:30:37.375347069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 03:30:37.375665 containerd[1605]: time="2025-12-16T03:30:37.375360003Z" level=info msg="Connect containerd service" Dec 16 03:30:37.375665 containerd[1605]: time="2025-12-16T03:30:37.375384589Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 03:30:37.376371 containerd[1605]: time="2025-12-16T03:30:37.376325995Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 03:30:37.567831 containerd[1605]: time="2025-12-16T03:30:37.567704788Z" level=info msg="Start subscribing containerd event" Dec 16 03:30:37.567831 containerd[1605]: time="2025-12-16T03:30:37.567775160Z" level=info msg="Start recovering state" Dec 16 03:30:37.567961 containerd[1605]: time="2025-12-16T03:30:37.567907178Z" level=info msg="Start event monitor" Dec 16 03:30:37.567961 containerd[1605]: time="2025-12-16T03:30:37.567922887Z" level=info msg="Start cni network conf syncer for default" Dec 16 03:30:37.568020 containerd[1605]: time="2025-12-16T03:30:37.567962451Z" level=info msg="Start streaming server" Dec 16 03:30:37.568020 containerd[1605]: time="2025-12-16T03:30:37.567995033Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 03:30:37.568020 containerd[1605]: time="2025-12-16T03:30:37.568004380Z" level=info msg="runtime interface starting up..." 
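The "failed to load cni during init" error above is the expected state before any network add-on has written a config into /etc/cni/net.d (the CRI config earlier in the log points confDir at that directory and binDirs at /opt/cni/bin). Purely as an illustration of what eventually satisfies the conf syncer, here is a sketch that writes a minimal bridge conflist; the file name, network name and subnet are assumptions, and on a real cluster this file is normally installed by the chosen CNI add-on rather than by hand:

```python
import json
from pathlib import Path

# Illustrative values only; a real cluster gets this file from its CNI add-on.
CONF_DIR = Path("/etc/cni/net.d")
conflist = {
    "cniVersion": "1.0.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

CONF_DIR.mkdir(parents=True, exist_ok=True)
(CONF_DIR / "10-example.conflist").write_text(json.dumps(conflist, indent=2))
```

Once any config file appears in that directory, the cni network conf syncer started a few entries later picks it up, so the error above does not by itself indicate a fault.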
Dec 16 03:30:37.568020 containerd[1605]: time="2025-12-16T03:30:37.568012525Z" level=info msg="starting plugins..." Dec 16 03:30:37.568091 containerd[1605]: time="2025-12-16T03:30:37.568038023Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 03:30:37.569016 containerd[1605]: time="2025-12-16T03:30:37.568936067Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 03:30:37.569124 containerd[1605]: time="2025-12-16T03:30:37.569043158Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 03:30:37.571032 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 03:30:37.613569 containerd[1605]: time="2025-12-16T03:30:37.571178853Z" level=info msg="containerd successfully booted in 0.712650s" Dec 16 03:30:38.235188 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:30:38.251339 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 03:30:38.252189 (kubelet)[1709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:30:38.253736 systemd[1]: Startup finished in 3.452s (kernel) + 8.263s (initrd) + 7.270s (userspace) = 18.986s. Dec 16 03:30:39.062854 kubelet[1709]: E1216 03:30:39.062788 1709 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:30:39.067334 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:30:39.067544 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:30:39.068010 systemd[1]: kubelet.service: Consumed 2.043s CPU time, 265.7M memory peak. Dec 16 03:30:45.612285 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 03:30:45.613886 systemd[1]: Started sshd@0-10.0.0.144:22-10.0.0.1:38958.service - OpenSSH per-connection server daemon (10.0.0.1:38958). Dec 16 03:30:45.714364 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 38958 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:30:45.717258 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:30:45.726036 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 03:30:45.727459 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 03:30:45.733000 systemd-logind[1591]: New session 1 of user core. Dec 16 03:30:45.752433 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 03:30:45.755641 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 03:30:45.775155 (systemd)[1728]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:30:45.778815 systemd-logind[1591]: New session 2 of user core. Dec 16 03:30:45.945996 systemd[1728]: Queued start job for default target default.target. Dec 16 03:30:45.965891 systemd[1728]: Created slice app.slice - User Application Slice. Dec 16 03:30:45.965927 systemd[1728]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 03:30:45.965959 systemd[1728]: Reached target paths.target - Paths. 
Dec 16 03:30:45.966023 systemd[1728]: Reached target timers.target - Timers. Dec 16 03:30:45.967727 systemd[1728]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 03:30:45.968775 systemd[1728]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 03:30:45.982499 systemd[1728]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 03:30:45.982708 systemd[1728]: Reached target sockets.target - Sockets. Dec 16 03:30:45.983291 systemd[1728]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 03:30:45.983533 systemd[1728]: Reached target basic.target - Basic System. Dec 16 03:30:45.983594 systemd[1728]: Reached target default.target - Main User Target. Dec 16 03:30:45.983628 systemd[1728]: Startup finished in 197ms. Dec 16 03:30:45.984021 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 03:30:45.985899 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 03:30:46.014566 systemd[1]: Started sshd@1-10.0.0.144:22-10.0.0.1:38972.service - OpenSSH per-connection server daemon (10.0.0.1:38972). Dec 16 03:30:46.068153 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 38972 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:30:46.069909 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:30:46.074572 systemd-logind[1591]: New session 3 of user core. Dec 16 03:30:46.088136 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 03:30:46.103379 sshd[1746]: Connection closed by 10.0.0.1 port 38972 Dec 16 03:30:46.103713 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Dec 16 03:30:46.117438 systemd[1]: sshd@1-10.0.0.144:22-10.0.0.1:38972.service: Deactivated successfully. Dec 16 03:30:46.119673 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 03:30:46.120618 systemd-logind[1591]: Session 3 logged out. Waiting for processes to exit. Dec 16 03:30:46.123912 systemd[1]: Started sshd@2-10.0.0.144:22-10.0.0.1:38988.service - OpenSSH per-connection server daemon (10.0.0.1:38988). Dec 16 03:30:46.124840 systemd-logind[1591]: Removed session 3. Dec 16 03:30:46.191703 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 38988 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:30:46.193479 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:30:46.198070 systemd-logind[1591]: New session 4 of user core. Dec 16 03:30:46.209091 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 03:30:46.218650 sshd[1756]: Connection closed by 10.0.0.1 port 38988 Dec 16 03:30:46.219003 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Dec 16 03:30:46.227812 systemd[1]: sshd@2-10.0.0.144:22-10.0.0.1:38988.service: Deactivated successfully. Dec 16 03:30:46.229751 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 03:30:46.230673 systemd-logind[1591]: Session 4 logged out. Waiting for processes to exit. Dec 16 03:30:46.233734 systemd[1]: Started sshd@3-10.0.0.144:22-10.0.0.1:39000.service - OpenSSH per-connection server daemon (10.0.0.1:39000). Dec 16 03:30:46.234366 systemd-logind[1591]: Removed session 4. 
Dec 16 03:30:46.288454 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 39000 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:30:46.290599 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:30:46.295815 systemd-logind[1591]: New session 5 of user core. Dec 16 03:30:46.309250 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 03:30:46.323381 sshd[1766]: Connection closed by 10.0.0.1 port 39000 Dec 16 03:30:46.323707 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Dec 16 03:30:46.336433 systemd[1]: sshd@3-10.0.0.144:22-10.0.0.1:39000.service: Deactivated successfully. Dec 16 03:30:46.338895 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 03:30:46.339836 systemd-logind[1591]: Session 5 logged out. Waiting for processes to exit. Dec 16 03:30:46.343397 systemd[1]: Started sshd@4-10.0.0.144:22-10.0.0.1:39002.service - OpenSSH per-connection server daemon (10.0.0.1:39002). Dec 16 03:30:46.344260 systemd-logind[1591]: Removed session 5. Dec 16 03:30:46.408963 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 39002 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:30:46.410680 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:30:46.415562 systemd-logind[1591]: New session 6 of user core. Dec 16 03:30:46.425144 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 03:30:46.448453 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 03:30:46.448860 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:30:47.108986 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 03:30:47.133383 (dockerd)[1800]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 03:30:47.585302 dockerd[1800]: time="2025-12-16T03:30:47.585217230Z" level=info msg="Starting up" Dec 16 03:30:47.586012 dockerd[1800]: time="2025-12-16T03:30:47.585976143Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 03:30:47.602490 dockerd[1800]: time="2025-12-16T03:30:47.602431162Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 03:30:47.944275 dockerd[1800]: time="2025-12-16T03:30:47.943808366Z" level=info msg="Loading containers: start." Dec 16 03:30:47.955989 kernel: Initializing XFRM netlink socket Dec 16 03:30:49.290418 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 03:30:49.292118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:30:49.407902 systemd-networkd[1505]: docker0: Link UP Dec 16 03:30:50.017312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 03:30:50.021990 (kubelet)[1986]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:30:50.071979 kubelet[1986]: E1216 03:30:50.071900 1986 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:30:50.078493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:30:50.078689 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:30:50.079146 systemd[1]: kubelet.service: Consumed 293ms CPU time, 111.7M memory peak. Dec 16 03:30:51.303828 dockerd[1800]: time="2025-12-16T03:30:51.303742222Z" level=info msg="Loading containers: done." Dec 16 03:30:51.968705 dockerd[1800]: time="2025-12-16T03:30:51.968616939Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 03:30:51.968931 dockerd[1800]: time="2025-12-16T03:30:51.968786888Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 03:30:51.969000 dockerd[1800]: time="2025-12-16T03:30:51.968935567Z" level=info msg="Initializing buildkit" Dec 16 03:30:52.488760 dockerd[1800]: time="2025-12-16T03:30:52.488350645Z" level=info msg="Completed buildkit initialization" Dec 16 03:30:52.512052 dockerd[1800]: time="2025-12-16T03:30:52.511938210Z" level=info msg="Daemon has completed initialization" Dec 16 03:30:52.512440 dockerd[1800]: time="2025-12-16T03:30:52.512086268Z" level=info msg="API listen on /run/docker.sock" Dec 16 03:30:52.512450 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 03:30:53.512633 containerd[1605]: time="2025-12-16T03:30:53.512570723Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 16 03:30:55.897588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2084973366.mount: Deactivated successfully. Dec 16 03:31:00.300136 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 03:31:00.317061 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:31:00.982850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:31:01.004734 (kubelet)[2103]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:31:01.315645 kubelet[2103]: E1216 03:31:01.315370 2103 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:31:01.319724 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:31:01.319974 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:31:01.320434 systemd[1]: kubelet.service: Consumed 508ms CPU time, 110.8M memory peak. 
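The kubelet exits above, and the climbing restart counter, all trace back to the same condition: /var/lib/kubelet/config.yaml does not exist yet, which is normal on a node where kubeadm init or kubeadm join has not run. A minimal sketch of that pre-flight condition, with the path taken from the log and everything else assumed for illustration:

```python
import sys
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the log above

def check_kubelet_config() -> None:
    # Mirrors the failure mode in the journal: the unit keeps exiting until
    # kubeadm (or another provisioner) writes this file.
    if not KUBELET_CONFIG.is_file():
        sys.exit(f"failed to load Kubelet config file {KUBELET_CONFIG}: "
                 "no such file or directory")

if __name__ == "__main__":
    check_kubelet_config()
```

The "Scheduled restart job, restart counter is at N" lines are just systemd's restart policy retrying the unit until that file shows up.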
Dec 16 03:31:03.544427 containerd[1605]: time="2025-12-16T03:31:03.544322960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:03.635702 containerd[1605]: time="2025-12-16T03:31:03.635597282Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=28928582" Dec 16 03:31:03.807246 containerd[1605]: time="2025-12-16T03:31:03.804568142Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:03.847471 containerd[1605]: time="2025-12-16T03:31:03.842012895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:03.847471 containerd[1605]: time="2025-12-16T03:31:03.843355512Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 10.330724035s" Dec 16 03:31:03.847471 containerd[1605]: time="2025-12-16T03:31:03.843400878Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Dec 16 03:31:03.847471 containerd[1605]: time="2025-12-16T03:31:03.846577254Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 16 03:31:10.674278 containerd[1605]: time="2025-12-16T03:31:10.674155273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:10.685644 containerd[1605]: time="2025-12-16T03:31:10.685545412Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24985218" Dec 16 03:31:10.698977 containerd[1605]: time="2025-12-16T03:31:10.697609159Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:10.702260 containerd[1605]: time="2025-12-16T03:31:10.702186394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:10.703014 containerd[1605]: time="2025-12-16T03:31:10.702959435Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 6.85631938s" Dec 16 03:31:10.703085 containerd[1605]: time="2025-12-16T03:31:10.703022527Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Dec 16 
03:31:10.704501 containerd[1605]: time="2025-12-16T03:31:10.704272263Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 16 03:31:11.551131 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 03:31:11.558291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:31:12.042186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:31:12.074658 (kubelet)[2124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:31:12.199707 kubelet[2124]: E1216 03:31:12.199596 2124 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:31:12.206529 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:31:12.206736 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:31:12.207291 systemd[1]: kubelet.service: Consumed 463ms CPU time, 110.5M memory peak. Dec 16 03:31:15.313670 containerd[1605]: time="2025-12-16T03:31:15.313559599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:15.319378 containerd[1605]: time="2025-12-16T03:31:15.319299016Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19396111" Dec 16 03:31:15.335930 containerd[1605]: time="2025-12-16T03:31:15.335806829Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:15.359815 containerd[1605]: time="2025-12-16T03:31:15.359710926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:15.361863 containerd[1605]: time="2025-12-16T03:31:15.361762671Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 4.657431905s" Dec 16 03:31:15.361863 containerd[1605]: time="2025-12-16T03:31:15.361815332Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\"" Dec 16 03:31:15.362680 containerd[1605]: time="2025-12-16T03:31:15.362595593Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 16 03:31:18.935918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3353053476.mount: Deactivated successfully. 
Dec 16 03:31:20.340652 containerd[1605]: time="2025-12-16T03:31:20.340569473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:20.350105 containerd[1605]: time="2025-12-16T03:31:20.350029681Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31157702" Dec 16 03:31:20.372755 containerd[1605]: time="2025-12-16T03:31:20.372679098Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:20.394421 containerd[1605]: time="2025-12-16T03:31:20.394364053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:20.395120 containerd[1605]: time="2025-12-16T03:31:20.395088717Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 5.032399494s" Dec 16 03:31:20.395120 containerd[1605]: time="2025-12-16T03:31:20.395119876Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Dec 16 03:31:20.395658 containerd[1605]: time="2025-12-16T03:31:20.395618009Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 16 03:31:21.273299 update_engine[1593]: I20251216 03:31:21.273186 1593 update_attempter.cc:509] Updating boot flags... Dec 16 03:31:22.290361 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 03:31:22.292312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:31:22.516443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:31:22.538340 (kubelet)[2171]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:31:22.581503 kubelet[2171]: E1216 03:31:22.581329 2171 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:31:22.585205 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:31:22.585416 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:31:22.585922 systemd[1]: kubelet.service: Consumed 224ms CPU time, 110.2M memory peak. Dec 16 03:31:26.112036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3280079104.mount: Deactivated successfully. 
Dec 16 03:31:27.011682 containerd[1605]: time="2025-12-16T03:31:27.011592710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:27.012666 containerd[1605]: time="2025-12-16T03:31:27.012612855Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18556565" Dec 16 03:31:27.015163 containerd[1605]: time="2025-12-16T03:31:27.014218252Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:27.021279 containerd[1605]: time="2025-12-16T03:31:27.021188386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:27.022226 containerd[1605]: time="2025-12-16T03:31:27.022178855Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 6.626518496s" Dec 16 03:31:27.022226 containerd[1605]: time="2025-12-16T03:31:27.022225794Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Dec 16 03:31:27.022848 containerd[1605]: time="2025-12-16T03:31:27.022812067Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 03:31:28.215698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3273644659.mount: Deactivated successfully. 
Dec 16 03:31:28.224489 containerd[1605]: time="2025-12-16T03:31:28.224396061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:31:28.225516 containerd[1605]: time="2025-12-16T03:31:28.225430592Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 03:31:28.227173 containerd[1605]: time="2025-12-16T03:31:28.227096320Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:31:28.229649 containerd[1605]: time="2025-12-16T03:31:28.229599235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:31:28.230328 containerd[1605]: time="2025-12-16T03:31:28.230269777Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.20742545s" Dec 16 03:31:28.230328 containerd[1605]: time="2025-12-16T03:31:28.230315764Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 03:31:28.230842 containerd[1605]: time="2025-12-16T03:31:28.230811784Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 16 03:31:30.909531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount154780310.mount: Deactivated successfully. Dec 16 03:31:32.790674 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 16 03:31:32.792630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:31:34.167108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:31:34.189367 (kubelet)[2295]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:31:34.777403 kubelet[2295]: E1216 03:31:34.777318 2295 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:31:34.781756 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:31:34.782017 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:31:34.782537 systemd[1]: kubelet.service: Consumed 296ms CPU time, 110.9M memory peak. 
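The pull completions report both the image size and the elapsed time, so effective throughput can be read straight off the log; for example the coredns pull above moved a reported 18562039 bytes in 6.626518496s, roughly 2.8 MB/s. A small sketch of that arithmetic (the figures are copied from the journal; the helper itself is only illustration):

```python
def pull_rate(size_bytes: int, seconds: float) -> float:
    """Average pull throughput in MB/s (10^6 bytes per second)."""
    return size_bytes / seconds / 1e6

# Figures copied from the coredns pull in the journal above.
print(f"{pull_rate(18_562_039, 6.626518496):.2f} MB/s")   # ~2.80 MB/s
```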
Dec 16 03:31:35.125496 containerd[1605]: time="2025-12-16T03:31:35.125370507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:35.126586 containerd[1605]: time="2025-12-16T03:31:35.126537029Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55835644" Dec 16 03:31:35.128011 containerd[1605]: time="2025-12-16T03:31:35.127967581Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:35.130892 containerd[1605]: time="2025-12-16T03:31:35.130800981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:35.132051 containerd[1605]: time="2025-12-16T03:31:35.132017379Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 6.901173243s" Dec 16 03:31:35.132100 containerd[1605]: time="2025-12-16T03:31:35.132053688Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Dec 16 03:31:37.672065 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:31:37.672241 systemd[1]: kubelet.service: Consumed 296ms CPU time, 110.9M memory peak. Dec 16 03:31:37.674674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:31:37.705846 systemd[1]: Reload requested from client PID 2335 ('systemctl') (unit session-6.scope)... Dec 16 03:31:37.705871 systemd[1]: Reloading... Dec 16 03:31:37.792974 zram_generator::config[2382]: No configuration found. Dec 16 03:31:38.463186 systemd[1]: Reloading finished in 756 ms. Dec 16 03:31:38.544013 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 03:31:38.544129 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 03:31:38.544561 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:31:38.544630 systemd[1]: kubelet.service: Consumed 161ms CPU time, 98.5M memory peak. Dec 16 03:31:38.546641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:31:38.752393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:31:38.771482 (kubelet)[2429]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 03:31:38.812445 kubelet[2429]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:31:38.812445 kubelet[2429]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 03:31:38.812445 kubelet[2429]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:31:38.812897 kubelet[2429]: I1216 03:31:38.812530 2429 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 03:31:39.098332 kubelet[2429]: I1216 03:31:39.098277 2429 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 03:31:39.098332 kubelet[2429]: I1216 03:31:39.098313 2429 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 03:31:39.098604 kubelet[2429]: I1216 03:31:39.098580 2429 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 03:31:39.166642 kubelet[2429]: E1216 03:31:39.166560 2429 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:31:39.180013 kubelet[2429]: I1216 03:31:39.179961 2429 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 03:31:39.206601 kubelet[2429]: I1216 03:31:39.206555 2429 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 03:31:39.211833 kubelet[2429]: I1216 03:31:39.211781 2429 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 03:31:39.223654 kubelet[2429]: I1216 03:31:39.223588 2429 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 03:31:39.223871 kubelet[2429]: I1216 03:31:39.223639 2429 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 03:31:39.224016 kubelet[2429]: I1216 03:31:39.223874 
2429 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 03:31:39.224016 kubelet[2429]: I1216 03:31:39.223885 2429 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 03:31:39.224096 kubelet[2429]: I1216 03:31:39.224075 2429 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:31:39.229452 kubelet[2429]: I1216 03:31:39.229395 2429 kubelet.go:446] "Attempting to sync node with API server" Dec 16 03:31:39.229452 kubelet[2429]: I1216 03:31:39.229458 2429 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 03:31:39.229682 kubelet[2429]: I1216 03:31:39.229498 2429 kubelet.go:352] "Adding apiserver pod source" Dec 16 03:31:39.229682 kubelet[2429]: I1216 03:31:39.229515 2429 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 03:31:39.232877 kubelet[2429]: W1216 03:31:39.232770 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Dec 16 03:31:39.232877 kubelet[2429]: W1216 03:31:39.232838 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Dec 16 03:31:39.233007 kubelet[2429]: E1216 03:31:39.232868 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:31:39.233007 kubelet[2429]: E1216 03:31:39.232925 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:31:39.245285 kubelet[2429]: I1216 03:31:39.245250 2429 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 03:31:39.245858 kubelet[2429]: I1216 03:31:39.245822 2429 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 03:31:39.246481 kubelet[2429]: W1216 03:31:39.246452 2429 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
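The NodeConfig dump above includes the hard-eviction thresholds the kubelet is running with (memory.available 100Mi, nodefs.available 10%, nodefs.inodesFree 5%, imagefs.available 15%, imagefs.inodesFree 5%). Expressed the way a KubeletConfiguration file would state them, and assuming nothing overrides these defaults, that corresponds roughly to the mapping below (shown as a Python literal for consistency with the other sketches):

```python
# Equivalent evictionHard block of a KubeletConfiguration, transcribed from the
# HardEvictionThresholds values logged above (0.05/0.1/0.15 rendered as the
# percentage strings the config file would use).
eviction_hard = {
    "memory.available": "100Mi",
    "nodefs.available": "10%",
    "nodefs.inodesFree": "5%",
    "imagefs.available": "15%",
    "imagefs.inodesFree": "5%",
}
```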
Dec 16 03:31:39.250155 kubelet[2429]: I1216 03:31:39.250126 2429 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 03:31:39.250315 kubelet[2429]: I1216 03:31:39.250180 2429 server.go:1287] "Started kubelet" Dec 16 03:31:39.250520 kubelet[2429]: I1216 03:31:39.250454 2429 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 03:31:39.250520 kubelet[2429]: I1216 03:31:39.250473 2429 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 03:31:39.251248 kubelet[2429]: I1216 03:31:39.250988 2429 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 03:31:39.251554 kubelet[2429]: I1216 03:31:39.251522 2429 server.go:479] "Adding debug handlers to kubelet server" Dec 16 03:31:39.252887 kubelet[2429]: I1216 03:31:39.252857 2429 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 03:31:39.252954 kubelet[2429]: I1216 03:31:39.252913 2429 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 03:31:39.265576 kubelet[2429]: I1216 03:31:39.265538 2429 factory.go:221] Registration of the systemd container factory successfully Dec 16 03:31:39.265756 kubelet[2429]: I1216 03:31:39.265688 2429 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 03:31:39.266927 kubelet[2429]: E1216 03:31:39.266897 2429 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 03:31:39.267235 kubelet[2429]: I1216 03:31:39.267206 2429 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 03:31:39.267394 kubelet[2429]: I1216 03:31:39.267360 2429 factory.go:221] Registration of the containerd container factory successfully Dec 16 03:31:39.267394 kubelet[2429]: I1216 03:31:39.267381 2429 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 03:31:39.267514 kubelet[2429]: I1216 03:31:39.267492 2429 reconciler.go:26] "Reconciler: start to sync state" Dec 16 03:31:39.268136 kubelet[2429]: W1216 03:31:39.268086 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Dec 16 03:31:39.268238 kubelet[2429]: E1216 03:31:39.268166 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:31:39.268516 kubelet[2429]: E1216 03:31:39.268481 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:31:39.268631 kubelet[2429]: E1216 03:31:39.268600 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="200ms" Dec 16 03:31:39.276690 kubelet[2429]: E1216 03:31:39.275003 2429 
event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.144:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.144:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188194959f5539a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 03:31:39.250145705 +0000 UTC m=+0.474482559,LastTimestamp:2025-12-16 03:31:39.250145705 +0000 UTC m=+0.474482559,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 03:31:39.287075 kubelet[2429]: I1216 03:31:39.287044 2429 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 03:31:39.287075 kubelet[2429]: I1216 03:31:39.287062 2429 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 03:31:39.287075 kubelet[2429]: I1216 03:31:39.287085 2429 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:31:39.293744 kubelet[2429]: I1216 03:31:39.293689 2429 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 03:31:39.295072 kubelet[2429]: I1216 03:31:39.295043 2429 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 03:31:39.295136 kubelet[2429]: I1216 03:31:39.295082 2429 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 03:31:39.295136 kubelet[2429]: I1216 03:31:39.295118 2429 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 03:31:39.295136 kubelet[2429]: I1216 03:31:39.295133 2429 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 03:31:39.295256 kubelet[2429]: E1216 03:31:39.295214 2429 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 03:31:39.295811 kubelet[2429]: W1216 03:31:39.295764 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Dec 16 03:31:39.295857 kubelet[2429]: E1216 03:31:39.295823 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:31:39.368939 kubelet[2429]: E1216 03:31:39.368778 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:31:39.396168 kubelet[2429]: E1216 03:31:39.396111 2429 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 03:31:39.469590 kubelet[2429]: E1216 03:31:39.469537 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:31:39.469926 kubelet[2429]: E1216 03:31:39.469888 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="400ms" Dec 16 03:31:39.570447 kubelet[2429]: E1216 03:31:39.570393 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:31:39.596876 kubelet[2429]: E1216 03:31:39.596812 2429 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 03:31:39.671626 kubelet[2429]: E1216 03:31:39.671491 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:31:39.771696 kubelet[2429]: E1216 03:31:39.771637 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:31:39.870657 kubelet[2429]: E1216 03:31:39.870604 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="800ms" Dec 16 03:31:39.872682 kubelet[2429]: E1216 03:31:39.872652 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:31:39.973729 kubelet[2429]: E1216 03:31:39.973577 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:31:39.998001 kubelet[2429]: E1216 03:31:39.997924 2429 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 03:31:40.074763 kubelet[2429]: E1216 03:31:40.074683 2429 kubelet_node_status.go:466] "Error getting the current 
node from lister" err="node \"localhost\" not found" Dec 16 03:31:40.175866 kubelet[2429]: E1216 03:31:40.175756 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:31:40.179767 kubelet[2429]: I1216 03:31:40.179695 2429 policy_none.go:49] "None policy: Start" Dec 16 03:31:40.179767 kubelet[2429]: I1216 03:31:40.179752 2429 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 03:31:40.179767 kubelet[2429]: I1216 03:31:40.179770 2429 state_mem.go:35] "Initializing new in-memory state store" Dec 16 03:31:40.185449 kubelet[2429]: W1216 03:31:40.185366 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Dec 16 03:31:40.185449 kubelet[2429]: E1216 03:31:40.185448 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:31:40.190322 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 03:31:40.213705 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 03:31:40.217164 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 03:31:40.232075 kubelet[2429]: I1216 03:31:40.231930 2429 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 03:31:40.232560 kubelet[2429]: I1216 03:31:40.232246 2429 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 03:31:40.232560 kubelet[2429]: I1216 03:31:40.232270 2429 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 03:31:40.232651 kubelet[2429]: I1216 03:31:40.232589 2429 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 03:31:40.233740 kubelet[2429]: E1216 03:31:40.233701 2429 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 03:31:40.233839 kubelet[2429]: E1216 03:31:40.233783 2429 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 16 03:31:40.333979 kubelet[2429]: I1216 03:31:40.333916 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 03:31:40.334347 kubelet[2429]: E1216 03:31:40.334316 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Dec 16 03:31:40.370345 kubelet[2429]: W1216 03:31:40.370305 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Dec 16 03:31:40.370481 kubelet[2429]: E1216 03:31:40.370354 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:31:40.523356 kubelet[2429]: W1216 03:31:40.523192 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Dec 16 03:31:40.523356 kubelet[2429]: E1216 03:31:40.523260 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:31:40.535688 kubelet[2429]: I1216 03:31:40.535650 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 03:31:40.536030 kubelet[2429]: E1216 03:31:40.536007 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Dec 16 03:31:40.543480 kubelet[2429]: W1216 03:31:40.543407 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Dec 16 03:31:40.543569 kubelet[2429]: E1216 03:31:40.543489 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:31:40.671234 kubelet[2429]: E1216 03:31:40.671167 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="1.6s" Dec 16 03:31:40.808623 systemd[1]: Created slice 
kubepods-burstable-podeabbbca885a25eb5f2f843525a74f081.slice - libcontainer container kubepods-burstable-podeabbbca885a25eb5f2f843525a74f081.slice. Dec 16 03:31:40.824394 kubelet[2429]: E1216 03:31:40.824361 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:31:40.828763 systemd[1]: Created slice kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice - libcontainer container kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice. Dec 16 03:31:40.831066 kubelet[2429]: E1216 03:31:40.831027 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:31:40.841249 systemd[1]: Created slice kubepods-burstable-pod0a68423804124305a9de061f38780871.slice - libcontainer container kubepods-burstable-pod0a68423804124305a9de061f38780871.slice. Dec 16 03:31:40.843185 kubelet[2429]: E1216 03:31:40.843148 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:31:40.875737 kubelet[2429]: I1216 03:31:40.875669 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 16 03:31:40.875737 kubelet[2429]: I1216 03:31:40.875727 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:31:40.875737 kubelet[2429]: I1216 03:31:40.875749 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:31:40.876291 kubelet[2429]: I1216 03:31:40.875770 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:31:40.876291 kubelet[2429]: I1216 03:31:40.875803 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eabbbca885a25eb5f2f843525a74f081-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"eabbbca885a25eb5f2f843525a74f081\") " pod="kube-system/kube-apiserver-localhost" Dec 16 03:31:40.876291 kubelet[2429]: I1216 03:31:40.875819 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eabbbca885a25eb5f2f843525a74f081-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"eabbbca885a25eb5f2f843525a74f081\") " 
pod="kube-system/kube-apiserver-localhost" Dec 16 03:31:40.876291 kubelet[2429]: I1216 03:31:40.875837 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eabbbca885a25eb5f2f843525a74f081-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"eabbbca885a25eb5f2f843525a74f081\") " pod="kube-system/kube-apiserver-localhost" Dec 16 03:31:40.876291 kubelet[2429]: I1216 03:31:40.875871 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:31:40.876399 kubelet[2429]: I1216 03:31:40.875900 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:31:40.938069 kubelet[2429]: I1216 03:31:40.938029 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 03:31:40.938407 kubelet[2429]: E1216 03:31:40.938366 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Dec 16 03:31:41.125538 kubelet[2429]: E1216 03:31:41.125364 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:41.126316 containerd[1605]: time="2025-12-16T03:31:41.126263307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:eabbbca885a25eb5f2f843525a74f081,Namespace:kube-system,Attempt:0,}" Dec 16 03:31:41.131533 kubelet[2429]: E1216 03:31:41.131494 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:41.131990 containerd[1605]: time="2025-12-16T03:31:41.131968417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,}" Dec 16 03:31:41.144560 kubelet[2429]: E1216 03:31:41.144502 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:41.145249 containerd[1605]: time="2025-12-16T03:31:41.145197364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,}" Dec 16 03:31:41.198337 containerd[1605]: time="2025-12-16T03:31:41.198274535Z" level=info msg="connecting to shim 651e67a694e93a713b7f2c492a2ff5b07f29f7908b1eac8033b11c48ced8b5c4" address="unix:///run/containerd/s/340b388385ca3a9a16bdcae5451c388de0f1f5cf15f18ccaba3202e7756fb886" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:31:41.220519 kubelet[2429]: E1216 03:31:41.220452 2429 certificate_manager.go:562] "Unhandled Error" 
err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:31:41.226173 systemd[1]: Started cri-containerd-651e67a694e93a713b7f2c492a2ff5b07f29f7908b1eac8033b11c48ced8b5c4.scope - libcontainer container 651e67a694e93a713b7f2c492a2ff5b07f29f7908b1eac8033b11c48ced8b5c4. Dec 16 03:31:41.246250 containerd[1605]: time="2025-12-16T03:31:41.246160633Z" level=info msg="connecting to shim b8f4cc0140791ebb1d3d1e06eab0def035657df60b94008c7ee39adb58267022" address="unix:///run/containerd/s/8b3c4a616779417caaa7db02c440ebfb12122d5b9017654eb5bdf343e85cc6fa" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:31:41.256680 containerd[1605]: time="2025-12-16T03:31:41.256628819Z" level=info msg="connecting to shim 10a2ca2da07c234ba3aaa764b3e89e37bf35ac222793ca1973743e3c25ba99dc" address="unix:///run/containerd/s/e93d150e8651983d001761db2bf97520cc340af281bdca82ca96035d1956a720" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:31:41.276128 systemd[1]: Started cri-containerd-b8f4cc0140791ebb1d3d1e06eab0def035657df60b94008c7ee39adb58267022.scope - libcontainer container b8f4cc0140791ebb1d3d1e06eab0def035657df60b94008c7ee39adb58267022. Dec 16 03:31:41.281097 systemd[1]: Started cri-containerd-10a2ca2da07c234ba3aaa764b3e89e37bf35ac222793ca1973743e3c25ba99dc.scope - libcontainer container 10a2ca2da07c234ba3aaa764b3e89e37bf35ac222793ca1973743e3c25ba99dc. Dec 16 03:31:41.307254 containerd[1605]: time="2025-12-16T03:31:41.307203473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:eabbbca885a25eb5f2f843525a74f081,Namespace:kube-system,Attempt:0,} returns sandbox id \"651e67a694e93a713b7f2c492a2ff5b07f29f7908b1eac8033b11c48ced8b5c4\"" Dec 16 03:31:41.308629 kubelet[2429]: E1216 03:31:41.308539 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:41.311036 containerd[1605]: time="2025-12-16T03:31:41.310999647Z" level=info msg="CreateContainer within sandbox \"651e67a694e93a713b7f2c492a2ff5b07f29f7908b1eac8033b11c48ced8b5c4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 03:31:41.321965 containerd[1605]: time="2025-12-16T03:31:41.321893694Z" level=info msg="Container 3255b5108b5ee1a4400ac7bdec1b3986d6b6239bca588608697c2e241f1d4b7f: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:31:41.334024 containerd[1605]: time="2025-12-16T03:31:41.333925023Z" level=info msg="CreateContainer within sandbox \"651e67a694e93a713b7f2c492a2ff5b07f29f7908b1eac8033b11c48ced8b5c4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3255b5108b5ee1a4400ac7bdec1b3986d6b6239bca588608697c2e241f1d4b7f\"" Dec 16 03:31:41.336127 containerd[1605]: time="2025-12-16T03:31:41.336087900Z" level=info msg="StartContainer for \"3255b5108b5ee1a4400ac7bdec1b3986d6b6239bca588608697c2e241f1d4b7f\"" Dec 16 03:31:41.337551 containerd[1605]: time="2025-12-16T03:31:41.337509768Z" level=info msg="connecting to shim 3255b5108b5ee1a4400ac7bdec1b3986d6b6239bca588608697c2e241f1d4b7f" address="unix:///run/containerd/s/340b388385ca3a9a16bdcae5451c388de0f1f5cf15f18ccaba3202e7756fb886" protocol=ttrpc version=3 Dec 16 03:31:41.350652 containerd[1605]: 
time="2025-12-16T03:31:41.350598611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,} returns sandbox id \"10a2ca2da07c234ba3aaa764b3e89e37bf35ac222793ca1973743e3c25ba99dc\"" Dec 16 03:31:41.352062 kubelet[2429]: E1216 03:31:41.351932 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:41.352142 containerd[1605]: time="2025-12-16T03:31:41.352061708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8f4cc0140791ebb1d3d1e06eab0def035657df60b94008c7ee39adb58267022\"" Dec 16 03:31:41.352606 kubelet[2429]: E1216 03:31:41.352565 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:41.354866 containerd[1605]: time="2025-12-16T03:31:41.354826077Z" level=info msg="CreateContainer within sandbox \"b8f4cc0140791ebb1d3d1e06eab0def035657df60b94008c7ee39adb58267022\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 03:31:41.355166 containerd[1605]: time="2025-12-16T03:31:41.355120421Z" level=info msg="CreateContainer within sandbox \"10a2ca2da07c234ba3aaa764b3e89e37bf35ac222793ca1973743e3c25ba99dc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 03:31:41.367076 containerd[1605]: time="2025-12-16T03:31:41.367018971Z" level=info msg="Container 91aa4645c522a81e8c4143cf4cb8999722a3e11885e070522b6e44a6e803a380: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:31:41.378162 containerd[1605]: time="2025-12-16T03:31:41.377305103Z" level=info msg="Container ac447d38fdc7f2b0e903f02e503b9d42ebc9ed2e1645991086d474886caee1e7: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:31:41.381184 systemd[1]: Started cri-containerd-3255b5108b5ee1a4400ac7bdec1b3986d6b6239bca588608697c2e241f1d4b7f.scope - libcontainer container 3255b5108b5ee1a4400ac7bdec1b3986d6b6239bca588608697c2e241f1d4b7f. 
Dec 16 03:31:41.383705 containerd[1605]: time="2025-12-16T03:31:41.383664765Z" level=info msg="CreateContainer within sandbox \"10a2ca2da07c234ba3aaa764b3e89e37bf35ac222793ca1973743e3c25ba99dc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"91aa4645c522a81e8c4143cf4cb8999722a3e11885e070522b6e44a6e803a380\"" Dec 16 03:31:41.385490 containerd[1605]: time="2025-12-16T03:31:41.384688325Z" level=info msg="StartContainer for \"91aa4645c522a81e8c4143cf4cb8999722a3e11885e070522b6e44a6e803a380\"" Dec 16 03:31:41.386213 containerd[1605]: time="2025-12-16T03:31:41.386185746Z" level=info msg="connecting to shim 91aa4645c522a81e8c4143cf4cb8999722a3e11885e070522b6e44a6e803a380" address="unix:///run/containerd/s/e93d150e8651983d001761db2bf97520cc340af281bdca82ca96035d1956a720" protocol=ttrpc version=3 Dec 16 03:31:41.388361 containerd[1605]: time="2025-12-16T03:31:41.388317994Z" level=info msg="CreateContainer within sandbox \"b8f4cc0140791ebb1d3d1e06eab0def035657df60b94008c7ee39adb58267022\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ac447d38fdc7f2b0e903f02e503b9d42ebc9ed2e1645991086d474886caee1e7\"" Dec 16 03:31:41.388923 containerd[1605]: time="2025-12-16T03:31:41.388901954Z" level=info msg="StartContainer for \"ac447d38fdc7f2b0e903f02e503b9d42ebc9ed2e1645991086d474886caee1e7\"" Dec 16 03:31:41.391479 containerd[1605]: time="2025-12-16T03:31:41.391449795Z" level=info msg="connecting to shim ac447d38fdc7f2b0e903f02e503b9d42ebc9ed2e1645991086d474886caee1e7" address="unix:///run/containerd/s/8b3c4a616779417caaa7db02c440ebfb12122d5b9017654eb5bdf343e85cc6fa" protocol=ttrpc version=3 Dec 16 03:31:41.412138 systemd[1]: Started cri-containerd-91aa4645c522a81e8c4143cf4cb8999722a3e11885e070522b6e44a6e803a380.scope - libcontainer container 91aa4645c522a81e8c4143cf4cb8999722a3e11885e070522b6e44a6e803a380. Dec 16 03:31:41.416053 systemd[1]: Started cri-containerd-ac447d38fdc7f2b0e903f02e503b9d42ebc9ed2e1645991086d474886caee1e7.scope - libcontainer container ac447d38fdc7f2b0e903f02e503b9d42ebc9ed2e1645991086d474886caee1e7. 
Dec 16 03:31:41.454276 containerd[1605]: time="2025-12-16T03:31:41.454186457Z" level=info msg="StartContainer for \"3255b5108b5ee1a4400ac7bdec1b3986d6b6239bca588608697c2e241f1d4b7f\" returns successfully" Dec 16 03:31:41.469552 containerd[1605]: time="2025-12-16T03:31:41.469508016Z" level=info msg="StartContainer for \"91aa4645c522a81e8c4143cf4cb8999722a3e11885e070522b6e44a6e803a380\" returns successfully" Dec 16 03:31:41.484821 containerd[1605]: time="2025-12-16T03:31:41.484759604Z" level=info msg="StartContainer for \"ac447d38fdc7f2b0e903f02e503b9d42ebc9ed2e1645991086d474886caee1e7\" returns successfully" Dec 16 03:31:41.740705 kubelet[2429]: I1216 03:31:41.740573 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 03:31:42.304989 kubelet[2429]: E1216 03:31:42.304927 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:31:42.305934 kubelet[2429]: E1216 03:31:42.305615 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:42.311873 kubelet[2429]: E1216 03:31:42.311846 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:31:42.312279 kubelet[2429]: E1216 03:31:42.312219 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:42.314732 kubelet[2429]: E1216 03:31:42.314690 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:31:42.315006 kubelet[2429]: E1216 03:31:42.314867 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:42.801481 kubelet[2429]: E1216 03:31:42.801433 2429 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 16 03:31:42.869111 kubelet[2429]: I1216 03:31:42.869056 2429 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 03:31:42.875340 kubelet[2429]: I1216 03:31:42.875297 2429 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 03:31:42.881287 kubelet[2429]: E1216 03:31:42.881238 2429 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 16 03:31:42.881287 kubelet[2429]: I1216 03:31:42.881273 2429 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 03:31:42.882971 kubelet[2429]: E1216 03:31:42.882858 2429 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 16 03:31:42.882971 kubelet[2429]: I1216 03:31:42.882889 2429 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 03:31:42.884622 kubelet[2429]: E1216 03:31:42.884586 2429 
kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 16 03:31:43.234632 kubelet[2429]: I1216 03:31:43.234492 2429 apiserver.go:52] "Watching apiserver" Dec 16 03:31:43.268559 kubelet[2429]: I1216 03:31:43.268478 2429 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 03:31:43.314085 kubelet[2429]: I1216 03:31:43.314054 2429 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 03:31:43.314472 kubelet[2429]: I1216 03:31:43.314212 2429 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 03:31:43.316369 kubelet[2429]: E1216 03:31:43.316333 2429 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 16 03:31:43.316431 kubelet[2429]: E1216 03:31:43.316338 2429 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 16 03:31:43.316517 kubelet[2429]: E1216 03:31:43.316495 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:43.316575 kubelet[2429]: E1216 03:31:43.316560 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:44.314708 kubelet[2429]: I1216 03:31:44.314677 2429 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 03:31:44.320603 kubelet[2429]: E1216 03:31:44.320549 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:44.779223 systemd[1]: Reload requested from client PID 2707 ('systemctl') (unit session-6.scope)... Dec 16 03:31:44.779240 systemd[1]: Reloading... Dec 16 03:31:44.848110 zram_generator::config[2749]: No configuration found. Dec 16 03:31:45.182352 systemd[1]: Reloading finished in 402 ms. Dec 16 03:31:45.207507 kubelet[2429]: I1216 03:31:45.207409 2429 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 03:31:45.207529 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:31:45.222458 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 03:31:45.222907 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:31:45.222991 systemd[1]: kubelet.service: Consumed 903ms CPU time, 131.2M memory peak. Dec 16 03:31:45.225340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:31:45.447038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 03:31:45.465388 (kubelet)[2798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 03:31:45.509441 kubelet[2798]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:31:45.509441 kubelet[2798]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 03:31:45.509441 kubelet[2798]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:31:45.509845 kubelet[2798]: I1216 03:31:45.509514 2798 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 03:31:45.517306 kubelet[2798]: I1216 03:31:45.517260 2798 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 03:31:45.517306 kubelet[2798]: I1216 03:31:45.517294 2798 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 03:31:45.517663 kubelet[2798]: I1216 03:31:45.517631 2798 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 03:31:45.519070 kubelet[2798]: I1216 03:31:45.519043 2798 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 16 03:31:45.522066 kubelet[2798]: I1216 03:31:45.522008 2798 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 03:31:45.532754 kubelet[2798]: I1216 03:31:45.532732 2798 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 03:31:45.538637 kubelet[2798]: I1216 03:31:45.538584 2798 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 03:31:45.538970 kubelet[2798]: I1216 03:31:45.538892 2798 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 03:31:45.539158 kubelet[2798]: I1216 03:31:45.538932 2798 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 03:31:45.539262 kubelet[2798]: I1216 03:31:45.539159 2798 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 03:31:45.539262 kubelet[2798]: I1216 03:31:45.539169 2798 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 03:31:45.539262 kubelet[2798]: I1216 03:31:45.539229 2798 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:31:45.539430 kubelet[2798]: I1216 03:31:45.539411 2798 kubelet.go:446] "Attempting to sync node with API server" Dec 16 03:31:45.539468 kubelet[2798]: I1216 03:31:45.539444 2798 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 03:31:45.539508 kubelet[2798]: I1216 03:31:45.539469 2798 kubelet.go:352] "Adding apiserver pod source" Dec 16 03:31:45.539508 kubelet[2798]: I1216 03:31:45.539482 2798 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 03:31:45.540396 kubelet[2798]: I1216 03:31:45.540353 2798 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 03:31:45.540854 kubelet[2798]: I1216 03:31:45.540832 2798 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 03:31:45.541541 kubelet[2798]: I1216 03:31:45.541501 2798 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 03:31:45.541541 kubelet[2798]: I1216 03:31:45.541541 2798 server.go:1287] "Started kubelet" Dec 16 03:31:45.541918 kubelet[2798]: I1216 03:31:45.541881 2798 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 03:31:45.543224 kubelet[2798]: I1216 03:31:45.543135 2798 server.go:479] "Adding debug 
handlers to kubelet server" Dec 16 03:31:45.543518 kubelet[2798]: I1216 03:31:45.543436 2798 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 03:31:45.545995 kubelet[2798]: I1216 03:31:45.543880 2798 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 03:31:45.549980 kubelet[2798]: I1216 03:31:45.544599 2798 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 03:31:45.549980 kubelet[2798]: I1216 03:31:45.548769 2798 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 03:31:45.551478 kubelet[2798]: I1216 03:31:45.551428 2798 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 03:31:45.551716 kubelet[2798]: E1216 03:31:45.551675 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:31:45.552542 kubelet[2798]: I1216 03:31:45.552518 2798 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 03:31:45.552834 kubelet[2798]: I1216 03:31:45.552811 2798 reconciler.go:26] "Reconciler: start to sync state" Dec 16 03:31:45.555676 kubelet[2798]: I1216 03:31:45.555644 2798 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 03:31:45.560119 kubelet[2798]: I1216 03:31:45.560068 2798 factory.go:221] Registration of the containerd container factory successfully Dec 16 03:31:45.560260 kubelet[2798]: I1216 03:31:45.560141 2798 factory.go:221] Registration of the systemd container factory successfully Dec 16 03:31:45.572108 kubelet[2798]: I1216 03:31:45.571909 2798 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 03:31:45.574751 kubelet[2798]: I1216 03:31:45.574712 2798 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 03:31:45.574830 kubelet[2798]: I1216 03:31:45.574760 2798 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 03:31:45.574830 kubelet[2798]: I1216 03:31:45.574782 2798 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 03:31:45.574830 kubelet[2798]: I1216 03:31:45.574792 2798 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 03:31:45.574962 kubelet[2798]: E1216 03:31:45.574910 2798 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 03:31:45.612061 kubelet[2798]: I1216 03:31:45.612027 2798 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 03:31:45.612061 kubelet[2798]: I1216 03:31:45.612045 2798 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 03:31:45.612061 kubelet[2798]: I1216 03:31:45.612067 2798 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:31:45.612276 kubelet[2798]: I1216 03:31:45.612228 2798 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 03:31:45.612276 kubelet[2798]: I1216 03:31:45.612239 2798 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 03:31:45.612276 kubelet[2798]: I1216 03:31:45.612256 2798 policy_none.go:49] "None policy: Start" Dec 16 03:31:45.612276 kubelet[2798]: I1216 03:31:45.612267 2798 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 03:31:45.612276 kubelet[2798]: I1216 03:31:45.612277 2798 state_mem.go:35] "Initializing new in-memory state store" Dec 16 03:31:45.612416 kubelet[2798]: I1216 03:31:45.612372 2798 state_mem.go:75] "Updated machine memory state" Dec 16 03:31:45.618065 kubelet[2798]: I1216 03:31:45.618036 2798 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 03:31:45.618695 kubelet[2798]: I1216 03:31:45.618247 2798 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 03:31:45.618695 kubelet[2798]: I1216 03:31:45.618260 2798 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 03:31:45.618695 kubelet[2798]: I1216 03:31:45.618476 2798 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 03:31:45.621325 kubelet[2798]: E1216 03:31:45.621268 2798 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 03:31:45.675615 kubelet[2798]: I1216 03:31:45.675576 2798 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 03:31:45.675778 kubelet[2798]: I1216 03:31:45.675668 2798 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 03:31:45.676055 kubelet[2798]: I1216 03:31:45.676001 2798 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 03:31:45.691496 kubelet[2798]: E1216 03:31:45.691432 2798 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 16 03:31:45.726127 kubelet[2798]: I1216 03:31:45.726016 2798 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 03:31:45.753668 kubelet[2798]: I1216 03:31:45.753619 2798 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 16 03:31:45.753824 kubelet[2798]: I1216 03:31:45.753749 2798 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 03:31:45.754146 kubelet[2798]: I1216 03:31:45.754101 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:31:45.754211 kubelet[2798]: I1216 03:31:45.754172 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:31:45.754211 kubelet[2798]: I1216 03:31:45.754195 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 16 03:31:45.754353 kubelet[2798]: I1216 03:31:45.754211 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eabbbca885a25eb5f2f843525a74f081-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"eabbbca885a25eb5f2f843525a74f081\") " pod="kube-system/kube-apiserver-localhost" Dec 16 03:31:45.754353 kubelet[2798]: I1216 03:31:45.754250 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eabbbca885a25eb5f2f843525a74f081-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"eabbbca885a25eb5f2f843525a74f081\") " pod="kube-system/kube-apiserver-localhost" Dec 16 03:31:45.754353 kubelet[2798]: I1216 03:31:45.754266 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eabbbca885a25eb5f2f843525a74f081-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"eabbbca885a25eb5f2f843525a74f081\") " 
pod="kube-system/kube-apiserver-localhost" Dec 16 03:31:45.754353 kubelet[2798]: I1216 03:31:45.754279 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:31:45.754353 kubelet[2798]: I1216 03:31:45.754293 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:31:45.754579 kubelet[2798]: I1216 03:31:45.754330 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:31:45.981702 kubelet[2798]: E1216 03:31:45.981580 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:45.987886 kubelet[2798]: E1216 03:31:45.987855 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:45.991839 kubelet[2798]: E1216 03:31:45.991813 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:46.540441 kubelet[2798]: I1216 03:31:46.540391 2798 apiserver.go:52] "Watching apiserver" Dec 16 03:31:46.553168 kubelet[2798]: I1216 03:31:46.553117 2798 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 03:31:46.591495 kubelet[2798]: I1216 03:31:46.591448 2798 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 03:31:46.592106 kubelet[2798]: I1216 03:31:46.591645 2798 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 03:31:46.592448 kubelet[2798]: E1216 03:31:46.592428 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:46.790288 kubelet[2798]: E1216 03:31:46.790149 2798 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 03:31:46.790288 kubelet[2798]: E1216 03:31:46.790294 2798 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 16 03:31:46.790595 kubelet[2798]: E1216 03:31:46.790415 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:46.790595 kubelet[2798]: E1216 03:31:46.790565 2798 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:47.228369 kubelet[2798]: I1216 03:31:47.227809 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.227736456 podStartE2EDuration="2.227736456s" podCreationTimestamp="2025-12-16 03:31:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:31:47.226747004 +0000 UTC m=+1.755433777" watchObservedRunningTime="2025-12-16 03:31:47.227736456 +0000 UTC m=+1.756423219" Dec 16 03:31:47.236424 kubelet[2798]: I1216 03:31:47.236347 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.2363282 podStartE2EDuration="2.2363282s" podCreationTimestamp="2025-12-16 03:31:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:31:47.236181504 +0000 UTC m=+1.764868267" watchObservedRunningTime="2025-12-16 03:31:47.2363282 +0000 UTC m=+1.765014963" Dec 16 03:31:47.245652 kubelet[2798]: I1216 03:31:47.245590 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.244751237 podStartE2EDuration="3.244751237s" podCreationTimestamp="2025-12-16 03:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:31:47.244574906 +0000 UTC m=+1.773261679" watchObservedRunningTime="2025-12-16 03:31:47.244751237 +0000 UTC m=+1.773438000" Dec 16 03:31:47.379727 sudo[1778]: pam_unix(sudo:session): session closed for user root Dec 16 03:31:47.382437 sshd[1777]: Connection closed by 10.0.0.1 port 39002 Dec 16 03:31:47.382758 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Dec 16 03:31:47.387883 systemd[1]: sshd@4-10.0.0.144:22-10.0.0.1:39002.service: Deactivated successfully. Dec 16 03:31:47.390306 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 03:31:47.390598 systemd[1]: session-6.scope: Consumed 4.077s CPU time, 219.4M memory peak. Dec 16 03:31:47.391963 systemd-logind[1591]: Session 6 logged out. Waiting for processes to exit. Dec 16 03:31:47.393188 systemd-logind[1591]: Removed session 6. 
Dec 16 03:31:47.593699 kubelet[2798]: E1216 03:31:47.593627 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:47.593699 kubelet[2798]: E1216 03:31:47.593681 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:49.747184 kubelet[2798]: E1216 03:31:49.747145 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:49.771112 kubelet[2798]: I1216 03:31:49.771083 2798 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 03:31:49.771488 containerd[1605]: time="2025-12-16T03:31:49.771448712Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 03:31:49.771865 kubelet[2798]: I1216 03:31:49.771655 2798 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 03:31:50.438016 systemd[1]: Created slice kubepods-besteffort-pod78c9d496_c08f_49ff_84d0_f0294588fc5c.slice - libcontainer container kubepods-besteffort-pod78c9d496_c08f_49ff_84d0_f0294588fc5c.slice. Dec 16 03:31:50.458309 systemd[1]: Created slice kubepods-burstable-pod89d1e93a_3bb3_4ae4_a5d5_e30746a2b348.slice - libcontainer container kubepods-burstable-pod89d1e93a_3bb3_4ae4_a5d5_e30746a2b348.slice. Dec 16 03:31:50.485538 kubelet[2798]: I1216 03:31:50.485472 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/89d1e93a-3bb3-4ae4-a5d5-e30746a2b348-cni-plugin\") pod \"kube-flannel-ds-dgz4q\" (UID: \"89d1e93a-3bb3-4ae4-a5d5-e30746a2b348\") " pod="kube-flannel/kube-flannel-ds-dgz4q" Dec 16 03:31:50.485538 kubelet[2798]: I1216 03:31:50.485526 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89d1e93a-3bb3-4ae4-a5d5-e30746a2b348-xtables-lock\") pod \"kube-flannel-ds-dgz4q\" (UID: \"89d1e93a-3bb3-4ae4-a5d5-e30746a2b348\") " pod="kube-flannel/kube-flannel-ds-dgz4q" Dec 16 03:31:50.485765 kubelet[2798]: I1216 03:31:50.485558 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/78c9d496-c08f-49ff-84d0-f0294588fc5c-kube-proxy\") pod \"kube-proxy-pz9gj\" (UID: \"78c9d496-c08f-49ff-84d0-f0294588fc5c\") " pod="kube-system/kube-proxy-pz9gj" Dec 16 03:31:50.485765 kubelet[2798]: I1216 03:31:50.485584 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78c9d496-c08f-49ff-84d0-f0294588fc5c-lib-modules\") pod \"kube-proxy-pz9gj\" (UID: \"78c9d496-c08f-49ff-84d0-f0294588fc5c\") " pod="kube-system/kube-proxy-pz9gj" Dec 16 03:31:50.485765 kubelet[2798]: I1216 03:31:50.485622 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/89d1e93a-3bb3-4ae4-a5d5-e30746a2b348-run\") pod \"kube-flannel-ds-dgz4q\" (UID: \"89d1e93a-3bb3-4ae4-a5d5-e30746a2b348\") " pod="kube-flannel/kube-flannel-ds-dgz4q" Dec 16 03:31:50.485765 
kubelet[2798]: I1216 03:31:50.485646 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c65kv\" (UniqueName: \"kubernetes.io/projected/89d1e93a-3bb3-4ae4-a5d5-e30746a2b348-kube-api-access-c65kv\") pod \"kube-flannel-ds-dgz4q\" (UID: \"89d1e93a-3bb3-4ae4-a5d5-e30746a2b348\") " pod="kube-flannel/kube-flannel-ds-dgz4q" Dec 16 03:31:50.486031 kubelet[2798]: I1216 03:31:50.485768 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/89d1e93a-3bb3-4ae4-a5d5-e30746a2b348-cni\") pod \"kube-flannel-ds-dgz4q\" (UID: \"89d1e93a-3bb3-4ae4-a5d5-e30746a2b348\") " pod="kube-flannel/kube-flannel-ds-dgz4q" Dec 16 03:31:50.486031 kubelet[2798]: I1216 03:31:50.485835 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78c9d496-c08f-49ff-84d0-f0294588fc5c-xtables-lock\") pod \"kube-proxy-pz9gj\" (UID: \"78c9d496-c08f-49ff-84d0-f0294588fc5c\") " pod="kube-system/kube-proxy-pz9gj" Dec 16 03:31:50.486031 kubelet[2798]: I1216 03:31:50.485871 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjfr6\" (UniqueName: \"kubernetes.io/projected/78c9d496-c08f-49ff-84d0-f0294588fc5c-kube-api-access-bjfr6\") pod \"kube-proxy-pz9gj\" (UID: \"78c9d496-c08f-49ff-84d0-f0294588fc5c\") " pod="kube-system/kube-proxy-pz9gj" Dec 16 03:31:50.486031 kubelet[2798]: I1216 03:31:50.485898 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/89d1e93a-3bb3-4ae4-a5d5-e30746a2b348-flannel-cfg\") pod \"kube-flannel-ds-dgz4q\" (UID: \"89d1e93a-3bb3-4ae4-a5d5-e30746a2b348\") " pod="kube-flannel/kube-flannel-ds-dgz4q" Dec 16 03:31:50.509574 kubelet[2798]: E1216 03:31:50.508933 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:50.753678 kubelet[2798]: E1216 03:31:50.753501 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:50.754368 containerd[1605]: time="2025-12-16T03:31:50.754331314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pz9gj,Uid:78c9d496-c08f-49ff-84d0-f0294588fc5c,Namespace:kube-system,Attempt:0,}" Dec 16 03:31:50.762869 kubelet[2798]: E1216 03:31:50.762840 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:50.763332 containerd[1605]: time="2025-12-16T03:31:50.763218073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-dgz4q,Uid:89d1e93a-3bb3-4ae4-a5d5-e30746a2b348,Namespace:kube-flannel,Attempt:0,}" Dec 16 03:31:50.832510 containerd[1605]: time="2025-12-16T03:31:50.831931597Z" level=info msg="connecting to shim f9a0b00ed3353b4433b11a6fe9608e7b76e3ace3a02c185ea78a1be38d46da9d" address="unix:///run/containerd/s/eed0db14ba1e8279c658dc8fdab5893226510274e6db5f2c704b647c8c43bdc9" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:31:50.833095 containerd[1605]: time="2025-12-16T03:31:50.833071381Z" level=info msg="connecting to shim 
9355eeb76356a35fda957f1632c2f1a0bee2808d1e5aead7349be552c760c1d2" address="unix:///run/containerd/s/6afe3c59840cd8bdb84cdfb1d627439032987f03f582d04b991594b9f076959e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:31:50.894160 systemd[1]: Started cri-containerd-9355eeb76356a35fda957f1632c2f1a0bee2808d1e5aead7349be552c760c1d2.scope - libcontainer container 9355eeb76356a35fda957f1632c2f1a0bee2808d1e5aead7349be552c760c1d2. Dec 16 03:31:50.896001 systemd[1]: Started cri-containerd-f9a0b00ed3353b4433b11a6fe9608e7b76e3ace3a02c185ea78a1be38d46da9d.scope - libcontainer container f9a0b00ed3353b4433b11a6fe9608e7b76e3ace3a02c185ea78a1be38d46da9d. Dec 16 03:31:50.927873 containerd[1605]: time="2025-12-16T03:31:50.927833124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pz9gj,Uid:78c9d496-c08f-49ff-84d0-f0294588fc5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9355eeb76356a35fda957f1632c2f1a0bee2808d1e5aead7349be552c760c1d2\"" Dec 16 03:31:50.928790 kubelet[2798]: E1216 03:31:50.928764 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:50.931730 containerd[1605]: time="2025-12-16T03:31:50.931667173Z" level=info msg="CreateContainer within sandbox \"9355eeb76356a35fda957f1632c2f1a0bee2808d1e5aead7349be552c760c1d2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 03:31:50.945694 containerd[1605]: time="2025-12-16T03:31:50.945634356Z" level=info msg="Container e8f6732a49fb1b8e37ff09d3b61eb97699e05a672aac47781e207f2c8906195b: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:31:50.953257 containerd[1605]: time="2025-12-16T03:31:50.953220802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-dgz4q,Uid:89d1e93a-3bb3-4ae4-a5d5-e30746a2b348,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"f9a0b00ed3353b4433b11a6fe9608e7b76e3ace3a02c185ea78a1be38d46da9d\"" Dec 16 03:31:50.953929 kubelet[2798]: E1216 03:31:50.953889 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:50.954933 containerd[1605]: time="2025-12-16T03:31:50.954886603Z" level=info msg="CreateContainer within sandbox \"9355eeb76356a35fda957f1632c2f1a0bee2808d1e5aead7349be552c760c1d2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e8f6732a49fb1b8e37ff09d3b61eb97699e05a672aac47781e207f2c8906195b\"" Dec 16 03:31:50.955469 containerd[1605]: time="2025-12-16T03:31:50.955423954Z" level=info msg="StartContainer for \"e8f6732a49fb1b8e37ff09d3b61eb97699e05a672aac47781e207f2c8906195b\"" Dec 16 03:31:50.956056 containerd[1605]: time="2025-12-16T03:31:50.956019284Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 16 03:31:50.956840 containerd[1605]: time="2025-12-16T03:31:50.956815621Z" level=info msg="connecting to shim e8f6732a49fb1b8e37ff09d3b61eb97699e05a672aac47781e207f2c8906195b" address="unix:///run/containerd/s/6afe3c59840cd8bdb84cdfb1d627439032987f03f582d04b991594b9f076959e" protocol=ttrpc version=3 Dec 16 03:31:50.987167 systemd[1]: Started cri-containerd-e8f6732a49fb1b8e37ff09d3b61eb97699e05a672aac47781e207f2c8906195b.scope - libcontainer container e8f6732a49fb1b8e37ff09d3b61eb97699e05a672aac47781e207f2c8906195b. 
Dec 16 03:31:51.075183 containerd[1605]: time="2025-12-16T03:31:51.075132633Z" level=info msg="StartContainer for \"e8f6732a49fb1b8e37ff09d3b61eb97699e05a672aac47781e207f2c8906195b\" returns successfully" Dec 16 03:31:51.602763 kubelet[2798]: E1216 03:31:51.602733 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:51.613262 kubelet[2798]: I1216 03:31:51.613187 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pz9gj" podStartSLOduration=1.613168471 podStartE2EDuration="1.613168471s" podCreationTimestamp="2025-12-16 03:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:31:51.612352488 +0000 UTC m=+6.141039271" watchObservedRunningTime="2025-12-16 03:31:51.613168471 +0000 UTC m=+6.141855234" Dec 16 03:31:52.150117 kubelet[2798]: E1216 03:31:52.150058 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:52.606501 kubelet[2798]: E1216 03:31:52.606465 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:52.656535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1820389864.mount: Deactivated successfully. Dec 16 03:31:52.701252 containerd[1605]: time="2025-12-16T03:31:52.701192436Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:52.702071 containerd[1605]: time="2025-12-16T03:31:52.702043886Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=0" Dec 16 03:31:52.703373 containerd[1605]: time="2025-12-16T03:31:52.703327499Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:52.705669 containerd[1605]: time="2025-12-16T03:31:52.705634886Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:52.706459 containerd[1605]: time="2025-12-16T03:31:52.706423299Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.750370011s" Dec 16 03:31:52.706510 containerd[1605]: time="2025-12-16T03:31:52.706460629Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Dec 16 03:31:52.708357 containerd[1605]: time="2025-12-16T03:31:52.708311428Z" level=info msg="CreateContainer within sandbox \"f9a0b00ed3353b4433b11a6fe9608e7b76e3ace3a02c185ea78a1be38d46da9d\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 16 
03:31:52.715617 containerd[1605]: time="2025-12-16T03:31:52.715557148Z" level=info msg="Container 29b4ab98c24b21b7fb656fabbbc13269518f5cbae5f88b7a8ed581ac43928393: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:31:52.722492 containerd[1605]: time="2025-12-16T03:31:52.722443302Z" level=info msg="CreateContainer within sandbox \"f9a0b00ed3353b4433b11a6fe9608e7b76e3ace3a02c185ea78a1be38d46da9d\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"29b4ab98c24b21b7fb656fabbbc13269518f5cbae5f88b7a8ed581ac43928393\"" Dec 16 03:31:52.724050 containerd[1605]: time="2025-12-16T03:31:52.723003605Z" level=info msg="StartContainer for \"29b4ab98c24b21b7fb656fabbbc13269518f5cbae5f88b7a8ed581ac43928393\"" Dec 16 03:31:52.724050 containerd[1605]: time="2025-12-16T03:31:52.723971885Z" level=info msg="connecting to shim 29b4ab98c24b21b7fb656fabbbc13269518f5cbae5f88b7a8ed581ac43928393" address="unix:///run/containerd/s/eed0db14ba1e8279c658dc8fdab5893226510274e6db5f2c704b647c8c43bdc9" protocol=ttrpc version=3 Dec 16 03:31:52.753243 systemd[1]: Started cri-containerd-29b4ab98c24b21b7fb656fabbbc13269518f5cbae5f88b7a8ed581ac43928393.scope - libcontainer container 29b4ab98c24b21b7fb656fabbbc13269518f5cbae5f88b7a8ed581ac43928393. Dec 16 03:31:52.789301 systemd[1]: cri-containerd-29b4ab98c24b21b7fb656fabbbc13269518f5cbae5f88b7a8ed581ac43928393.scope: Deactivated successfully. Dec 16 03:31:52.790831 containerd[1605]: time="2025-12-16T03:31:52.790798982Z" level=info msg="StartContainer for \"29b4ab98c24b21b7fb656fabbbc13269518f5cbae5f88b7a8ed581ac43928393\" returns successfully" Dec 16 03:31:52.791757 containerd[1605]: time="2025-12-16T03:31:52.791720464Z" level=info msg="received container exit event container_id:\"29b4ab98c24b21b7fb656fabbbc13269518f5cbae5f88b7a8ed581ac43928393\" id:\"29b4ab98c24b21b7fb656fabbbc13269518f5cbae5f88b7a8ed581ac43928393\" pid:3144 exited_at:{seconds:1765855912 nanos:789937592}" Dec 16 03:31:52.814920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29b4ab98c24b21b7fb656fabbbc13269518f5cbae5f88b7a8ed581ac43928393-rootfs.mount: Deactivated successfully. Dec 16 03:31:53.608797 kubelet[2798]: E1216 03:31:53.608755 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:53.609364 containerd[1605]: time="2025-12-16T03:31:53.609301148Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 16 03:31:55.940635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1094828979.mount: Deactivated successfully. 
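Both image pulls in this stretch (flannel-cni-plugin:v1.1.2, already reported as "Pulled image ... in 1.750370011s", and flannel:v0.22.0, whose PullImage request starts at the end of this span) follow the same containerd path: resolve the registry reference, fetch and unpack the layers, then report the digest and size back to the CRI caller. A hedged sketch of the equivalent pull through the containerd Go client, reusing the flannel reference from the log:

```go
// Hedged sketch: pull and unpack an image through the containerd client,
// roughly the operation behind the PullImage / "Pulled image ... in Ns" lines.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Same reference the kubelet asked for in the log above.
	image, err := client.Pull(ctx, "docker.io/flannel/flannel:v0.22.0",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", image.Name(), "digest", image.Target().Digest)
}
```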
Dec 16 03:31:56.406004 containerd[1605]: time="2025-12-16T03:31:56.405931605Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:56.406761 containerd[1605]: time="2025-12-16T03:31:56.406714757Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=21654783" Dec 16 03:31:56.407861 containerd[1605]: time="2025-12-16T03:31:56.407821115Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:56.410491 containerd[1605]: time="2025-12-16T03:31:56.410465373Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:31:56.411301 containerd[1605]: time="2025-12-16T03:31:56.411270185Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 2.801937277s" Dec 16 03:31:56.411301 containerd[1605]: time="2025-12-16T03:31:56.411299801Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Dec 16 03:31:56.413088 containerd[1605]: time="2025-12-16T03:31:56.413041322Z" level=info msg="CreateContainer within sandbox \"f9a0b00ed3353b4433b11a6fe9608e7b76e3ace3a02c185ea78a1be38d46da9d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 03:31:56.421251 containerd[1605]: time="2025-12-16T03:31:56.421190893Z" level=info msg="Container ac2cced6aa9139cb96b3467a4e8ba2b06a55a304a0be70982b374ba728763004: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:31:56.425899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount481786822.mount: Deactivated successfully. Dec 16 03:31:56.429461 containerd[1605]: time="2025-12-16T03:31:56.429414714Z" level=info msg="CreateContainer within sandbox \"f9a0b00ed3353b4433b11a6fe9608e7b76e3ace3a02c185ea78a1be38d46da9d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ac2cced6aa9139cb96b3467a4e8ba2b06a55a304a0be70982b374ba728763004\"" Dec 16 03:31:56.431379 containerd[1605]: time="2025-12-16T03:31:56.430004702Z" level=info msg="StartContainer for \"ac2cced6aa9139cb96b3467a4e8ba2b06a55a304a0be70982b374ba728763004\"" Dec 16 03:31:56.431379 containerd[1605]: time="2025-12-16T03:31:56.431018086Z" level=info msg="connecting to shim ac2cced6aa9139cb96b3467a4e8ba2b06a55a304a0be70982b374ba728763004" address="unix:///run/containerd/s/eed0db14ba1e8279c658dc8fdab5893226510274e6db5f2c704b647c8c43bdc9" protocol=ttrpc version=3 Dec 16 03:31:56.455147 systemd[1]: Started cri-containerd-ac2cced6aa9139cb96b3467a4e8ba2b06a55a304a0be70982b374ba728763004.scope - libcontainer container ac2cced6aa9139cb96b3467a4e8ba2b06a55a304a0be70982b374ba728763004. Dec 16 03:31:56.489710 systemd[1]: cri-containerd-ac2cced6aa9139cb96b3467a4e8ba2b06a55a304a0be70982b374ba728763004.scope: Deactivated successfully. 
Dec 16 03:31:56.490910 containerd[1605]: time="2025-12-16T03:31:56.490868489Z" level=info msg="received container exit event container_id:\"ac2cced6aa9139cb96b3467a4e8ba2b06a55a304a0be70982b374ba728763004\" id:\"ac2cced6aa9139cb96b3467a4e8ba2b06a55a304a0be70982b374ba728763004\" pid:3215 exited_at:{seconds:1765855916 nanos:489887205}" Dec 16 03:31:56.492343 containerd[1605]: time="2025-12-16T03:31:56.492254503Z" level=info msg="StartContainer for \"ac2cced6aa9139cb96b3467a4e8ba2b06a55a304a0be70982b374ba728763004\" returns successfully" Dec 16 03:31:56.557011 kubelet[2798]: I1216 03:31:56.556977 2798 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 03:31:56.639284 kubelet[2798]: E1216 03:31:56.639237 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:56.837996 systemd[1]: Created slice kubepods-burstable-poda02b33d1_f681_49cc_8b20_996ec231c4b5.slice - libcontainer container kubepods-burstable-poda02b33d1_f681_49cc_8b20_996ec231c4b5.slice. Dec 16 03:31:56.846612 systemd[1]: Created slice kubepods-burstable-poda3ef2d17_b1dd_4d99_a1ee_e1cbdd699304.slice - libcontainer container kubepods-burstable-poda3ef2d17_b1dd_4d99_a1ee_e1cbdd699304.slice. Dec 16 03:31:56.863122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac2cced6aa9139cb96b3467a4e8ba2b06a55a304a0be70982b374ba728763004-rootfs.mount: Deactivated successfully. Dec 16 03:31:56.927499 kubelet[2798]: I1216 03:31:56.927376 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a02b33d1-f681-49cc-8b20-996ec231c4b5-config-volume\") pod \"coredns-668d6bf9bc-jm6wt\" (UID: \"a02b33d1-f681-49cc-8b20-996ec231c4b5\") " pod="kube-system/coredns-668d6bf9bc-jm6wt" Dec 16 03:31:56.927499 kubelet[2798]: I1216 03:31:56.927451 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfjwv\" (UniqueName: \"kubernetes.io/projected/a02b33d1-f681-49cc-8b20-996ec231c4b5-kube-api-access-qfjwv\") pod \"coredns-668d6bf9bc-jm6wt\" (UID: \"a02b33d1-f681-49cc-8b20-996ec231c4b5\") " pod="kube-system/coredns-668d6bf9bc-jm6wt" Dec 16 03:31:56.927499 kubelet[2798]: I1216 03:31:56.927485 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3ef2d17-b1dd-4d99-a1ee-e1cbdd699304-config-volume\") pod \"coredns-668d6bf9bc-wbsbt\" (UID: \"a3ef2d17-b1dd-4d99-a1ee-e1cbdd699304\") " pod="kube-system/coredns-668d6bf9bc-wbsbt" Dec 16 03:31:56.927731 kubelet[2798]: I1216 03:31:56.927514 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klnnp\" (UniqueName: \"kubernetes.io/projected/a3ef2d17-b1dd-4d99-a1ee-e1cbdd699304-kube-api-access-klnnp\") pod \"coredns-668d6bf9bc-wbsbt\" (UID: \"a3ef2d17-b1dd-4d99-a1ee-e1cbdd699304\") " pod="kube-system/coredns-668d6bf9bc-wbsbt" Dec 16 03:31:57.144644 kubelet[2798]: E1216 03:31:57.144077 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:57.145142 containerd[1605]: time="2025-12-16T03:31:57.145092742Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-jm6wt,Uid:a02b33d1-f681-49cc-8b20-996ec231c4b5,Namespace:kube-system,Attempt:0,}" Dec 16 03:31:57.150099 kubelet[2798]: E1216 03:31:57.150027 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:57.151224 containerd[1605]: time="2025-12-16T03:31:57.151169678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wbsbt,Uid:a3ef2d17-b1dd-4d99-a1ee-e1cbdd699304,Namespace:kube-system,Attempt:0,}" Dec 16 03:31:57.189224 containerd[1605]: time="2025-12-16T03:31:57.189146619Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jm6wt,Uid:a02b33d1-f681-49cc-8b20-996ec231c4b5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc9eb7fdd2fdc9e4221612da803ace8f04cf1b7e19a69ac4596ae4bb9df5d54c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 16 03:31:57.189627 kubelet[2798]: E1216 03:31:57.189553 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc9eb7fdd2fdc9e4221612da803ace8f04cf1b7e19a69ac4596ae4bb9df5d54c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 16 03:31:57.189712 kubelet[2798]: E1216 03:31:57.189684 2798 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc9eb7fdd2fdc9e4221612da803ace8f04cf1b7e19a69ac4596ae4bb9df5d54c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-jm6wt" Dec 16 03:31:57.189752 kubelet[2798]: E1216 03:31:57.189721 2798 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc9eb7fdd2fdc9e4221612da803ace8f04cf1b7e19a69ac4596ae4bb9df5d54c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-jm6wt" Dec 16 03:31:57.189858 kubelet[2798]: E1216 03:31:57.189809 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jm6wt_kube-system(a02b33d1-f681-49cc-8b20-996ec231c4b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jm6wt_kube-system(a02b33d1-f681-49cc-8b20-996ec231c4b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc9eb7fdd2fdc9e4221612da803ace8f04cf1b7e19a69ac4596ae4bb9df5d54c\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-jm6wt" podUID="a02b33d1-f681-49cc-8b20-996ec231c4b5" Dec 16 03:31:57.191647 containerd[1605]: time="2025-12-16T03:31:57.191578036Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wbsbt,Uid:a3ef2d17-b1dd-4d99-a1ee-e1cbdd699304,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c37f58b6f278ae59871054e32f3645aa9c522e682e9acaf3c28b7600626ab11\": plugin 
type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 16 03:31:57.191885 kubelet[2798]: E1216 03:31:57.191849 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c37f58b6f278ae59871054e32f3645aa9c522e682e9acaf3c28b7600626ab11\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 16 03:31:57.191977 kubelet[2798]: E1216 03:31:57.191899 2798 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c37f58b6f278ae59871054e32f3645aa9c522e682e9acaf3c28b7600626ab11\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-wbsbt" Dec 16 03:31:57.191977 kubelet[2798]: E1216 03:31:57.191923 2798 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c37f58b6f278ae59871054e32f3645aa9c522e682e9acaf3c28b7600626ab11\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-wbsbt" Dec 16 03:31:57.192074 kubelet[2798]: E1216 03:31:57.192002 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wbsbt_kube-system(a3ef2d17-b1dd-4d99-a1ee-e1cbdd699304)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wbsbt_kube-system(a3ef2d17-b1dd-4d99-a1ee-e1cbdd699304)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c37f58b6f278ae59871054e32f3645aa9c522e682e9acaf3c28b7600626ab11\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-wbsbt" podUID="a3ef2d17-b1dd-4d99-a1ee-e1cbdd699304" Dec 16 03:31:57.648199 kubelet[2798]: E1216 03:31:57.648137 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:57.661243 containerd[1605]: time="2025-12-16T03:31:57.661146391Z" level=info msg="CreateContainer within sandbox \"f9a0b00ed3353b4433b11a6fe9608e7b76e3ace3a02c185ea78a1be38d46da9d\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 16 03:31:57.688555 containerd[1605]: time="2025-12-16T03:31:57.688446535Z" level=info msg="Container bf900129df7cb641dffa01af64ffbb2c0c7bd498d71e9302a9fa99add903a379: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:31:57.699176 containerd[1605]: time="2025-12-16T03:31:57.699084719Z" level=info msg="CreateContainer within sandbox \"f9a0b00ed3353b4433b11a6fe9608e7b76e3ace3a02c185ea78a1be38d46da9d\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"bf900129df7cb641dffa01af64ffbb2c0c7bd498d71e9302a9fa99add903a379\"" Dec 16 03:31:57.700094 containerd[1605]: time="2025-12-16T03:31:57.700047958Z" level=info msg="StartContainer for \"bf900129df7cb641dffa01af64ffbb2c0c7bd498d71e9302a9fa99add903a379\"" Dec 16 03:31:57.701544 containerd[1605]: time="2025-12-16T03:31:57.701474278Z" level=info msg="connecting to shim bf900129df7cb641dffa01af64ffbb2c0c7bd498d71e9302a9fa99add903a379" 
address="unix:///run/containerd/s/eed0db14ba1e8279c658dc8fdab5893226510274e6db5f2c704b647c8c43bdc9" protocol=ttrpc version=3 Dec 16 03:31:57.731354 systemd[1]: Started cri-containerd-bf900129df7cb641dffa01af64ffbb2c0c7bd498d71e9302a9fa99add903a379.scope - libcontainer container bf900129df7cb641dffa01af64ffbb2c0c7bd498d71e9302a9fa99add903a379. Dec 16 03:31:57.792820 containerd[1605]: time="2025-12-16T03:31:57.792777583Z" level=info msg="StartContainer for \"bf900129df7cb641dffa01af64ffbb2c0c7bd498d71e9302a9fa99add903a379\" returns successfully" Dec 16 03:31:57.864884 systemd[1]: run-netns-cni\x2dfbf5978e\x2d9433\x2d8b7f\x2dc2c4\x2d3edebf898c34.mount: Deactivated successfully. Dec 16 03:31:57.865051 systemd[1]: run-netns-cni\x2d657dfaa8\x2da1e5\x2d2dca\x2d0f12\x2dd9aac6dd23d8.mount: Deactivated successfully. Dec 16 03:31:58.660786 kubelet[2798]: E1216 03:31:58.660676 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:58.946848 systemd-networkd[1505]: flannel.1: Link UP Dec 16 03:31:58.946861 systemd-networkd[1505]: flannel.1: Gained carrier Dec 16 03:31:59.667164 kubelet[2798]: E1216 03:31:59.666805 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:59.756580 kubelet[2798]: E1216 03:31:59.756252 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:31:59.793436 kubelet[2798]: I1216 03:31:59.792592 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-dgz4q" podStartSLOduration=4.336191388 podStartE2EDuration="9.792563197s" podCreationTimestamp="2025-12-16 03:31:50 +0000 UTC" firstStartedPulling="2025-12-16 03:31:50.955663504 +0000 UTC m=+5.484350267" lastFinishedPulling="2025-12-16 03:31:56.412035313 +0000 UTC m=+10.940722076" observedRunningTime="2025-12-16 03:31:58.70087132 +0000 UTC m=+13.229558084" watchObservedRunningTime="2025-12-16 03:31:59.792563197 +0000 UTC m=+14.321249960" Dec 16 03:32:00.526024 kubelet[2798]: E1216 03:32:00.524828 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:32:00.607259 systemd-networkd[1505]: flannel.1: Gained IPv6LL Dec 16 03:32:00.668938 kubelet[2798]: E1216 03:32:00.668871 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:32:08.576143 kubelet[2798]: E1216 03:32:08.575937 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:32:08.576851 containerd[1605]: time="2025-12-16T03:32:08.576792187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jm6wt,Uid:a02b33d1-f681-49cc-8b20-996ec231c4b5,Namespace:kube-system,Attempt:0,}" Dec 16 03:32:08.603575 systemd-networkd[1505]: cni0: Link UP Dec 16 03:32:08.603589 systemd-networkd[1505]: cni0: Gained carrier Dec 16 03:32:08.609150 systemd-networkd[1505]: cni0: Lost carrier Dec 16 03:32:08.613609 systemd-networkd[1505]: veth1a1dc282: 
Link UP Dec 16 03:32:08.616811 kernel: cni0: port 1(veth1a1dc282) entered blocking state Dec 16 03:32:08.616903 kernel: cni0: port 1(veth1a1dc282) entered disabled state Dec 16 03:32:08.619911 kernel: veth1a1dc282: entered allmulticast mode Dec 16 03:32:08.621046 kernel: veth1a1dc282: entered promiscuous mode Dec 16 03:32:08.628830 kernel: cni0: port 1(veth1a1dc282) entered blocking state Dec 16 03:32:08.628966 kernel: cni0: port 1(veth1a1dc282) entered forwarding state Dec 16 03:32:08.629104 systemd-networkd[1505]: veth1a1dc282: Gained carrier Dec 16 03:32:08.630104 systemd-networkd[1505]: cni0: Gained carrier Dec 16 03:32:08.637020 containerd[1605]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00011c8e8), "name":"cbr0", "type":"bridge"} Dec 16 03:32:08.637020 containerd[1605]: delegateAdd: netconf sent to delegate plugin: Dec 16 03:32:08.675021 containerd[1605]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-12-16T03:32:08.674928610Z" level=info msg="connecting to shim e21afd37ac46e8278b0633f7312e527416ae655230a5b356e466aab6147284cb" address="unix:///run/containerd/s/4362ea2c1bd02d20ee05bb8bbd7795eccdc172e85f3bab75e7badab81db13708" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:32:08.706977 systemd[1]: Started cri-containerd-e21afd37ac46e8278b0633f7312e527416ae655230a5b356e466aab6147284cb.scope - libcontainer container e21afd37ac46e8278b0633f7312e527416ae655230a5b356e466aab6147284cb. 
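The failed CoreDNS sandboxes earlier (open /run/flannel/subnet.env: no such file or directory) and the successful delegateAdd above are two sides of the same mechanism: the flannel CNI plugin can only act once the kube-flannel daemon has written subnet.env, and it then turns those values (node subnet, MTU, masquerade flag) into the bridge/host-local delegate config printed in the log. The Go sketch below shows that parsing step under the conventional FLANNEL_* key names; it is an illustration, not the plugin's actual loadFlannelSubnetEnv.

```go
// Minimal sketch: parse /run/flannel/subnet.env the way a flannel-style CNI
// plugin would before delegating to the bridge plugin. The FLANNEL_* key names
// are the conventional ones; treat the exact semantics as an assumption.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

type subnetEnv struct {
	Network string // cluster CIDR, e.g. 192.168.0.0/17 in this log's routes
	Subnet  string // this node's slice, e.g. 192.168.0.0/24
	MTU     string // 1450 here, matching the VXLAN overlay
	IPMasq  string // "true" / "false"
}

func loadSubnetEnv(path string) (*subnetEnv, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err // the "no such file or directory" case seen earlier
	}
	defer f.Close()

	env := &subnetEnv{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		k, v, ok := strings.Cut(strings.TrimSpace(sc.Text()), "=")
		if !ok {
			continue
		}
		switch k {
		case "FLANNEL_NETWORK":
			env.Network = v
		case "FLANNEL_SUBNET":
			env.Subnet = v
		case "FLANNEL_MTU":
			env.MTU = v
		case "FLANNEL_IPMASQ":
			env.IPMasq = v
		}
	}
	return env, sc.Err()
}

func main() {
	env, err := loadSubnetEnv("/run/flannel/subnet.env")
	if err != nil {
		fmt.Fprintln(os.Stderr, "loadFlannelSubnetEnv-style failure:", err)
		os.Exit(1)
	}
	// Values like these feed the bridge delegate config shown in the log
	// (subnet 192.168.0.0/24, mtu 1450, ipMasq false, isDefaultGateway true).
	fmt.Printf("subnet=%s mtu=%s ipmasq=%s network=%s\n",
		env.Subnet, env.MTU, env.IPMasq, env.Network)
}
```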
Dec 16 03:32:08.724699 systemd-resolved[1440]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 03:32:08.764262 containerd[1605]: time="2025-12-16T03:32:08.764215563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jm6wt,Uid:a02b33d1-f681-49cc-8b20-996ec231c4b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e21afd37ac46e8278b0633f7312e527416ae655230a5b356e466aab6147284cb\"" Dec 16 03:32:08.765324 kubelet[2798]: E1216 03:32:08.765278 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:32:08.767615 containerd[1605]: time="2025-12-16T03:32:08.767578105Z" level=info msg="CreateContainer within sandbox \"e21afd37ac46e8278b0633f7312e527416ae655230a5b356e466aab6147284cb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 03:32:08.779352 containerd[1605]: time="2025-12-16T03:32:08.779270681Z" level=info msg="Container f8ed1bad0864901231f69ab980106862bb62ad7e96c2bf24474440cf22c509f9: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:32:08.787409 containerd[1605]: time="2025-12-16T03:32:08.787356999Z" level=info msg="CreateContainer within sandbox \"e21afd37ac46e8278b0633f7312e527416ae655230a5b356e466aab6147284cb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f8ed1bad0864901231f69ab980106862bb62ad7e96c2bf24474440cf22c509f9\"" Dec 16 03:32:08.788213 containerd[1605]: time="2025-12-16T03:32:08.788178290Z" level=info msg="StartContainer for \"f8ed1bad0864901231f69ab980106862bb62ad7e96c2bf24474440cf22c509f9\"" Dec 16 03:32:08.789367 containerd[1605]: time="2025-12-16T03:32:08.789337466Z" level=info msg="connecting to shim f8ed1bad0864901231f69ab980106862bb62ad7e96c2bf24474440cf22c509f9" address="unix:///run/containerd/s/4362ea2c1bd02d20ee05bb8bbd7795eccdc172e85f3bab75e7badab81db13708" protocol=ttrpc version=3 Dec 16 03:32:08.811258 systemd[1]: Started cri-containerd-f8ed1bad0864901231f69ab980106862bb62ad7e96c2bf24474440cf22c509f9.scope - libcontainer container f8ed1bad0864901231f69ab980106862bb62ad7e96c2bf24474440cf22c509f9. 
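The kernel messages above (veth1a1dc282 entering promiscuous mode, cni0 port 1 moving from blocking to forwarding state) are the visible side of the bridge plugin plumbing the pod: create the cni0 bridge if needed, create a veth pair, move one end into the pod's network namespace, and enslave the host end to the bridge. A simplified, hypothetical sketch of that wiring with the vishvananda/netlink library, skipping the namespace move, IPAM assignment, and error handling a real plugin needs:

```go
// Simplified, hypothetical sketch of the bridge/veth wiring behind the kernel
// messages in the log. A real CNI bridge plugin also moves the peer into the
// pod's network namespace and assigns the host-local address; names here are
// placeholders copied from the log for illustration only.
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// The bridge device the delegate config ends up creating; the kernel log
	// shows it as cni0. MTU 1450 matches the VXLAN overlay.
	br := &netlink.Bridge{LinkAttrs: netlink.LinkAttrs{Name: "cni0", MTU: 1450}}
	if err := netlink.LinkAdd(br); err != nil {
		log.Fatal(err) // already-exists handling omitted in this sketch
	}

	// Host-side veth; the peer would normally be created for the pod and then
	// moved into its network namespace.
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: "veth1a1dc282", MTU: 1450},
		PeerName:  "podveth0",
	}
	if err := netlink.LinkAdd(veth); err != nil {
		log.Fatal(err)
	}

	// Enslaving the veth to the bridge is what produces the
	// "cni0: port 1(veth1a1dc282) entered blocking/forwarding state" messages.
	if err := netlink.LinkSetMaster(veth, br); err != nil {
		log.Fatal(err)
	}
	if err := netlink.LinkSetUp(veth); err != nil {
		log.Fatal(err)
	}
	if err := netlink.LinkSetUp(br); err != nil {
		log.Fatal(err)
	}
}
```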
Dec 16 03:32:08.855348 containerd[1605]: time="2025-12-16T03:32:08.855228777Z" level=info msg="StartContainer for \"f8ed1bad0864901231f69ab980106862bb62ad7e96c2bf24474440cf22c509f9\" returns successfully" Dec 16 03:32:09.708250 kubelet[2798]: E1216 03:32:09.708183 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:32:09.718523 kubelet[2798]: I1216 03:32:09.718471 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jm6wt" podStartSLOduration=19.718451893 podStartE2EDuration="19.718451893s" podCreationTimestamp="2025-12-16 03:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:32:09.71826884 +0000 UTC m=+24.246955603" watchObservedRunningTime="2025-12-16 03:32:09.718451893 +0000 UTC m=+24.247138656" Dec 16 03:32:10.527184 systemd-networkd[1505]: cni0: Gained IPv6LL Dec 16 03:32:10.527970 systemd-networkd[1505]: veth1a1dc282: Gained IPv6LL Dec 16 03:32:10.709714 kubelet[2798]: E1216 03:32:10.709669 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:32:11.575749 kubelet[2798]: E1216 03:32:11.575574 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:32:11.576225 containerd[1605]: time="2025-12-16T03:32:11.576170055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wbsbt,Uid:a3ef2d17-b1dd-4d99-a1ee-e1cbdd699304,Namespace:kube-system,Attempt:0,}" Dec 16 03:32:11.712240 kubelet[2798]: E1216 03:32:11.712198 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:32:11.761424 systemd-networkd[1505]: veth7eda253b: Link UP Dec 16 03:32:11.764859 kernel: cni0: port 2(veth7eda253b) entered blocking state Dec 16 03:32:11.765004 kernel: cni0: port 2(veth7eda253b) entered disabled state Dec 16 03:32:11.765032 kernel: veth7eda253b: entered allmulticast mode Dec 16 03:32:11.767296 kernel: veth7eda253b: entered promiscuous mode Dec 16 03:32:11.775214 kernel: cni0: port 2(veth7eda253b) entered blocking state Dec 16 03:32:11.775309 kernel: cni0: port 2(veth7eda253b) entered forwarding state Dec 16 03:32:11.775369 systemd-networkd[1505]: veth7eda253b: Gained carrier Dec 16 03:32:11.780486 containerd[1605]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000018938), "name":"cbr0", "type":"bridge"} Dec 16 03:32:11.780486 containerd[1605]: delegateAdd: netconf sent to delegate plugin: Dec 16 03:32:11.817454 containerd[1605]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-12-16T03:32:11.817167182Z" level=info msg="connecting to shim 85dd7ceaf9c167ecb870711087c768640984acde932edb73529fb55dc1ccf246" address="unix:///run/containerd/s/9d651594a5bc2cf2d44f6d6d10698e142c5f3de55979080a6e6cc88ace1da91a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:32:11.855302 systemd[1]: Started cri-containerd-85dd7ceaf9c167ecb870711087c768640984acde932edb73529fb55dc1ccf246.scope - libcontainer container 85dd7ceaf9c167ecb870711087c768640984acde932edb73529fb55dc1ccf246. Dec 16 03:32:11.874710 systemd-resolved[1440]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 03:32:11.914071 containerd[1605]: time="2025-12-16T03:32:11.914022575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wbsbt,Uid:a3ef2d17-b1dd-4d99-a1ee-e1cbdd699304,Namespace:kube-system,Attempt:0,} returns sandbox id \"85dd7ceaf9c167ecb870711087c768640984acde932edb73529fb55dc1ccf246\"" Dec 16 03:32:11.915086 kubelet[2798]: E1216 03:32:11.915055 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:32:11.916826 containerd[1605]: time="2025-12-16T03:32:11.916782147Z" level=info msg="CreateContainer within sandbox \"85dd7ceaf9c167ecb870711087c768640984acde932edb73529fb55dc1ccf246\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 03:32:11.925984 containerd[1605]: time="2025-12-16T03:32:11.925925391Z" level=info msg="Container 7f7817688ec250416f076974af248306ae41972c015eb74f613956f98e71b50c: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:32:11.930392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount328212335.mount: Deactivated successfully. Dec 16 03:32:11.934163 containerd[1605]: time="2025-12-16T03:32:11.934061409Z" level=info msg="CreateContainer within sandbox \"85dd7ceaf9c167ecb870711087c768640984acde932edb73529fb55dc1ccf246\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7f7817688ec250416f076974af248306ae41972c015eb74f613956f98e71b50c\"" Dec 16 03:32:11.934824 containerd[1605]: time="2025-12-16T03:32:11.934785961Z" level=info msg="StartContainer for \"7f7817688ec250416f076974af248306ae41972c015eb74f613956f98e71b50c\"" Dec 16 03:32:11.936074 containerd[1605]: time="2025-12-16T03:32:11.936046473Z" level=info msg="connecting to shim 7f7817688ec250416f076974af248306ae41972c015eb74f613956f98e71b50c" address="unix:///run/containerd/s/9d651594a5bc2cf2d44f6d6d10698e142c5f3de55979080a6e6cc88ace1da91a" protocol=ttrpc version=3 Dec 16 03:32:11.960157 systemd[1]: Started cri-containerd-7f7817688ec250416f076974af248306ae41972c015eb74f613956f98e71b50c.scope - libcontainer container 7f7817688ec250416f076974af248306ae41972c015eb74f613956f98e71b50c. 
Dec 16 03:32:11.998317 containerd[1605]: time="2025-12-16T03:32:11.998266049Z" level=info msg="StartContainer for \"7f7817688ec250416f076974af248306ae41972c015eb74f613956f98e71b50c\" returns successfully" Dec 16 03:32:12.717004 kubelet[2798]: E1216 03:32:12.716017 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:32:12.728966 kubelet[2798]: I1216 03:32:12.728880 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wbsbt" podStartSLOduration=22.728854895 podStartE2EDuration="22.728854895s" podCreationTimestamp="2025-12-16 03:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:32:12.728576431 +0000 UTC m=+27.257263194" watchObservedRunningTime="2025-12-16 03:32:12.728854895 +0000 UTC m=+27.257541658" Dec 16 03:32:13.599181 systemd-networkd[1505]: veth7eda253b: Gained IPv6LL Dec 16 03:32:13.721566 kubelet[2798]: E1216 03:32:13.721523 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:32:14.720770 kubelet[2798]: E1216 03:32:14.720720 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:32:22.981480 systemd[1]: Started sshd@5-10.0.0.144:22-10.0.0.1:36482.service - OpenSSH per-connection server daemon (10.0.0.1:36482). Dec 16 03:32:23.050504 sshd[3758]: Accepted publickey for core from 10.0.0.1 port 36482 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:32:23.052536 sshd-session[3758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:32:23.057118 systemd-logind[1591]: New session 7 of user core. Dec 16 03:32:23.068089 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 03:32:23.180399 sshd[3762]: Connection closed by 10.0.0.1 port 36482 Dec 16 03:32:23.181064 sshd-session[3758]: pam_unix(sshd:session): session closed for user core Dec 16 03:32:23.192155 systemd[1]: sshd@5-10.0.0.144:22-10.0.0.1:36482.service: Deactivated successfully. Dec 16 03:32:23.195085 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 03:32:23.196905 systemd-logind[1591]: Session 7 logged out. Waiting for processes to exit. Dec 16 03:32:23.198480 systemd-logind[1591]: Removed session 7. Dec 16 03:32:28.193139 systemd[1]: Started sshd@6-10.0.0.144:22-10.0.0.1:36496.service - OpenSSH per-connection server daemon (10.0.0.1:36496). Dec 16 03:32:28.249049 sshd[3801]: Accepted publickey for core from 10.0.0.1 port 36496 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:32:28.251249 sshd-session[3801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:32:28.256005 systemd-logind[1591]: New session 8 of user core. Dec 16 03:32:28.267123 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 03:32:28.427589 sshd[3805]: Connection closed by 10.0.0.1 port 36496 Dec 16 03:32:28.427890 sshd-session[3801]: pam_unix(sshd:session): session closed for user core Dec 16 03:32:28.432615 systemd[1]: sshd@6-10.0.0.144:22-10.0.0.1:36496.service: Deactivated successfully. 
Dec 16 03:32:28.434913 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 03:32:28.435774 systemd-logind[1591]: Session 8 logged out. Waiting for processes to exit. Dec 16 03:32:28.437081 systemd-logind[1591]: Removed session 8. Dec 16 03:32:33.445097 systemd[1]: Started sshd@7-10.0.0.144:22-10.0.0.1:41614.service - OpenSSH per-connection server daemon (10.0.0.1:41614). Dec 16 03:32:33.504527 sshd[3840]: Accepted publickey for core from 10.0.0.1 port 41614 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:32:33.506410 sshd-session[3840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:32:33.511162 systemd-logind[1591]: New session 9 of user core. Dec 16 03:32:33.526121 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 03:32:33.600100 sshd[3844]: Connection closed by 10.0.0.1 port 41614 Dec 16 03:32:33.600438 sshd-session[3840]: pam_unix(sshd:session): session closed for user core Dec 16 03:32:33.614881 systemd[1]: sshd@7-10.0.0.144:22-10.0.0.1:41614.service: Deactivated successfully. Dec 16 03:32:33.617217 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 03:32:33.618253 systemd-logind[1591]: Session 9 logged out. Waiting for processes to exit. Dec 16 03:32:33.621605 systemd[1]: Started sshd@8-10.0.0.144:22-10.0.0.1:41624.service - OpenSSH per-connection server daemon (10.0.0.1:41624). Dec 16 03:32:33.622443 systemd-logind[1591]: Removed session 9. Dec 16 03:32:33.677300 sshd[3859]: Accepted publickey for core from 10.0.0.1 port 41624 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:32:33.679311 sshd-session[3859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:32:33.684069 systemd-logind[1591]: New session 10 of user core. Dec 16 03:32:33.698102 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 03:32:33.813670 sshd[3863]: Connection closed by 10.0.0.1 port 41624 Dec 16 03:32:33.814063 sshd-session[3859]: pam_unix(sshd:session): session closed for user core Dec 16 03:32:33.825165 systemd[1]: sshd@8-10.0.0.144:22-10.0.0.1:41624.service: Deactivated successfully. Dec 16 03:32:33.828273 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 03:32:33.832281 systemd-logind[1591]: Session 10 logged out. Waiting for processes to exit. Dec 16 03:32:33.833909 systemd[1]: Started sshd@9-10.0.0.144:22-10.0.0.1:41634.service - OpenSSH per-connection server daemon (10.0.0.1:41634). Dec 16 03:32:33.835156 systemd-logind[1591]: Removed session 10. Dec 16 03:32:33.889748 sshd[3874]: Accepted publickey for core from 10.0.0.1 port 41634 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:32:33.891778 sshd-session[3874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:32:33.897175 systemd-logind[1591]: New session 11 of user core. Dec 16 03:32:33.904171 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 03:32:33.983345 sshd[3878]: Connection closed by 10.0.0.1 port 41634 Dec 16 03:32:33.983601 sshd-session[3874]: pam_unix(sshd:session): session closed for user core Dec 16 03:32:33.989256 systemd[1]: sshd@9-10.0.0.144:22-10.0.0.1:41634.service: Deactivated successfully. Dec 16 03:32:33.991584 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 03:32:33.992549 systemd-logind[1591]: Session 11 logged out. Waiting for processes to exit. Dec 16 03:32:33.993990 systemd-logind[1591]: Removed session 11. 
Dec 16 03:32:38.997744 systemd[1]: Started sshd@10-10.0.0.144:22-10.0.0.1:41640.service - OpenSSH per-connection server daemon (10.0.0.1:41640). Dec 16 03:32:39.042390 sshd[3913]: Accepted publickey for core from 10.0.0.1 port 41640 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:32:39.044222 sshd-session[3913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:32:39.048696 systemd-logind[1591]: New session 12 of user core. Dec 16 03:32:39.059101 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 03:32:39.137105 sshd[3917]: Connection closed by 10.0.0.1 port 41640 Dec 16 03:32:39.137423 sshd-session[3913]: pam_unix(sshd:session): session closed for user core Dec 16 03:32:39.142014 systemd[1]: sshd@10-10.0.0.144:22-10.0.0.1:41640.service: Deactivated successfully. Dec 16 03:32:39.143966 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 03:32:39.145002 systemd-logind[1591]: Session 12 logged out. Waiting for processes to exit. Dec 16 03:32:39.146331 systemd-logind[1591]: Removed session 12. Dec 16 03:32:44.150042 systemd[1]: Started sshd@11-10.0.0.144:22-10.0.0.1:53938.service - OpenSSH per-connection server daemon (10.0.0.1:53938). Dec 16 03:32:44.205778 sshd[3951]: Accepted publickey for core from 10.0.0.1 port 53938 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:32:44.207750 sshd-session[3951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:32:44.212344 systemd-logind[1591]: New session 13 of user core. Dec 16 03:32:44.218143 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 03:32:44.294226 sshd[3961]: Connection closed by 10.0.0.1 port 53938 Dec 16 03:32:44.294557 sshd-session[3951]: pam_unix(sshd:session): session closed for user core Dec 16 03:32:44.298274 systemd[1]: sshd@11-10.0.0.144:22-10.0.0.1:53938.service: Deactivated successfully. Dec 16 03:32:44.300289 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 03:32:44.302360 systemd-logind[1591]: Session 13 logged out. Waiting for processes to exit. Dec 16 03:32:44.303275 systemd-logind[1591]: Removed session 13. Dec 16 03:32:49.310862 systemd[1]: Started sshd@12-10.0.0.144:22-10.0.0.1:53944.service - OpenSSH per-connection server daemon (10.0.0.1:53944). Dec 16 03:32:49.364322 sshd[3998]: Accepted publickey for core from 10.0.0.1 port 53944 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:32:49.366222 sshd-session[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:32:49.370713 systemd-logind[1591]: New session 14 of user core. Dec 16 03:32:49.388093 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 03:32:49.461980 sshd[4002]: Connection closed by 10.0.0.1 port 53944 Dec 16 03:32:49.462291 sshd-session[3998]: pam_unix(sshd:session): session closed for user core Dec 16 03:32:49.467610 systemd[1]: sshd@12-10.0.0.144:22-10.0.0.1:53944.service: Deactivated successfully. Dec 16 03:32:49.470273 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 03:32:49.471276 systemd-logind[1591]: Session 14 logged out. Waiting for processes to exit. Dec 16 03:32:49.472700 systemd-logind[1591]: Removed session 14. Dec 16 03:32:54.476531 systemd[1]: Started sshd@13-10.0.0.144:22-10.0.0.1:59198.service - OpenSSH per-connection server daemon (10.0.0.1:59198). 
Dec 16 03:32:54.540376 sshd[4053]: Accepted publickey for core from 10.0.0.1 port 59198 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:32:54.542666 sshd-session[4053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:32:54.547161 systemd-logind[1591]: New session 15 of user core. Dec 16 03:32:54.557116 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 03:32:54.624620 sshd[4057]: Connection closed by 10.0.0.1 port 59198 Dec 16 03:32:54.624903 sshd-session[4053]: pam_unix(sshd:session): session closed for user core Dec 16 03:32:54.638628 systemd[1]: sshd@13-10.0.0.144:22-10.0.0.1:59198.service: Deactivated successfully. Dec 16 03:32:54.640494 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 03:32:54.641267 systemd-logind[1591]: Session 15 logged out. Waiting for processes to exit. Dec 16 03:32:54.644508 systemd[1]: Started sshd@14-10.0.0.144:22-10.0.0.1:59202.service - OpenSSH per-connection server daemon (10.0.0.1:59202). Dec 16 03:32:54.645327 systemd-logind[1591]: Removed session 15. Dec 16 03:32:54.698668 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 59202 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:32:54.700505 sshd-session[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:32:54.705316 systemd-logind[1591]: New session 16 of user core. Dec 16 03:32:54.715100 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 03:32:54.963616 sshd[4075]: Connection closed by 10.0.0.1 port 59202 Dec 16 03:32:54.963979 sshd-session[4070]: pam_unix(sshd:session): session closed for user core Dec 16 03:32:54.977025 systemd[1]: sshd@14-10.0.0.144:22-10.0.0.1:59202.service: Deactivated successfully. Dec 16 03:32:54.979054 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 03:32:54.979855 systemd-logind[1591]: Session 16 logged out. Waiting for processes to exit. Dec 16 03:32:54.982544 systemd[1]: Started sshd@15-10.0.0.144:22-10.0.0.1:59216.service - OpenSSH per-connection server daemon (10.0.0.1:59216). Dec 16 03:32:54.983412 systemd-logind[1591]: Removed session 16. Dec 16 03:32:55.040101 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 59216 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:32:55.042041 sshd-session[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:32:55.046575 systemd-logind[1591]: New session 17 of user core. Dec 16 03:32:55.065099 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 03:32:55.458161 sshd[4090]: Connection closed by 10.0.0.1 port 59216 Dec 16 03:32:55.459666 sshd-session[4086]: pam_unix(sshd:session): session closed for user core Dec 16 03:32:55.471874 systemd[1]: sshd@15-10.0.0.144:22-10.0.0.1:59216.service: Deactivated successfully. Dec 16 03:32:55.473905 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 03:32:55.475438 systemd-logind[1591]: Session 17 logged out. Waiting for processes to exit. Dec 16 03:32:55.478859 systemd[1]: Started sshd@16-10.0.0.144:22-10.0.0.1:59220.service - OpenSSH per-connection server daemon (10.0.0.1:59220). Dec 16 03:32:55.479790 systemd-logind[1591]: Removed session 17. 
Dec 16 03:32:55.540125 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 59220 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:32:55.542322 sshd-session[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:32:55.546971 systemd-logind[1591]: New session 18 of user core. Dec 16 03:32:55.555109 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 03:32:56.161511 sshd[4113]: Connection closed by 10.0.0.1 port 59220 Dec 16 03:32:56.164199 sshd-session[4109]: pam_unix(sshd:session): session closed for user core Dec 16 03:32:56.170987 systemd[1]: sshd@16-10.0.0.144:22-10.0.0.1:59220.service: Deactivated successfully. Dec 16 03:32:56.173039 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 03:32:56.174661 systemd-logind[1591]: Session 18 logged out. Waiting for processes to exit. Dec 16 03:32:56.177375 systemd[1]: Started sshd@17-10.0.0.144:22-10.0.0.1:59224.service - OpenSSH per-connection server daemon (10.0.0.1:59224). Dec 16 03:32:56.178069 systemd-logind[1591]: Removed session 18. Dec 16 03:32:56.248046 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 59224 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:32:56.250294 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:32:56.255446 systemd-logind[1591]: New session 19 of user core. Dec 16 03:32:56.266116 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 03:32:56.337606 sshd[4129]: Connection closed by 10.0.0.1 port 59224 Dec 16 03:32:56.337910 sshd-session[4125]: pam_unix(sshd:session): session closed for user core Dec 16 03:32:56.343466 systemd[1]: sshd@17-10.0.0.144:22-10.0.0.1:59224.service: Deactivated successfully. Dec 16 03:32:56.345771 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 03:32:56.346824 systemd-logind[1591]: Session 19 logged out. Waiting for processes to exit. Dec 16 03:32:56.348551 systemd-logind[1591]: Removed session 19. Dec 16 03:33:01.354768 systemd[1]: Started sshd@18-10.0.0.144:22-10.0.0.1:52942.service - OpenSSH per-connection server daemon (10.0.0.1:52942). Dec 16 03:33:01.408175 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 52942 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:33:01.409904 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:33:01.414450 systemd-logind[1591]: New session 20 of user core. Dec 16 03:33:01.425119 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 03:33:01.487620 sshd[4170]: Connection closed by 10.0.0.1 port 52942 Dec 16 03:33:01.487906 sshd-session[4166]: pam_unix(sshd:session): session closed for user core Dec 16 03:33:01.492873 systemd[1]: sshd@18-10.0.0.144:22-10.0.0.1:52942.service: Deactivated successfully. Dec 16 03:33:01.494911 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 03:33:01.495661 systemd-logind[1591]: Session 20 logged out. Waiting for processes to exit. Dec 16 03:33:01.496723 systemd-logind[1591]: Removed session 20. Dec 16 03:33:06.501674 systemd[1]: Started sshd@19-10.0.0.144:22-10.0.0.1:52944.service - OpenSSH per-connection server daemon (10.0.0.1:52944). 
Dec 16 03:33:06.560586 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 52944 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:33:06.562566 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:33:06.567398 systemd-logind[1591]: New session 21 of user core. Dec 16 03:33:06.575900 kubelet[2798]: E1216 03:33:06.575852 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:33:06.579425 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 03:33:06.663688 sshd[4209]: Connection closed by 10.0.0.1 port 52944 Dec 16 03:33:06.664152 sshd-session[4205]: pam_unix(sshd:session): session closed for user core Dec 16 03:33:06.669317 systemd[1]: sshd@19-10.0.0.144:22-10.0.0.1:52944.service: Deactivated successfully. Dec 16 03:33:06.671486 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 03:33:06.672311 systemd-logind[1591]: Session 21 logged out. Waiting for processes to exit. Dec 16 03:33:06.673586 systemd-logind[1591]: Removed session 21. Dec 16 03:33:11.575825 kubelet[2798]: E1216 03:33:11.575782 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:33:11.687544 systemd[1]: Started sshd@20-10.0.0.144:22-10.0.0.1:54860.service - OpenSSH per-connection server daemon (10.0.0.1:54860). Dec 16 03:33:11.731243 sshd[4243]: Accepted publickey for core from 10.0.0.1 port 54860 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:33:11.733145 sshd-session[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:33:11.737826 systemd-logind[1591]: New session 22 of user core. Dec 16 03:33:11.746109 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 03:33:11.816439 sshd[4247]: Connection closed by 10.0.0.1 port 54860 Dec 16 03:33:11.816735 sshd-session[4243]: pam_unix(sshd:session): session closed for user core Dec 16 03:33:11.821586 systemd[1]: sshd@20-10.0.0.144:22-10.0.0.1:54860.service: Deactivated successfully. Dec 16 03:33:11.823790 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 03:33:11.824819 systemd-logind[1591]: Session 22 logged out. Waiting for processes to exit. Dec 16 03:33:11.826309 systemd-logind[1591]: Removed session 22. Dec 16 03:33:16.833170 systemd[1]: Started sshd@21-10.0.0.144:22-10.0.0.1:54866.service - OpenSSH per-connection server daemon (10.0.0.1:54866). Dec 16 03:33:16.894574 sshd[4281]: Accepted publickey for core from 10.0.0.1 port 54866 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:33:16.896504 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:33:16.901267 systemd-logind[1591]: New session 23 of user core. Dec 16 03:33:16.910123 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 03:33:16.977013 sshd[4285]: Connection closed by 10.0.0.1 port 54866 Dec 16 03:33:16.977333 sshd-session[4281]: pam_unix(sshd:session): session closed for user core Dec 16 03:33:16.982536 systemd[1]: sshd@21-10.0.0.144:22-10.0.0.1:54866.service: Deactivated successfully. Dec 16 03:33:16.984574 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 03:33:16.985475 systemd-logind[1591]: Session 23 logged out. Waiting for processes to exit. 
Dec 16 03:33:16.986703 systemd-logind[1591]: Removed session 23. Dec 16 03:33:18.575683 kubelet[2798]: E1216 03:33:18.575604 2798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"