Jan 20 02:30:59.603719 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:27:27 -00 2026
Jan 20 02:30:59.603771 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ffc050d3940163f278aec6799df208aabf8f27b8f3e958c63256c067960f0c44
Jan 20 02:30:59.603789 kernel: BIOS-provided physical RAM map:
Jan 20 02:30:59.603799 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 20 02:30:59.603808 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 20 02:30:59.603883 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 20 02:30:59.603898 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 20 02:30:59.607680 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 20 02:30:59.607715 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 20 02:30:59.607732 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 20 02:30:59.607751 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 02:30:59.607762 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 20 02:30:59.607772 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 02:30:59.607783 kernel: NX (Execute Disable) protection: active
Jan 20 02:30:59.607796 kernel: APIC: Static calls initialized
Jan 20 02:30:59.607811 kernel: SMBIOS 2.8 present.
Jan 20 02:30:59.607879 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 20 02:30:59.607893 kernel: DMI: Memory slots populated: 1/1
Jan 20 02:30:59.607929 kernel: Hypervisor detected: KVM
Jan 20 02:30:59.607941 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 20 02:30:59.607952 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 02:30:59.607963 kernel: kvm-clock: using sched offset of 63625378931 cycles
Jan 20 02:30:59.607976 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 02:30:59.607987 kernel: tsc: Detected 2445.426 MHz processor
Jan 20 02:30:59.608006 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 02:30:59.608018 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 02:30:59.608030 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 20 02:30:59.608042 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 20 02:30:59.608054 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 02:30:59.608065 kernel: Using GB pages for direct mapping
Jan 20 02:30:59.608077 kernel: ACPI: Early table checksum verification disabled
Jan 20 02:30:59.608092 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 20 02:30:59.608104 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:30:59.608116 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:30:59.608127 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:30:59.608139 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 20 02:30:59.608151 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:30:59.608163 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:30:59.608178 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:30:59.608190 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:30:59.608207 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 20 02:30:59.608219 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 20 02:30:59.608231 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 20 02:30:59.608247 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 20 02:30:59.608259 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 20 02:30:59.608298 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 20 02:30:59.608310 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 20 02:30:59.608322 kernel: No NUMA configuration found
Jan 20 02:30:59.608334 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 20 02:30:59.608350 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 20 02:30:59.608362 kernel: Zone ranges:
Jan 20 02:30:59.608374 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 02:30:59.608386 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 20 02:30:59.608398 kernel: Normal empty
Jan 20 02:30:59.608409 kernel: Device empty
Jan 20 02:30:59.608421 kernel: Movable zone start for each node
Jan 20 02:30:59.608433 kernel: Early memory node ranges
Jan 20 02:30:59.608448 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 20 02:30:59.608460 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 20 02:30:59.608472 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 20 02:30:59.608484 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 02:30:59.608496 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 20 02:30:59.608526 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 20 02:30:59.608539 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 02:30:59.608554 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 02:30:59.608566 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 02:30:59.608578 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 02:30:59.608607 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 02:30:59.608619 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 02:30:59.608631 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 02:30:59.608643 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 02:30:59.608655 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 02:30:59.608671 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 20 02:30:59.608683 kernel: TSC deadline timer available
Jan 20 02:30:59.608695 kernel: CPU topo: Max. logical packages: 1
Jan 20 02:30:59.608707 kernel: CPU topo: Max. logical dies: 1
Jan 20 02:30:59.608719 kernel: CPU topo: Max. dies per package: 1
Jan 20 02:30:59.608730 kernel: CPU topo: Max. threads per core: 1
Jan 20 02:30:59.608742 kernel: CPU topo: Num. cores per package: 4
Jan 20 02:30:59.608758 kernel: CPU topo: Num. threads per package: 4
Jan 20 02:30:59.608770 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 20 02:30:59.608782 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 02:30:59.608794 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 20 02:30:59.608806 kernel: kvm-guest: setup PV sched yield
Jan 20 02:30:59.608864 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 20 02:30:59.608879 kernel: Booting paravirtualized kernel on KVM
Jan 20 02:30:59.608893 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 02:30:59.608914 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 20 02:30:59.608929 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 20 02:30:59.608943 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 20 02:30:59.608956 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 20 02:30:59.608968 kernel: kvm-guest: PV spinlocks enabled
Jan 20 02:30:59.608980 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 02:30:59.608993 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ffc050d3940163f278aec6799df208aabf8f27b8f3e958c63256c067960f0c44
Jan 20 02:30:59.609011 kernel: random: crng init done
Jan 20 02:30:59.609023 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 02:30:59.609035 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 02:30:59.609047 kernel: Fallback order for Node 0: 0
Jan 20 02:30:59.609059 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 20 02:30:59.609071 kernel: Policy zone: DMA32
Jan 20 02:30:59.609086 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 02:30:59.609098 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 20 02:30:59.609111 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 20 02:30:59.609122 kernel: ftrace: allocated 157 pages with 5 groups
Jan 20 02:30:59.609134 kernel: Dynamic Preempt: voluntary
Jan 20 02:30:59.609147 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 02:30:59.609160 kernel: rcu: RCU event tracing is enabled.
Jan 20 02:30:59.609173 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 20 02:30:59.609188 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 02:30:59.609223 kernel: Rude variant of Tasks RCU enabled.
Jan 20 02:30:59.609236 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 02:30:59.609247 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 02:30:59.609259 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 20 02:30:59.623716 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:30:59.623782 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:30:59.623809 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:30:59.623866 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 20 02:30:59.623879 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 02:30:59.623918 kernel: Console: colour VGA+ 80x25
Jan 20 02:30:59.623933 kernel: printk: legacy console [ttyS0] enabled
Jan 20 02:30:59.623945 kernel: ACPI: Core revision 20240827
Jan 20 02:30:59.623956 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 20 02:30:59.623968 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 02:30:59.623979 kernel: x2apic enabled
Jan 20 02:30:59.623994 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 02:30:59.624029 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 20 02:30:59.624043 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 20 02:30:59.624056 kernel: kvm-guest: setup PV IPIs
Jan 20 02:30:59.624073 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 20 02:30:59.624086 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 02:30:59.624099 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 20 02:30:59.624112 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 02:30:59.624127 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 02:30:59.624142 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 02:30:59.624154 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 02:30:59.624172 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 02:30:59.624187 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 02:30:59.624200 kernel: Speculative Store Bypass: Vulnerable
Jan 20 02:30:59.624213 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 02:30:59.624227 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 02:30:59.624240 kernel: active return thunk: srso_alias_return_thunk
Jan 20 02:30:59.624253 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 02:30:59.632807 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 20 02:30:59.632866 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 20 02:30:59.632881 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 02:30:59.632894 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 02:30:59.634182 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 02:30:59.634202 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 02:30:59.634214 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 02:30:59.634236 kernel: Freeing SMP alternatives memory: 32K
Jan 20 02:30:59.634248 kernel: pid_max: default: 32768 minimum: 301
Jan 20 02:30:59.634294 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 20 02:30:59.634308 kernel: landlock: Up and running.
Jan 20 02:30:59.634320 kernel: SELinux: Initializing.
Jan 20 02:30:59.634332 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 02:30:59.634344 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 02:30:59.634383 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 20 02:30:59.634396 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 20 02:30:59.634408 kernel: signal: max sigframe size: 1776
Jan 20 02:30:59.634420 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 02:30:59.634433 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 02:30:59.634446 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 20 02:30:59.634458 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 02:30:59.634474 kernel: smp: Bringing up secondary CPUs ...
Jan 20 02:30:59.634486 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 02:30:59.634498 kernel: .... node #0, CPUs: #1 #2 #3
Jan 20 02:30:59.634510 kernel: smp: Brought up 1 node, 4 CPUs
Jan 20 02:30:59.634522 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 20 02:30:59.634536 kernel: Memory: 2445292K/2571752K available (14336K kernel code, 2445K rwdata, 31636K rodata, 15532K init, 2508K bss, 120520K reserved, 0K cma-reserved)
Jan 20 02:30:59.634548 kernel: devtmpfs: initialized
Jan 20 02:30:59.634563 kernel: x86/mm: Memory block size: 128MB
Jan 20 02:30:59.634575 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 02:30:59.634587 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 20 02:30:59.634599 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 02:30:59.634611 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 02:30:59.634623 kernel: audit: initializing netlink subsys (disabled)
Jan 20 02:30:59.634635 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 02:30:59.634650 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 02:30:59.634662 kernel: audit: type=2000 audit(1768876227.479:1): state=initialized audit_enabled=0 res=1
Jan 20 02:30:59.634674 kernel: cpuidle: using governor menu
Jan 20 02:30:59.634686 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 02:30:59.634698 kernel: dca service started, version 1.12.1
Jan 20 02:30:59.634709 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 20 02:30:59.634722 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 20 02:30:59.634736 kernel: PCI: Using configuration type 1 for base access
Jan 20 02:30:59.634748 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 02:30:59.634760 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 02:30:59.634772 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 02:30:59.634784 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 02:30:59.634796 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 02:30:59.634808 kernel: ACPI: Added _OSI(Module Device)
Jan 20 02:30:59.634859 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 02:30:59.634872 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 02:30:59.634884 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 02:30:59.634897 kernel: ACPI: Interpreter enabled
Jan 20 02:30:59.634911 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 20 02:30:59.634926 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 02:30:59.634939 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 02:30:59.634951 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 02:30:59.634967 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 02:30:59.634979 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 02:30:59.645667 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 02:30:59.646090 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 20 02:30:59.653390 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 20 02:30:59.653448 kernel: PCI host bridge to bus 0000:00
Jan 20 02:30:59.659531 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 02:30:59.659929 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 02:30:59.660246 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 02:30:59.667768 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 20 02:30:59.668140 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 20 02:30:59.674579 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 20 02:30:59.674935 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 02:30:59.677195 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 20 02:30:59.681975 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 20 02:30:59.686539 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 20 02:30:59.690568 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 20 02:30:59.690991 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 20 02:30:59.691362 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 02:30:59.691677 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 11718 usecs
Jan 20 02:30:59.692080 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 02:30:59.702112 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 20 02:30:59.710024 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 20 02:30:59.712690 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 20 02:30:59.713098 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 02:30:59.719220 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 20 02:30:59.722924 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 20 02:30:59.727207 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 20 02:30:59.728721 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 02:30:59.729083 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 20 02:30:59.734552 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 20 02:30:59.734961 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 20 02:30:59.735537 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 20 02:30:59.736002 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 20 02:30:59.739723 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 02:30:59.740132 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 10742 usecs
Jan 20 02:30:59.749041 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 20 02:30:59.753104 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 20 02:30:59.758645 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 20 02:30:59.759072 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 20 02:30:59.759420 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 20 02:30:59.759447 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 02:30:59.759460 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 02:30:59.759476 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 02:30:59.759489 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 02:30:59.759508 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 02:30:59.759522 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 02:30:59.759535 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 02:30:59.759549 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 02:30:59.759562 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 02:30:59.759577 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 02:30:59.759589 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 02:30:59.759606 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 02:30:59.759621 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 02:30:59.759634 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 02:30:59.759646 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 02:30:59.759658 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 02:30:59.759670 kernel: iommu: Default domain type: Translated
Jan 20 02:30:59.759682 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 02:30:59.759694 kernel: PCI: Using ACPI for IRQ routing
Jan 20 02:30:59.759711 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 02:30:59.759723 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 20 02:30:59.759735 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 20 02:30:59.760124 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 02:30:59.767801 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 02:30:59.768212 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 02:30:59.776052 kernel: vgaarb: loaded
Jan 20 02:30:59.776112 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 20 02:30:59.776130 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 20 02:30:59.776145 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 02:30:59.776157 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 02:30:59.776170 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 02:30:59.776182 kernel: pnp: PnP ACPI init
Jan 20 02:30:59.776627 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 20 02:30:59.776653 kernel: pnp: PnP ACPI: found 6 devices
Jan 20 02:30:59.776667 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 02:30:59.776680 kernel: NET: Registered PF_INET protocol family
Jan 20 02:30:59.776693 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 02:30:59.776705 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 02:30:59.776718 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 02:30:59.776741 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 02:30:59.776752 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 02:30:59.776766 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 02:30:59.776778 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 02:30:59.776791 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 02:30:59.776804 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 02:30:59.776861 kernel: NET: Registered PF_XDP protocol family
Jan 20 02:30:59.777188 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 02:30:59.777495 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 02:30:59.777767 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 02:30:59.778101 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 20 02:30:59.791959 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 20 02:30:59.800898 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 20 02:30:59.800965 kernel: PCI: CLS 0 bytes, default 64
Jan 20 02:30:59.800984 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 02:30:59.800998 kernel: Initialise system trusted keyrings
Jan 20 02:30:59.801010 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 02:30:59.801023 kernel: Key type asymmetric registered
Jan 20 02:30:59.801036 kernel: Asymmetric key parser 'x509' registered
Jan 20 02:30:59.801048 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 20 02:30:59.801060 kernel: io scheduler mq-deadline registered
Jan 20 02:30:59.801081 kernel: io scheduler kyber registered
Jan 20 02:30:59.801093 kernel: io scheduler bfq registered
Jan 20 02:30:59.801105 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 02:30:59.801119 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 02:30:59.801134 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 02:30:59.801145 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 02:30:59.801157 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 02:30:59.801177 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 02:30:59.801188 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 02:30:59.801201 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 02:30:59.801214 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 02:30:59.801559 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 02:30:59.801579 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 20 02:30:59.811720 kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 02:30:59.819641 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T02:30:45 UTC (1768876245)
Jan 20 02:30:59.820061 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 20 02:30:59.820092 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 02:30:59.820106 kernel: NET: Registered PF_INET6 protocol family
Jan 20 02:30:59.820119 kernel: Segment Routing with IPv6
Jan 20 02:30:59.820130 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 02:30:59.820157 kernel: NET: Registered PF_PACKET protocol family
Jan 20 02:30:59.820170 kernel: Key type dns_resolver registered
Jan 20 02:30:59.820183 kernel: IPI shorthand broadcast: enabled
Jan 20 02:30:59.820195 kernel: sched_clock: Marking stable (13340346838, 2301229172)->(18753067250, -3111491240)
Jan 20 02:30:59.825969 kernel: registered taskstats version 1
Jan 20 02:30:59.826012 kernel: Loading compiled-in X.509 certificates
Jan 20 02:30:59.826027 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 39f154fc6e329874bced8cdae9473f98b7dd3f43'
Jan 20 02:30:59.826051 kernel: Demotion targets for Node 0: null
Jan 20 02:30:59.826064 kernel: Key type .fscrypt registered
Jan 20 02:30:59.826077 kernel: Key type fscrypt-provisioning registered
Jan 20 02:30:59.826090 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 02:30:59.826102 kernel: ima: Allocated hash algorithm: sha1
Jan 20 02:30:59.826115 kernel: ima: No architecture policies found
Jan 20 02:30:59.826128 kernel: clk: Disabling unused clocks
Jan 20 02:30:59.826144 kernel: Freeing unused kernel image (initmem) memory: 15532K
Jan 20 02:30:59.826157 kernel: Write protecting the kernel read-only data: 47104k
Jan 20 02:30:59.826170 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K
Jan 20 02:30:59.826182 kernel: Run /init as init process
Jan 20 02:30:59.826196 kernel: with arguments:
Jan 20 02:30:59.826209 kernel: /init
Jan 20 02:30:59.826221 kernel: with environment:
Jan 20 02:30:59.826234 kernel: HOME=/
Jan 20 02:30:59.826250 kernel: TERM=linux
Jan 20 02:30:59.826290 kernel: SCSI subsystem initialized
Jan 20 02:30:59.826305 kernel: libata version 3.00 loaded.
Jan 20 02:30:59.826683 kernel: ahci 0000:00:1f.2: version 3.0
Jan 20 02:30:59.826708 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 20 02:30:59.827045 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 20 02:30:59.843210 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 20 02:30:59.843668 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 20 02:30:59.852152 kernel: scsi host0: ahci
Jan 20 02:30:59.852621 kernel: scsi host1: ahci
Jan 20 02:30:59.853054 kernel: scsi host2: ahci
Jan 20 02:30:59.857802 kernel: scsi host3: ahci
Jan 20 02:30:59.867484 kernel: scsi host4: ahci
Jan 20 02:30:59.868041 kernel: scsi host5: ahci
Jan 20 02:30:59.871181 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Jan 20 02:30:59.871199 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Jan 20 02:30:59.871213 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Jan 20 02:30:59.871237 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Jan 20 02:30:59.871251 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Jan 20 02:30:59.871961 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Jan 20 02:30:59.871984 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 20 02:30:59.871999 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 20 02:30:59.872010 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 20 02:30:59.872022 kernel: hrtimer: interrupt took 28427932 ns
Jan 20 02:30:59.872046 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 20 02:30:59.872059 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 20 02:30:59.872070 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 20 02:30:59.872082 kernel: ata3.00: LPM support broken, forcing max_power
Jan 20 02:30:59.872095 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 20 02:30:59.872107 kernel: ata3.00: applying bridge limits
Jan 20 02:30:59.872119 kernel: ata3.00: LPM support broken, forcing max_power
Jan 20 02:30:59.872131 kernel: ata3.00: configured for UDMA/100
Jan 20 02:30:59.874602 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 20 02:30:59.874977 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 20 02:30:59.875290 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Jan 20 02:30:59.875311 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 20 02:30:59.875633 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 20 02:30:59.875659 kernel: GPT:16515071 != 27000831
Jan 20 02:30:59.875672 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 20 02:30:59.875685 kernel: GPT:16515071 != 27000831
Jan 20 02:30:59.875697 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 20 02:30:59.875709 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 02:30:59.875721 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 02:30:59.876080 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 20 02:30:59.876106 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 02:30:59.876119 kernel: device-mapper: uevent: version 1.0.3
Jan 20 02:30:59.876132 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 20 02:30:59.876145 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 20 02:30:59.876157 kernel: raid6: avx2x4 gen() 5815 MB/s
Jan 20 02:30:59.876170 kernel: raid6: avx2x2 gen() 8332 MB/s
Jan 20 02:30:59.876182 kernel: raid6: avx2x1 gen() 5992 MB/s
Jan 20 02:30:59.876198 kernel: raid6: using algorithm avx2x2 gen() 8332 MB/s
Jan 20 02:30:59.876210 kernel: raid6: .... xor() 3854 MB/s, rmw enabled
Jan 20 02:30:59.876223 kernel: raid6: using avx2x2 recovery algorithm
Jan 20 02:30:59.876235 kernel: xor: automatically using best checksumming function avx
Jan 20 02:30:59.876252 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 20 02:30:59.880762 kernel: BTRFS: device fsid 95a8358a-4aa8-4215-9cd3-5b140c6c0a16 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (182)
Jan 20 02:30:59.880795 kernel: BTRFS info (device dm-0): first mount of filesystem 95a8358a-4aa8-4215-9cd3-5b140c6c0a16
Jan 20 02:30:59.880810 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 20 02:30:59.880861 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 20 02:30:59.880874 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 20 02:30:59.880887 kernel: loop: module loaded
Jan 20 02:30:59.880902 kernel: loop0: detected capacity change from 0 to 100552
Jan 20 02:30:59.880920 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 20 02:30:59.880935 systemd[1]: Successfully made /usr/ read-only.
Jan 20 02:30:59.880953 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 20 02:30:59.880967 systemd[1]: Detected virtualization kvm.
Jan 20 02:30:59.880980 systemd[1]: Detected architecture x86-64.
Jan 20 02:30:59.880994 systemd[1]: Running in initrd.
Jan 20 02:30:59.881013 systemd[1]: No hostname configured, using default hostname.
Jan 20 02:30:59.881027 systemd[1]: Hostname set to .
Jan 20 02:30:59.881040 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 20 02:30:59.881054 systemd[1]: Queued start job for default target initrd.target.
Jan 20 02:30:59.881067 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 20 02:30:59.881081 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 02:30:59.881098 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 02:30:59.881113 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 20 02:30:59.881127 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 02:30:59.881142 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 20 02:30:59.881156 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 20 02:30:59.881170 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 02:30:59.881187 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 02:30:59.881201 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 20 02:30:59.881215 systemd[1]: Reached target paths.target - Path Units.
Jan 20 02:30:59.881228 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 02:30:59.881243 systemd[1]: Reached target swap.target - Swaps.
Jan 20 02:30:59.881256 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 02:30:59.881302 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 02:30:59.881325 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 02:30:59.881338 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 20 02:30:59.881351 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 20 02:30:59.881364 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 20 02:30:59.881377 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 02:30:59.881389 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 02:30:59.881406 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 02:30:59.881418 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 02:30:59.881432 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 20 02:30:59.881444 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 20 02:30:59.881457 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 02:30:59.881470 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 20 02:30:59.881487 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 20 02:30:59.881504 systemd[1]: Starting systemd-fsck-usr.service...
Jan 20 02:30:59.881517 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 02:30:59.881530 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 02:30:59.881543 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 02:30:59.881559 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 20 02:30:59.881572 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 02:30:59.881585 systemd[1]: Finished systemd-fsck-usr.service.
Jan 20 02:30:59.881598 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 02:30:59.881696 systemd-journald[320]: Collecting audit messages is enabled.
Jan 20 02:30:59.881732 systemd-journald[320]: Journal started
Jan 20 02:30:59.881757 systemd-journald[320]: Runtime Journal (/run/log/journal/34c54a5ac8524ddda3629c3df9a67f2e) is 6M, max 48.2M, 42.1M free.
Jan 20 02:30:59.885176 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 02:31:00.113353 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 02:31:00.241355 kernel: Bridge firewalling registered
Jan 20 02:31:00.242108 kernel: audit: type=1130 audit(1768876260.237:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:00.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:00.254706 systemd-modules-load[321]: Inserted module 'br_netfilter'
Jan 20 02:31:00.268856 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 02:31:00.329258 kernel: audit: type=1130 audit(1768876260.286:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:00.329337 kernel: audit: type=1130 audit(1768876260.299:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:00.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:00.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:00.288753 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 02:31:00.336596 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 02:31:00.367680 kernel: audit: type=1130 audit(1768876260.336:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:00.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:00.366638 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 02:31:00.379614 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 02:31:00.416626 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 02:31:00.426463 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 02:31:00.551102 systemd-tmpfiles[338]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 20 02:31:00.583919 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 02:31:00.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:00.613030 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 02:31:00.690758 kernel: audit: type=1130 audit(1768876260.582:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:00.690810 kernel: audit: type=1130 audit(1768876260.635:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:00.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:00.674224 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 02:31:00.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:00.736779 kernel: audit: type=1130 audit(1768876260.717:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:00.738576 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 02:31:00.790355 kernel: audit: type=1130 audit(1768876260.750:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:00.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:00.796543 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 20 02:31:00.853746 kernel: audit: type=1334 audit(1768876260.839:10): prog-id=6 op=LOAD
Jan 20 02:31:00.839000 audit: BPF prog-id=6 op=LOAD
Jan 20 02:31:00.848643 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 02:31:01.013625 dracut-cmdline[355]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ffc050d3940163f278aec6799df208aabf8f27b8f3e958c63256c067960f0c44
Jan 20 02:31:01.202764 systemd-resolved[356]: Positive Trust Anchors:
Jan 20 02:31:01.203152 systemd-resolved[356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 02:31:01.203158 systemd-resolved[356]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 20 02:31:01.203201 systemd-resolved[356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 02:31:01.403108 systemd-resolved[356]: Defaulting to hostname 'linux'.
Jan 20 02:31:01.446597 kernel: audit: type=1130 audit(1768876261.422:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:01.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:01.419933 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 02:31:01.424762 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 02:31:01.742777 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 02:31:01.807132 kernel: iscsi: registered transport (tcp)
Jan 20 02:31:01.898032 kernel: iscsi: registered transport (qla4xxx)
Jan 20 02:31:01.898124 kernel: QLogic iSCSI HBA Driver
Jan 20 02:31:02.286161 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 20 02:31:02.476404 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 02:31:02.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:02.582367 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 02:31:03.069273 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 20 02:31:03.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:03.106724 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 20 02:31:03.164796 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 20 02:31:03.534406 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 02:31:03.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:03.598000 audit: BPF prog-id=7 op=LOAD
Jan 20 02:31:03.598000 audit: BPF prog-id=8 op=LOAD
Jan 20 02:31:03.615650 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 02:31:03.924571 systemd-udevd[591]: Using default interface naming scheme 'v257'.
Jan 20 02:31:04.057977 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 02:31:04.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:04.103296 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 20 02:31:04.306293 dracut-pre-trigger[661]: rd.md=0: removing MD RAID activation
Jan 20 02:31:04.452890 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 02:31:04.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:04.520000 audit: BPF prog-id=9 op=LOAD
Jan 20 02:31:04.529800 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 02:31:04.807308 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 02:31:04.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:04.879779 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 02:31:05.563960 systemd-networkd[713]: lo: Link UP
Jan 20 02:31:05.563970 systemd-networkd[713]: lo: Gained carrier
Jan 20 02:31:05.584594 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 02:31:05.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:05.636205 systemd[1]: Reached target network.target - Network.
Jan 20 02:31:05.747656 kernel: kauditd_printk_skb: 9 callbacks suppressed
Jan 20 02:31:05.751061 kernel: audit: type=1130 audit(1768876265.630:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:06.196618 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 02:31:06.316500 kernel: audit: type=1130 audit(1768876266.224:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:06.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:06.279329 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 20 02:31:07.200464 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 20 02:31:07.276441 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 20 02:31:07.435001 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 20 02:31:07.581440 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 02:31:07.650022 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 20 02:31:07.910273 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 02:31:08.043148 kernel: audit: type=1131 audit(1768876267.932:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:07.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:07.920715 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 02:31:07.934267 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 02:31:08.144448 kernel: cryptd: max_cpu_qlen set to 1000
Jan 20 02:31:08.213773 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 02:31:08.509252 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 20 02:31:08.528902 disk-uuid[773]: Primary Header is updated.
Jan 20 02:31:08.528902 disk-uuid[773]: Secondary Entries is updated.
Jan 20 02:31:08.528902 disk-uuid[773]: Secondary Header is updated.
Jan 20 02:31:09.302926 kernel: AES CTR mode by8 optimization enabled
Jan 20 02:31:09.325024 systemd-networkd[713]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 20 02:31:09.562901 kernel: audit: type=1130 audit(1768876269.496:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:09.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:09.325043 systemd-networkd[713]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 02:31:09.334108 systemd-networkd[713]: eth0: Link UP
Jan 20 02:31:09.337525 systemd-networkd[713]: eth0: Gained carrier
Jan 20 02:31:09.337550 systemd-networkd[713]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 20 02:31:09.395661 systemd-networkd[713]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 02:31:09.496007 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 02:31:09.875044 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 20 02:31:09.934057 kernel: audit: type=1130 audit(1768876269.879:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:09.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:09.880722 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 02:31:09.912362 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 02:31:09.952968 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 02:31:10.009083 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 20 02:31:10.068149 disk-uuid[775]: Warning: The kernel is still using the old partition table.
Jan 20 02:31:10.068149 disk-uuid[775]: The new table will be used at the next reboot or after you
Jan 20 02:31:10.068149 disk-uuid[775]: run partprobe(8) or kpartx(8)
Jan 20 02:31:10.068149 disk-uuid[775]: The operation has completed successfully.
Jan 20 02:31:10.095970 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 20 02:31:10.106300 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 20 02:31:10.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:10.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:10.166876 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 20 02:31:10.213974 kernel: audit: type=1130 audit(1768876270.141:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:10.214041 kernel: audit: type=1131 audit(1768876270.141:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:10.301498 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 02:31:10.398191 kernel: audit: type=1130 audit(1768876270.322:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:10.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:10.509914 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (863)
Jan 20 02:31:10.535073 kernel: BTRFS info (device vda6): first mount of filesystem ad08584f-77ce-45c9-9cd1-daa815089251
Jan 20 02:31:10.535208 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 02:31:10.612019 kernel: BTRFS info (device vda6): turning on async discard
Jan 20 02:31:10.612141 kernel: BTRFS info (device vda6): enabling free space tree
Jan 20 02:31:10.674071 systemd-networkd[713]: eth0: Gained IPv6LL
Jan 20 02:31:10.693654 kernel: BTRFS info (device vda6): last unmount of filesystem ad08584f-77ce-45c9-9cd1-daa815089251
Jan 20 02:31:10.737866 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 20 02:31:10.818798 kernel: audit: type=1130 audit(1768876270.750:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:10.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:10.772686 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 20 02:31:12.660508 ignition[882]: Ignition 2.24.0
Jan 20 02:31:12.660545 ignition[882]: Stage: fetch-offline
Jan 20 02:31:12.660632 ignition[882]: no configs at "/usr/lib/ignition/base.d"
Jan 20 02:31:12.660655 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:31:12.660954 ignition[882]: parsed url from cmdline: ""
Jan 20 02:31:12.660962 ignition[882]: no config URL provided
Jan 20 02:31:12.661209 ignition[882]: reading system config file "/usr/lib/ignition/user.ign"
Jan 20 02:31:12.661236 ignition[882]: no config at "/usr/lib/ignition/user.ign"
Jan 20 02:31:12.667003 ignition[882]: op(1): [started] loading QEMU firmware config module
Jan 20 02:31:12.667016 ignition[882]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 20 02:31:12.827722 ignition[882]: op(1): [finished] loading QEMU firmware config module
Jan 20 02:31:13.163175 ignition[882]: parsing config with SHA512: 196d8a30de4c8d2cf6858914d02e52d3ed703a836aef533e16ac93ec3d82f1af0ced1147a1e85fc52ac542aa167b8e57397607ee0457330f73b3312679604574
Jan 20 02:31:13.198589 unknown[882]: fetched base config from "system"
Jan 20 02:31:13.199226 ignition[882]: fetch-offline: fetch-offline passed
Jan 20 02:31:13.198629 unknown[882]: fetched user config from "qemu"
Jan 20 02:31:13.199348 ignition[882]: Ignition finished successfully
Jan 20 02:31:13.241066 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 02:31:13.328143 kernel: audit: type=1130 audit(1768876273.262:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:13.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:13.271152 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 20 02:31:13.280055 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 20 02:31:13.683366 ignition[892]: Ignition 2.24.0
Jan 20 02:31:13.683382 ignition[892]: Stage: kargs
Jan 20 02:31:13.702621 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Jan 20 02:31:13.702642 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:31:13.710781 ignition[892]: kargs: kargs passed
Jan 20 02:31:13.711017 ignition[892]: Ignition finished successfully
Jan 20 02:31:13.781055 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 20 02:31:13.792403 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 20 02:31:13.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:13.830091 kernel: audit: type=1130 audit(1768876273.787:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:14.107339 ignition[900]: Ignition 2.24.0
Jan 20 02:31:14.109157 ignition[900]: Stage: disks
Jan 20 02:31:14.109479 ignition[900]: no configs at "/usr/lib/ignition/base.d"
Jan 20 02:31:14.109498 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:31:14.116598 ignition[900]: disks: disks passed
Jan 20 02:31:14.116700 ignition[900]: Ignition finished successfully
Jan 20 02:31:14.139761 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 20 02:31:14.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:14.188999 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 20 02:31:14.264195 kernel: audit: type=1130 audit(1768876274.181:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:14.209038 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 20 02:31:14.218060 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 02:31:14.231727 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 02:31:14.241524 systemd[1]: Reached target basic.target - Basic System.
Jan 20 02:31:14.282065 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 20 02:31:14.657922 systemd-fsck[909]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Jan 20 02:31:14.694951 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 20 02:31:14.784862 kernel: audit: type=1130 audit(1768876274.718:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:14.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:14.744738 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 20 02:31:15.601232 kernel: EXT4-fs (vda9): mounted filesystem 452c2147-bc43-4f48-ad5f-dc139dd95c0b r/w with ordered data mode. Quota mode: none.
Jan 20 02:31:15.609232 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 20 02:31:15.657556 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 20 02:31:15.689036 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 02:31:15.759422 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 20 02:31:15.822593 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (918)
Jan 20 02:31:15.779263 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 20 02:31:15.869343 kernel: BTRFS info (device vda6): first mount of filesystem ad08584f-77ce-45c9-9cd1-daa815089251
Jan 20 02:31:15.869425 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 02:31:15.779429 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 20 02:31:15.779533 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 02:31:15.897058 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 20 02:31:15.979017 kernel: BTRFS info (device vda6): turning on async discard
Jan 20 02:31:15.979058 kernel: BTRFS info (device vda6): enabling free space tree
Jan 20 02:31:15.985412 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 20 02:31:16.038967 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 02:31:18.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:18.376582 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 20 02:31:18.457287 kernel: audit: type=1130 audit(1768876278.407:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:18.426874 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 20 02:31:18.498952 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 20 02:31:18.633915 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 20 02:31:18.665103 kernel: BTRFS info (device vda6): last unmount of filesystem ad08584f-77ce-45c9-9cd1-daa815089251
Jan 20 02:31:19.299956 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 20 02:31:19.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:19.399217 ignition[1014]: INFO : Ignition 2.24.0
Jan 20 02:31:19.399217 ignition[1014]: INFO : Stage: mount
Jan 20 02:31:19.399217 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 02:31:19.399217 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:31:19.527465 kernel: audit: type=1130 audit(1768876279.356:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:19.527543 kernel: audit: type=1130 audit(1768876279.426:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:19.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:19.420444 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 20 02:31:19.647254 ignition[1014]: INFO : mount: mount passed
Jan 20 02:31:19.647254 ignition[1014]: INFO : Ignition finished successfully
Jan 20 02:31:19.448961 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 20 02:31:19.753316 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 02:31:19.991979 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1027)
Jan 20 02:31:20.013044 kernel: BTRFS info (device vda6): first mount of filesystem ad08584f-77ce-45c9-9cd1-daa815089251
Jan 20 02:31:20.013138 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 02:31:20.097794 kernel: BTRFS info (device vda6): turning on async discard
Jan 20 02:31:20.097928 kernel: BTRFS info (device vda6): enabling free space tree
Jan 20 02:31:20.132213 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 02:31:20.675503 ignition[1045]: INFO : Ignition 2.24.0
Jan 20 02:31:20.686894 ignition[1045]: INFO : Stage: files
Jan 20 02:31:20.686894 ignition[1045]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 02:31:20.686894 ignition[1045]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:31:20.686894 ignition[1045]: DEBUG : files: compiled without relabeling support, skipping
Jan 20 02:31:20.718571 ignition[1045]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 20 02:31:20.718571 ignition[1045]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 20 02:31:20.731387 ignition[1045]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 20 02:31:20.731387 ignition[1045]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 20 02:31:20.743712 ignition[1045]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 20 02:31:20.734904 unknown[1045]: wrote ssh authorized keys file for user: core
Jan 20 02:31:20.756074 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 20 02:31:20.756074 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 20 02:31:21.120430 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 20 02:31:22.083180 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 20 02:31:22.083180 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 20 02:31:22.083180 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 20 02:31:22.083180 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 02:31:22.233273 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 02:31:22.233273 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 02:31:22.233273 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 02:31:22.233273 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 02:31:22.233273 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 02:31:22.233273 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 02:31:22.233273 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 02:31:22.233273 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 20 02:31:22.233273 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 20 02:31:22.233273 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 20 02:31:22.233273 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jan 20 02:31:22.797292 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 20 02:31:32.482605 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 20 02:31:32.482605 ignition[1045]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 20 02:31:32.577519 ignition[1045]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 02:31:32.671777 ignition[1045]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 02:31:32.671777 ignition[1045]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 20 02:31:32.671777 ignition[1045]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 20 02:31:32.671777 ignition[1045]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 02:31:32.671777 ignition[1045]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 02:31:32.671777 ignition[1045]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 20 02:31:32.671777 ignition[1045]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 20 02:31:33.429956 ignition[1045]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 02:31:33.739566 ignition[1045]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 02:31:33.868047 kernel: audit: type=1130 audit(1768876293.783:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:33.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:33.868246 ignition[1045]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 20 02:31:33.868246 ignition[1045]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 20 02:31:33.868246 ignition[1045]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 20 02:31:33.868246 ignition[1045]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 02:31:33.868246 ignition[1045]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 02:31:33.868246 ignition[1045]: INFO : files: files passed
Jan 20 02:31:33.868246 ignition[1045]: INFO : Ignition finished successfully
Jan 20 02:31:33.777103 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 20 02:31:33.813106 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 20 02:31:34.125356 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 20 02:31:34.492652 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 20 02:31:34.579187 kernel: audit: type=1130 audit(1768876294.513:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:34.579264 kernel: audit: type=1131 audit(1768876294.513:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:34.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:34.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:34.493043 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 20 02:31:34.619573 initrd-setup-root-after-ignition[1075]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 20 02:31:34.682742 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 02:31:34.682742 initrd-setup-root-after-ignition[1077]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 02:31:34.786651 initrd-setup-root-after-ignition[1081]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 02:31:34.835499 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 02:31:34.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:35.055377 kernel: audit: type=1130 audit(1768876294.997:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:35.052621 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 20 02:31:35.090159 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 20 02:31:35.658122 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 20 02:31:35.658439 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 20 02:31:35.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:35.759500 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 20 02:31:35.889158 kernel: audit: type=1130 audit(1768876295.756:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:35.889206 kernel: audit: type=1131 audit(1768876295.756:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:35.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:35.850604 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 20 02:31:35.947001 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 20 02:31:35.986105 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 20 02:31:36.451335 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 02:31:36.536282 kernel: audit: type=1130 audit(1768876296.470:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:36.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:36.521524 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 20 02:31:36.730398 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 20 02:31:36.733246 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 20 02:31:36.816168 kernel: audit: type=1131 audit(1768876296.747:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:36.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:36.747319 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 02:31:36.747524 systemd[1]: Stopped target timers.target - Timer Units.
Jan 20 02:31:36.747738 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 20 02:31:36.748084 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 02:31:36.859663 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 20 02:31:36.877089 systemd[1]: Stopped target basic.target - Basic System.
Jan 20 02:31:36.882144 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 20 02:31:36.895119 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 02:31:36.911688 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 20 02:31:36.916067 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 20 02:31:36.944072 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 20 02:31:36.957249 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 02:31:37.135516 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 20 02:31:37.157258 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 20 02:31:37.208669 systemd[1]: Stopped target swap.target - Swaps.
Jan 20 02:31:37.238204 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 20 02:31:37.284232 kernel: audit: type=1131 audit(1768876297.248:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:37.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:37.238570 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 02:31:37.249587 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 20 02:31:37.277285 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 02:31:37.284110 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 20 02:31:37.286754 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 02:31:37.322149 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 20 02:31:37.372382 kernel: audit: type=1131 audit(1768876297.334:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:37.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:37.322397 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 20 02:31:37.336287 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 20 02:31:37.345521 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 02:31:37.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:37.453472 systemd[1]: Stopped target paths.target - Path Units.
Jan 20 02:31:37.457278 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 20 02:31:37.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:37.457597 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 02:31:37.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:37.466674 systemd[1]: Stopped target slices.target - Slice Units.
Jan 20 02:31:37.477318 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 20 02:31:37.494355 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 20 02:31:37.494505 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 02:31:37.503231 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 20 02:31:37.503390 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 02:31:37.532268 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Jan 20 02:31:37.532425 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Jan 20 02:31:37.544964 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 20 02:31:37.545201 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 02:31:37.574871 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 20 02:31:37.575126 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 20 02:31:37.610045 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 20 02:31:37.849172 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 20 02:31:37.983086 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 20 02:31:37.985419 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 02:31:38.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:38.089641 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 20 02:31:38.090530 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 02:31:38.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:38.184496 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 20 02:31:38.213305 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 02:31:38.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:38.366664 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 20 02:31:38.452022 ignition[1101]: INFO : Ignition 2.24.0
Jan 20 02:31:38.452022 ignition[1101]: INFO : Stage: umount
Jan 20 02:31:38.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:38.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:38.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:38.611581 ignition[1101]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 02:31:38.611581 ignition[1101]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:31:38.611581 ignition[1101]: INFO : umount: umount passed
Jan 20 02:31:38.611581 ignition[1101]: INFO : Ignition finished successfully
Jan 20 02:31:38.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:38.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:38.455490 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 20 02:31:38.455699 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 20 02:31:38.589908 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 20 02:31:38.590139 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 20 02:31:38.601640 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 20 02:31:38.949522 kernel: kauditd_printk_skb: 11 callbacks suppressed
Jan 20 02:31:38.949574 kernel: audit: type=1131 audit(1768876298.834:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:38.949618 kernel: audit: type=1131 audit(1768876298.867:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:38.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:38.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:38.602135 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 20 02:31:38.674101 systemd[1]: Stopped target network.target - Network.
Jan 20 02:31:38.675524 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 20 02:31:38.675665 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 20 02:31:38.683071 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 20 02:31:38.683191 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 20 02:31:38.835374 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 20 02:31:38.835533 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 20 02:31:38.868130 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 20 02:31:38.868282 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 20 02:31:39.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:39.084067 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 20 02:31:39.150427 kernel: audit: type=1131 audit(1768876299.073:60): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:39.087032 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 20 02:31:39.199268 kernel: audit: type=1131 audit(1768876299.154:61): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:39.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:39.158615 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 20 02:31:39.252161 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 20 02:31:39.305674 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 20 02:31:39.311451 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 20 02:31:39.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:39.399366 kernel: audit: type=1131 audit(1768876299.380:62): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:39.412974 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 20 02:31:39.414483 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 20 02:31:39.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:39.499399 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 20 02:31:39.576217 kernel: audit: type=1131 audit(1768876299.471:63): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:39.576274 kernel: audit: type=1334 audit(1768876299.495:64): prog-id=9 op=UNLOAD
Jan 20 02:31:39.576296 kernel: audit: type=1334 audit(1768876299.503:65): prog-id=6 op=UNLOAD
Jan 20 02:31:39.495000 audit: BPF prog-id=9 op=UNLOAD
Jan 20 02:31:39.503000 audit: BPF prog-id=6 op=UNLOAD
Jan 20 02:31:39.562474 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 20 02:31:39.562608 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 02:31:39.658301 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 20 02:31:39.676233 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 20 02:31:39.769198 kernel: audit: type=1131 audit(1768876299.724:66): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:39.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:39.676389 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 02:31:39.768108 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 02:31:39.768254 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 20 02:31:39.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:39.894623 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 02:31:40.051144 kernel: audit: type=1131 audit(1768876299.881:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:39.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:39.894810 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 20 02:31:40.040555 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 02:31:40.146275 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 20 02:31:40.149879 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 02:31:40.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:40.210129 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 02:31:40.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:40.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:40.210315 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 02:31:40.227348 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 02:31:40.227444 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 02:31:40.235350 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 02:31:40.235469 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 02:31:40.264767 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 02:31:40.265428 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 02:31:40.387681 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 02:31:40.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:40.387967 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 02:31:40.461599 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Jan 20 02:31:40.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:40.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:40.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:40.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:40.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:40.544937 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 02:31:40.545116 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 02:31:40.545346 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 02:31:40.545441 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 02:31:40.545557 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 20 02:31:40.545624 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 02:31:40.545718 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 02:31:40.545867 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 20 02:31:40.545964 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 02:31:40.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:40.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:40.546030 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 02:31:40.857648 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 02:31:40.859242 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 02:31:41.045445 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 02:31:41.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:41.051135 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 02:31:41.083381 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 02:31:41.195018 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 02:31:41.337645 systemd[1]: Switching root. Jan 20 02:31:41.680590 systemd-journald[320]: Journal stopped Jan 20 02:31:52.957794 systemd-journald[320]: Received SIGTERM from PID 1 (systemd). 
Jan 20 02:31:52.957980 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 02:31:52.958009 kernel: SELinux: policy capability open_perms=1 Jan 20 02:31:52.958027 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 02:31:52.958050 kernel: SELinux: policy capability always_check_network=0 Jan 20 02:31:52.958073 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 02:31:52.958090 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 02:31:52.958110 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 02:31:52.958133 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 02:31:52.958149 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 02:31:52.958167 systemd[1]: Successfully loaded SELinux policy in 348.757ms. Jan 20 02:31:52.958206 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 74.188ms. Jan 20 02:31:52.958226 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 02:31:52.958244 systemd[1]: Detected virtualization kvm. Jan 20 02:31:52.958262 systemd[1]: Detected architecture x86-64. Jan 20 02:31:52.958279 systemd[1]: Detected first boot. Jan 20 02:31:52.958297 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 02:31:52.958321 zram_generator::config[1145]: No configuration found. 
Jan 20 02:31:52.958373 kernel: Guest personality initialized and is inactive Jan 20 02:31:52.958392 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 20 02:31:52.958408 kernel: Initialized host personality Jan 20 02:31:52.958424 kernel: NET: Registered PF_VSOCK protocol family Jan 20 02:31:52.958441 systemd[1]: Populated /etc with preset unit settings. Jan 20 02:31:52.958460 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 20 02:31:52.958482 kernel: audit: type=1334 audit(1768876308.421:87): prog-id=12 op=LOAD Jan 20 02:31:52.958551 kernel: audit: type=1334 audit(1768876308.424:88): prog-id=3 op=UNLOAD Jan 20 02:31:52.958574 kernel: audit: type=1334 audit(1768876308.432:89): prog-id=13 op=LOAD Jan 20 02:31:52.958592 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 02:31:52.958614 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 02:31:52.958632 kernel: audit: type=1334 audit(1768876308.432:90): prog-id=14 op=LOAD Jan 20 02:31:52.968869 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 02:31:52.968953 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 02:31:52.969023 kernel: audit: type=1334 audit(1768876308.432:91): prog-id=4 op=UNLOAD Jan 20 02:31:52.969043 kernel: audit: type=1334 audit(1768876308.432:92): prog-id=5 op=UNLOAD Jan 20 02:31:52.969061 kernel: audit: type=1131 audit(1768876308.451:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:52.969078 kernel: audit: type=1130 audit(1768876308.531:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:31:52.969096 kernel: audit: type=1131 audit(1768876308.531:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:52.969113 kernel: audit: type=1334 audit(1768876308.772:96): prog-id=12 op=UNLOAD Jan 20 02:31:52.969156 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 02:31:52.969175 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 02:31:52.969195 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 02:31:52.969212 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 02:31:52.969246 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 02:31:52.969287 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 02:31:52.969306 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 02:31:52.969323 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 02:31:52.969341 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 02:31:52.969359 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 02:31:52.969378 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 02:31:52.969396 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 02:31:52.969441 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 02:31:52.969460 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Jan 20 02:31:52.969481 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 02:31:52.969523 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 02:31:52.969542 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 02:31:52.969566 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 02:31:52.969584 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 02:31:52.969626 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 02:31:52.969671 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 02:31:52.969690 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 02:31:52.969708 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 20 02:31:52.969725 systemd[1]: Reached target slices.target - Slice Units. Jan 20 02:31:52.969742 systemd[1]: Reached target swap.target - Swaps. Jan 20 02:31:52.969760 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 02:31:52.969778 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 02:31:52.969867 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 02:31:52.979213 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 02:31:52.979295 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 20 02:31:52.979314 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 02:31:52.979333 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 20 02:31:52.979351 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 20 02:31:52.979374 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jan 20 02:31:52.979433 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 02:31:52.979453 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 02:31:52.979474 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 02:31:52.979493 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 02:31:52.979512 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 02:31:52.979529 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 02:31:52.979551 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 02:31:52.979613 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 02:31:52.979634 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 02:31:52.979655 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 02:31:52.979672 systemd[1]: Reached target machines.target - Containers. Jan 20 02:31:52.979691 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 02:31:52.979709 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 02:31:52.979753 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 02:31:52.979772 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 02:31:52.979789 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 02:31:52.979807 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 02:31:52.979867 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 20 02:31:52.980955 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 02:31:52.980980 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 02:31:52.981034 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 02:31:52.981054 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 02:31:52.981072 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 02:31:52.981090 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 02:31:52.981107 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 02:31:52.981126 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 02:31:52.981144 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 02:31:52.981186 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 02:31:52.981205 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 02:31:52.981222 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 02:31:52.981241 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 02:31:52.981296 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 02:31:52.981315 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 02:31:52.981333 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 02:31:52.984505 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jan 20 02:31:52.984654 systemd-journald[1220]: Collecting audit messages is enabled. Jan 20 02:31:52.984693 kernel: fuse: init (API version 7.41) Jan 20 02:31:52.984749 systemd-journald[1220]: Journal started Jan 20 02:31:52.984782 systemd-journald[1220]: Runtime Journal (/run/log/journal/34c54a5ac8524ddda3629c3df9a67f2e) is 6M, max 48.2M, 42.1M free. Jan 20 02:31:50.760000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 20 02:31:52.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:52.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:52.355000 audit: BPF prog-id=14 op=UNLOAD Jan 20 02:31:52.355000 audit: BPF prog-id=13 op=UNLOAD Jan 20 02:31:52.384000 audit: BPF prog-id=15 op=LOAD Jan 20 02:31:52.424000 audit: BPF prog-id=16 op=LOAD Jan 20 02:31:52.481000 audit: BPF prog-id=17 op=LOAD Jan 20 02:31:52.950000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 02:31:52.950000 audit[1220]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffe909a5c00 a2=4000 a3=0 items=0 ppid=1 pid=1220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:31:52.950000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 20 02:31:48.368591 systemd[1]: Queued start job for default target multi-user.target. 
Jan 20 02:31:53.076026 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 02:31:48.434660 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 02:31:48.445511 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 02:31:48.452641 systemd[1]: systemd-journald.service: Consumed 2.470s CPU time. Jan 20 02:31:53.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:53.261421 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 02:31:53.294266 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 02:31:53.324448 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 02:31:53.351248 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 02:31:53.381359 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 02:31:53.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:53.422415 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 02:31:53.483645 kernel: kauditd_printk_skb: 13 callbacks suppressed Jan 20 02:31:53.485300 kernel: audit: type=1130 audit(1768876313.436:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:53.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:31:53.471533 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 02:31:53.472202 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 02:31:53.531941 kernel: ACPI: bus type drm_connector registered Jan 20 02:31:53.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:53.542760 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 02:31:53.547910 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 02:31:53.561665 kernel: audit: type=1130 audit(1768876313.540:109): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:53.561773 kernel: audit: type=1131 audit(1768876313.540:110): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:53.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:53.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:53.625908 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 02:31:53.626298 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 20 02:31:53.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:53.685275 kernel: audit: type=1130 audit(1768876313.623:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:53.685391 kernel: audit: type=1131 audit(1768876313.623:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:53.706967 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 02:31:53.707327 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 02:31:53.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:53.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:53.739017 kernel: audit: type=1130 audit(1768876313.704:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:53.739116 kernel: audit: type=1131 audit(1768876313.704:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:31:53.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:53.772318 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 02:31:53.772674 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 02:31:53.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:53.816010 kernel: audit: type=1130 audit(1768876313.764:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:53.816273 kernel: audit: type=1131 audit(1768876313.765:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:53.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:53.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:53.891678 kernel: audit: type=1130 audit(1768876313.847:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:31:53.992210 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 02:31:53.998501 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 02:31:54.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:54.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:54.024013 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 02:31:54.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:54.034471 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 02:31:54.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:54.109735 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 02:31:54.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:54.154915 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Jan 20 02:31:54.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:54.263626 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 02:31:54.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:54.302611 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 02:31:54.348190 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 20 02:31:54.379124 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 02:31:54.415303 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 02:31:54.458351 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 02:31:54.458455 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 02:31:54.480615 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 02:31:54.508090 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 02:31:54.508350 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 02:31:54.541755 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 02:31:54.594344 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 20 02:31:54.632155 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 02:31:54.712055 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 02:31:54.740510 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 02:31:54.763032 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 02:31:54.792586 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 02:31:54.798483 systemd-journald[1220]: Time spent on flushing to /var/log/journal/34c54a5ac8524ddda3629c3df9a67f2e is 565.725ms for 1146 entries. Jan 20 02:31:54.798483 systemd-journald[1220]: System Journal (/var/log/journal/34c54a5ac8524ddda3629c3df9a67f2e) is 8M, max 163.5M, 155.5M free. Jan 20 02:31:55.959658 systemd-journald[1220]: Received client request to flush runtime journal. Jan 20 02:31:55.959758 kernel: loop1: detected capacity change from 0 to 111560 Jan 20 02:31:55.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:54.839243 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 02:31:55.147242 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 02:31:55.463223 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 02:31:55.571977 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 02:31:55.790811 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Jan 20 02:31:55.915397 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 20 02:31:55.995134 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 20 02:31:56.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:56.470062 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 20 02:31:56.489984 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 20 02:31:56.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:56.562992 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Jan 20 02:31:56.563056 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Jan 20 02:31:56.617444 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 02:31:56.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:56.702610 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 02:31:57.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:57.077975 kernel: loop2: detected capacity change from 0 to 50784
Jan 20 02:31:57.224723 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 20 02:31:57.501927 kernel: loop3: detected capacity change from 0 to 224512
Jan 20 02:31:57.850999 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 20 02:31:57.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:57.886147 kernel: loop4: detected capacity change from 0 to 111560
Jan 20 02:31:57.900000 audit: BPF prog-id=18 op=LOAD
Jan 20 02:31:57.901000 audit: BPF prog-id=19 op=LOAD
Jan 20 02:31:57.903000 audit: BPF prog-id=20 op=LOAD
Jan 20 02:31:57.910428 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 20 02:31:57.933000 audit: BPF prog-id=21 op=LOAD
Jan 20 02:31:57.945427 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 02:31:57.983358 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 02:31:58.030973 kernel: loop5: detected capacity change from 0 to 50784
Jan 20 02:31:58.036000 audit: BPF prog-id=22 op=LOAD
Jan 20 02:31:58.036000 audit: BPF prog-id=23 op=LOAD
Jan 20 02:31:58.037000 audit: BPF prog-id=24 op=LOAD
Jan 20 02:31:58.100000 audit: BPF prog-id=25 op=LOAD
Jan 20 02:31:58.062349 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 20 02:31:58.146000 audit: BPF prog-id=26 op=LOAD
Jan 20 02:31:58.150000 audit: BPF prog-id=27 op=LOAD
Jan 20 02:31:58.156332 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 20 02:31:58.214106 kernel: loop6: detected capacity change from 0 to 224512
Jan 20 02:31:58.462937 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Jan 20 02:31:58.462979 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Jan 20 02:31:58.492696 (sd-merge)[1288]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Jan 20 02:31:58.558901 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 02:31:58.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:58.608034 kernel: kauditd_printk_skb: 24 callbacks suppressed
Jan 20 02:31:58.608154 kernel: audit: type=1130 audit(1768876318.594:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:31:58.614328 (sd-merge)[1288]: Merged extensions into '/usr'.
Jan 20 02:31:58.847680 systemd[1]: Reload requested from client PID 1266 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 20 02:31:58.847703 systemd[1]: Reloading...
Jan 20 02:31:59.075465 systemd-nsresourced[1293]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 20 02:32:00.000901 zram_generator::config[1333]: No configuration found.
Jan 20 02:32:00.684385 systemd-oomd[1290]: No swap; memory pressure usage will be degraded
Jan 20 02:32:01.336462 systemd-resolved[1291]: Positive Trust Anchors:
Jan 20 02:32:01.337395 systemd-resolved[1291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 02:32:01.337538 systemd-resolved[1291]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 20 02:32:01.338479 systemd-resolved[1291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 02:32:01.370706 systemd-resolved[1291]: Defaulting to hostname 'linux'.
Jan 20 02:32:02.229700 systemd[1]: Reloading finished in 3371 ms.
Jan 20 02:32:02.338303 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 20 02:32:02.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:32:02.364564 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 20 02:32:02.417007 kernel: audit: type=1130 audit(1768876322.359:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:32:02.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:32:02.429087 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 20 02:32:02.459041 kernel: audit: type=1130 audit(1768876322.414:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:32:02.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:32:02.752431 kernel: audit: type=1130 audit(1768876322.711:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:32:02.763106 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 02:32:02.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:32:02.790944 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 20 02:32:02.822308 kernel: audit: type=1130 audit(1768876322.781:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:32:02.858088 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 20 02:32:02.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:32:02.969457 kernel: audit: type=1130 audit(1768876322.853:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:32:02.969569 kernel: audit: type=1130 audit(1768876322.893:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:32:02.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:32:02.963225 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 02:32:03.070460 systemd[1]: Starting ensure-sysext.service...
Jan 20 02:32:03.131525 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 02:32:03.169000 audit: BPF prog-id=8 op=UNLOAD
Jan 20 02:32:03.227141 kernel: audit: type=1334 audit(1768876323.169:149): prog-id=8 op=UNLOAD
Jan 20 02:32:03.234156 kernel: audit: type=1334 audit(1768876323.185:150): prog-id=7 op=UNLOAD
Jan 20 02:32:03.185000 audit: BPF prog-id=7 op=UNLOAD
Jan 20 02:32:03.209747 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 02:32:03.187000 audit: BPF prog-id=28 op=LOAD
Jan 20 02:32:03.267170 kernel: audit: type=1334 audit(1768876323.187:151): prog-id=28 op=LOAD
Jan 20 02:32:03.187000 audit: BPF prog-id=29 op=LOAD
Jan 20 02:32:03.294000 audit: BPF prog-id=30 op=LOAD
Jan 20 02:32:03.306000 audit: BPF prog-id=15 op=UNLOAD
Jan 20 02:32:03.306000 audit: BPF prog-id=31 op=LOAD
Jan 20 02:32:03.306000 audit: BPF prog-id=32 op=LOAD
Jan 20 02:32:03.306000 audit: BPF prog-id=16 op=UNLOAD
Jan 20 02:32:03.306000 audit: BPF prog-id=17 op=UNLOAD
Jan 20 02:32:03.310000 audit: BPF prog-id=33 op=LOAD
Jan 20 02:32:03.310000 audit: BPF prog-id=18 op=UNLOAD
Jan 20 02:32:03.310000 audit: BPF prog-id=34 op=LOAD
Jan 20 02:32:03.310000 audit: BPF prog-id=35 op=LOAD
Jan 20 02:32:03.310000 audit: BPF prog-id=19 op=UNLOAD
Jan 20 02:32:03.310000 audit: BPF prog-id=20 op=UNLOAD
Jan 20 02:32:03.365000 audit: BPF prog-id=36 op=LOAD
Jan 20 02:32:03.365000 audit: BPF prog-id=25 op=UNLOAD
Jan 20 02:32:03.365000 audit: BPF prog-id=37 op=LOAD
Jan 20 02:32:03.372000 audit: BPF prog-id=38 op=LOAD
Jan 20 02:32:03.372000 audit: BPF prog-id=26 op=UNLOAD
Jan 20 02:32:03.372000 audit: BPF prog-id=27 op=UNLOAD
Jan 20 02:32:03.372000 audit: BPF prog-id=39 op=LOAD
Jan 20 02:32:03.372000 audit: BPF prog-id=21 op=UNLOAD
Jan 20 02:32:03.404000 audit: BPF prog-id=40 op=LOAD
Jan 20 02:32:03.404000 audit: BPF prog-id=22 op=UNLOAD
Jan 20 02:32:03.408000 audit: BPF prog-id=41 op=LOAD
Jan 20 02:32:03.408000 audit: BPF prog-id=42 op=LOAD
Jan 20 02:32:03.408000 audit: BPF prog-id=23 op=UNLOAD
Jan 20 02:32:03.408000 audit: BPF prog-id=24 op=UNLOAD
Jan 20 02:32:03.536571 systemd[1]: Reload requested from client PID 1375 ('systemctl') (unit ensure-sysext.service)...
Jan 20 02:32:03.536594 systemd[1]: Reloading...
Jan 20 02:32:03.592478 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 20 02:32:03.592548 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 20 02:32:03.596316 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 20 02:32:03.613758 systemd-tmpfiles[1376]: ACLs are not supported, ignoring.
Jan 20 02:32:03.618810 systemd-tmpfiles[1376]: ACLs are not supported, ignoring.
Jan 20 02:32:03.726654 systemd-tmpfiles[1376]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 02:32:03.726698 systemd-tmpfiles[1376]: Skipping /boot
Jan 20 02:32:03.783672 systemd-udevd[1377]: Using default interface naming scheme 'v257'.
Jan 20 02:32:03.918239 systemd-tmpfiles[1376]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 02:32:03.918262 systemd-tmpfiles[1376]: Skipping /boot
Jan 20 02:32:05.088500 zram_generator::config[1429]: No configuration found.
Jan 20 02:32:06.476799 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 20 02:32:06.476947 kernel: mousedev: PS/2 mouse device common for all mice
Jan 20 02:32:06.514556 kernel: ACPI: button: Power Button [PWRF]
Jan 20 02:32:07.136176 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 20 02:32:07.137661 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 20 02:32:07.845083 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 02:32:07.867805 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 20 02:32:07.868907 systemd[1]: Reloading finished in 4328 ms.
Jan 20 02:32:08.017373 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 02:32:08.130903 kernel: kauditd_printk_skb: 27 callbacks suppressed
Jan 20 02:32:08.131072 kernel: audit: type=1130 audit(1768876328.048:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:32:08.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:32:08.123000 audit: BPF prog-id=43 op=LOAD
Jan 20 02:32:08.173176 kernel: audit: type=1334 audit(1768876328.123:180): prog-id=43 op=LOAD
Jan 20 02:32:08.123000 audit: BPF prog-id=33 op=UNLOAD
Jan 20 02:32:08.123000 audit: BPF prog-id=44 op=LOAD
Jan 20 02:32:08.217128 kernel: audit: type=1334 audit(1768876328.123:181): prog-id=33 op=UNLOAD
Jan 20 02:32:08.217246 kernel: audit: type=1334 audit(1768876328.123:182): prog-id=44 op=LOAD
Jan 20 02:32:08.217271 kernel: audit: type=1334 audit(1768876328.123:183): prog-id=45 op=LOAD
Jan 20 02:32:08.220282 kernel: audit: type=1334 audit(1768876328.123:184): prog-id=34 op=UNLOAD
Jan 20 02:32:08.220309 kernel: audit: type=1334 audit(1768876328.123:185): prog-id=35 op=UNLOAD
Jan 20 02:32:08.220347 kernel: audit: type=1334 audit(1768876328.146:186): prog-id=46 op=LOAD
Jan 20 02:32:08.220372 kernel: audit: type=1334 audit(1768876328.146:187): prog-id=39 op=UNLOAD
Jan 20 02:32:08.220406 kernel: audit: type=1334 audit(1768876328.146:188): prog-id=47 op=LOAD
Jan 20 02:32:08.123000 audit: BPF prog-id=45 op=LOAD
Jan 20 02:32:08.123000 audit: BPF prog-id=34 op=UNLOAD
Jan 20 02:32:08.123000 audit: BPF prog-id=35 op=UNLOAD
Jan 20 02:32:08.146000 audit: BPF prog-id=46 op=LOAD
Jan 20 02:32:08.146000 audit: BPF prog-id=39 op=UNLOAD
Jan 20 02:32:08.146000 audit: BPF prog-id=47 op=LOAD
Jan 20 02:32:08.146000 audit: BPF prog-id=48 op=LOAD
Jan 20 02:32:08.146000 audit: BPF prog-id=28 op=UNLOAD
Jan 20 02:32:08.146000 audit: BPF prog-id=29 op=UNLOAD
Jan 20 02:32:08.169000 audit: BPF prog-id=49 op=LOAD
Jan 20 02:32:08.169000 audit: BPF prog-id=30 op=UNLOAD
Jan 20 02:32:08.169000 audit: BPF prog-id=50 op=LOAD
Jan 20 02:32:08.169000 audit: BPF prog-id=51 op=LOAD
Jan 20 02:32:08.169000 audit: BPF prog-id=31 op=UNLOAD
Jan 20 02:32:08.169000 audit: BPF prog-id=32 op=UNLOAD
Jan 20 02:32:08.192000 audit: BPF prog-id=52 op=LOAD
Jan 20 02:32:08.192000 audit: BPF prog-id=36 op=UNLOAD
Jan 20 02:32:08.192000 audit: BPF prog-id=53 op=LOAD
Jan 20 02:32:08.192000 audit: BPF prog-id=54 op=LOAD
Jan 20 02:32:08.192000 audit: BPF prog-id=37 op=UNLOAD
Jan 20 02:32:08.192000 audit: BPF prog-id=38 op=UNLOAD
Jan 20 02:32:08.285000 audit: BPF prog-id=55 op=LOAD
Jan 20 02:32:08.285000 audit: BPF prog-id=40 op=UNLOAD
Jan 20 02:32:08.285000 audit: BPF prog-id=56 op=LOAD
Jan 20 02:32:08.285000 audit: BPF prog-id=57 op=LOAD
Jan 20 02:32:08.285000 audit: BPF prog-id=41 op=UNLOAD
Jan 20 02:32:08.285000 audit: BPF prog-id=42 op=UNLOAD
Jan 20 02:32:08.447454 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 02:32:08.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:32:08.928877 systemd[1]: Finished ensure-sysext.service.
Jan 20 02:32:08.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:32:09.094343 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:32:09.117461 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 20 02:32:09.160121 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 20 02:32:09.180715 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 02:32:09.219700 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 02:32:09.283457 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 02:32:09.375367 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 02:32:09.442405 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 02:32:09.443802 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 02:32:09.444083 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 20 02:32:09.508401 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 20 02:32:09.547498 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 20 02:32:09.592673 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 02:32:09.621055 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 20 02:32:09.681000 audit: BPF prog-id=58 op=LOAD
Jan 20 02:32:09.687653 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 02:32:09.722000 audit: BPF prog-id=59 op=LOAD
Jan 20 02:32:09.726256 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 20 02:32:09.791487 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 20 02:32:09.850309 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 02:32:09.865089 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:32:09.879316 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 02:32:09.896449 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 02:32:09.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:32:09.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:32:09.909483 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 02:32:09.913338 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 02:32:09.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:32:09.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:32:09.936673 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 02:32:09.937389 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 02:32:09.947000 audit[1519]: SYSTEM_BOOT pid=1519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jan 20 02:32:09.958000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jan 20 02:32:09.958000 audit[1524]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff119b5990 a2=420 a3=0 items=0 ppid=1491 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:32:09.958000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 20 02:32:09.958807 augenrules[1524]: No rules
Jan 20 02:32:09.975228 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 02:32:09.975699 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 02:32:09.999154 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 02:32:09.999702 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 20 02:32:10.021734 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 20 02:32:10.104253 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 02:32:10.104570 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 02:32:10.162423 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 20 02:32:10.249512 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 20 02:32:10.801679 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 20 02:32:11.258644 systemd-networkd[1513]: lo: Link UP
Jan 20 02:32:11.258660 systemd-networkd[1513]: lo: Gained carrier
Jan 20 02:32:11.285151 systemd-networkd[1513]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 20 02:32:11.288288 systemd-networkd[1513]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 02:32:11.318990 systemd-networkd[1513]: eth0: Link UP
Jan 20 02:32:11.325901 systemd-networkd[1513]: eth0: Gained carrier
Jan 20 02:32:11.332186 systemd-networkd[1513]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 20 02:32:11.605803 systemd-networkd[1513]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 02:32:11.608291 systemd-timesyncd[1516]: Network configuration changed, trying to establish connection.
Jan 20 02:32:11.615663 systemd-timesyncd[1516]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 20 02:32:11.615776 systemd-timesyncd[1516]: Initial clock synchronization to Tue 2026-01-20 02:32:11.560143 UTC.
Jan 20 02:32:12.035054 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 20 02:32:12.088654 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 02:32:12.150379 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 02:32:12.160426 systemd[1]: Reached target network.target - Network.
Jan 20 02:32:12.165005 systemd[1]: Reached target time-set.target - System Time Set.
Jan 20 02:32:12.175087 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 20 02:32:12.212780 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 20 02:32:12.239193 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 20 02:32:12.558523 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 20 02:32:13.329090 systemd-networkd[1513]: eth0: Gained IPv6LL
Jan 20 02:32:13.354776 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 20 02:32:13.404247 systemd[1]: Reached target network-online.target - Network is Online.
Jan 20 02:32:14.323051 ldconfig[1503]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 20 02:32:14.357052 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 20 02:32:14.397571 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 20 02:32:14.544125 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 20 02:32:14.548968 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 02:32:14.568648 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 20 02:32:14.589772 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 20 02:32:14.609173 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 20 02:32:14.633342 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 20 02:32:14.650262 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 20 02:32:14.667974 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Jan 20 02:32:14.697985 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Jan 20 02:32:14.717808 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 20 02:32:14.726534 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 20 02:32:14.726603 systemd[1]: Reached target paths.target - Path Units.
Jan 20 02:32:14.738257 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 02:32:14.773698 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 20 02:32:14.818016 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 20 02:32:14.862765 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 20 02:32:14.893637 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 20 02:32:14.914979 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 20 02:32:14.973661 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 20 02:32:15.002124 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 20 02:32:15.028114 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 20 02:32:15.040146 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 02:32:15.048999 systemd[1]: Reached target basic.target - Basic System.
Jan 20 02:32:15.058785 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 20 02:32:15.060639 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 20 02:32:15.073448 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 20 02:32:15.088935 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 20 02:32:15.159754 kernel: kvm_amd: TSC scaling supported
Jan 20 02:32:15.159976 kernel: kvm_amd: Nested Virtualization enabled
Jan 20 02:32:15.160021 kernel: kvm_amd: Nested Paging enabled
Jan 20 02:32:15.160083 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 20 02:32:15.160117 kernel: kvm_amd: PMU virtualization is disabled
Jan 20 02:32:15.235386 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 20 02:32:15.264207 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 20 02:32:15.300060 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 20 02:32:15.343492 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 20 02:32:15.368385 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 20 02:32:15.397985 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 20 02:32:15.447784 jq[1561]: false
Jan 20 02:32:15.468573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:32:15.496193 extend-filesystems[1562]: Found /dev/vda6
Jan 20 02:32:15.586359 extend-filesystems[1562]: Found /dev/vda9
Jan 20 02:32:15.525186 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 20 02:32:15.539530 oslogin_cache_refresh[1563]: Refreshing passwd entry cache
Jan 20 02:32:15.633427 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Refreshing passwd entry cache
Jan 20 02:32:15.633427 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Failure getting users, quitting
Jan 20 02:32:15.633427 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 20 02:32:15.633427 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Refreshing group entry cache
Jan 20 02:32:15.633809 extend-filesystems[1562]: Checking size of /dev/vda9
Jan 20 02:32:15.584894 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 20 02:32:15.627253 oslogin_cache_refresh[1563]: Failure getting users, quitting
Jan 20 02:32:15.631210 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 20 02:32:15.627301 oslogin_cache_refresh[1563]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 20 02:32:15.627427 oslogin_cache_refresh[1563]: Refreshing group entry cache
Jan 20 02:32:15.723207 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Failure getting groups, quitting
Jan 20 02:32:15.723207 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 20 02:32:15.723100 oslogin_cache_refresh[1563]: Failure getting groups, quitting
Jan 20 02:32:15.723126 oslogin_cache_refresh[1563]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 20 02:32:15.724167 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 20 02:32:15.779776 extend-filesystems[1562]: Resized partition /dev/vda9
Jan 20 02:32:15.780806 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 20 02:32:15.873779 extend-filesystems[1588]: resize2fs 1.47.3 (8-Jul-2025)
Jan 20 02:32:16.006930 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Jan 20 02:32:16.098120 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 20 02:32:16.113446 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 20 02:32:16.114464 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 20 02:32:16.135778 systemd[1]: Starting update-engine.service - Update Engine...
Jan 20 02:32:16.299883 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Jan 20 02:32:16.309083 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 20 02:32:16.412647 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 20 02:32:16.584214 update_engine[1597]: I20260120 02:32:16.401796 1597 main.cc:92] Flatcar Update Engine starting
Jan 20 02:32:16.436234 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 20 02:32:16.436946 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 20 02:32:16.585062 jq[1598]: true
Jan 20 02:32:16.441032 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 20 02:32:16.490690 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 20 02:32:16.537033 systemd[1]: motdgen.service: Deactivated successfully.
Jan 20 02:32:16.537503 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 20 02:32:16.564423 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 20 02:32:16.598472 sshd_keygen[1594]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 20 02:32:16.604167 extend-filesystems[1588]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 20 02:32:16.604167 extend-filesystems[1588]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 20 02:32:16.604167 extend-filesystems[1588]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Jan 20 02:32:16.706375 extend-filesystems[1562]: Resized filesystem in /dev/vda9
Jan 20 02:32:16.639918 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 20 02:32:16.641964 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 20 02:32:16.751647 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 02:32:16.752345 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 02:32:16.872894 jq[1616]: true Jan 20 02:32:16.892259 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 02:32:16.907561 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 02:32:16.908096 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 02:32:17.021340 tar[1607]: linux-amd64/LICENSE Jan 20 02:32:17.029795 tar[1607]: linux-amd64/helm Jan 20 02:32:17.068248 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 02:32:17.071994 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 02:32:17.145385 systemd-logind[1595]: Watching system buttons on /dev/input/event2 (Power Button) Jan 20 02:32:17.153585 systemd-logind[1595]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 02:32:17.154235 systemd-logind[1595]: New seat seat0. Jan 20 02:32:17.188670 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 02:32:17.222471 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 02:32:17.223095 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 02:32:17.235002 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 02:32:17.258734 dbus-daemon[1559]: [system] SELinux support is enabled Jan 20 02:32:17.262933 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 20 02:32:17.280596 update_engine[1597]: I20260120 02:32:17.280462 1597 update_check_scheduler.cc:74] Next update check in 2m15s Jan 20 02:32:17.284973 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 02:32:17.285035 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 02:32:17.297714 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 02:32:17.299911 dbus-daemon[1559]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 20 02:32:17.297773 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 02:32:17.312548 systemd[1]: Started update-engine.service - Update Engine. Jan 20 02:32:17.343564 bash[1654]: Updated "/home/core/.ssh/authorized_keys" Jan 20 02:32:17.349462 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 02:32:17.368050 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 02:32:17.374166 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 02:32:17.400086 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 02:32:17.443187 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 02:32:17.453106 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 02:32:17.464587 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jan 20 02:32:17.665253 locksmithd[1663]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 02:32:17.856875 containerd[1618]: time="2026-01-20T02:32:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 02:32:17.856875 containerd[1618]: time="2026-01-20T02:32:17.855229891Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 20 02:32:17.913512 containerd[1618]: time="2026-01-20T02:32:17.905089387Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.144µs" Jan 20 02:32:17.913512 containerd[1618]: time="2026-01-20T02:32:17.905143592Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 02:32:17.913512 containerd[1618]: time="2026-01-20T02:32:17.905204183Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 02:32:17.913512 containerd[1618]: time="2026-01-20T02:32:17.905219923Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 02:32:17.913512 containerd[1618]: time="2026-01-20T02:32:17.905640214Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 02:32:17.913512 containerd[1618]: time="2026-01-20T02:32:17.905662529Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 02:32:17.913512 containerd[1618]: time="2026-01-20T02:32:17.905741379Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 02:32:17.913512 containerd[1618]: time="2026-01-20T02:32:17.905756150Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 02:32:17.913512 containerd[1618]: time="2026-01-20T02:32:17.911427454Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 02:32:17.913512 containerd[1618]: time="2026-01-20T02:32:17.911523193Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 02:32:17.913512 containerd[1618]: time="2026-01-20T02:32:17.911570462Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 02:32:17.913512 containerd[1618]: time="2026-01-20T02:32:17.911588011Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 02:32:17.914266 containerd[1618]: time="2026-01-20T02:32:17.912064475Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 02:32:17.914266 containerd[1618]: time="2026-01-20T02:32:17.912093247Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 02:32:17.914266 containerd[1618]: time="2026-01-20T02:32:17.912249847Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 02:32:17.914266 containerd[1618]: time="2026-01-20T02:32:17.912639408Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 02:32:17.914266 containerd[1618]: time="2026-01-20T02:32:17.912699079Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 02:32:17.914266 containerd[1618]: time="2026-01-20T02:32:17.912720095Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 02:32:17.914266 containerd[1618]: time="2026-01-20T02:32:17.912770303Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 02:32:17.926755 containerd[1618]: time="2026-01-20T02:32:17.916185146Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 02:32:17.926755 containerd[1618]: time="2026-01-20T02:32:17.916405285Z" level=info msg="metadata content store policy set" policy=shared Jan 20 02:32:17.971157 containerd[1618]: time="2026-01-20T02:32:17.970982726Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 02:32:17.986656 containerd[1618]: time="2026-01-20T02:32:17.976423960Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 02:32:17.986656 containerd[1618]: time="2026-01-20T02:32:17.976611261Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 02:32:17.986656 containerd[1618]: time="2026-01-20T02:32:17.976633826Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 02:32:17.986656 containerd[1618]: time="2026-01-20T02:32:17.976674790Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 02:32:17.986656 containerd[1618]: time="2026-01-20T02:32:17.976692949Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service 
type=io.containerd.service.v1 Jan 20 02:32:17.986656 containerd[1618]: time="2026-01-20T02:32:17.976744026Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 02:32:17.986656 containerd[1618]: time="2026-01-20T02:32:17.976760566Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 02:32:17.986656 containerd[1618]: time="2026-01-20T02:32:17.976778235Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 02:32:17.986656 containerd[1618]: time="2026-01-20T02:32:17.976805486Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 02:32:17.986656 containerd[1618]: time="2026-01-20T02:32:17.976882297Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 02:32:17.986656 containerd[1618]: time="2026-01-20T02:32:17.976901915Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 02:32:17.986656 containerd[1618]: time="2026-01-20T02:32:17.976917445Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 02:32:17.986656 containerd[1618]: time="2026-01-20T02:32:17.976934544Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 02:32:17.991345 containerd[1618]: time="2026-01-20T02:32:17.977197125Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 02:32:17.991345 containerd[1618]: time="2026-01-20T02:32:17.977227916Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 02:32:17.991345 containerd[1618]: time="2026-01-20T02:32:17.977249822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content 
type=io.containerd.grpc.v1 Jan 20 02:32:17.991345 containerd[1618]: time="2026-01-20T02:32:17.977265542Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 02:32:17.991345 containerd[1618]: time="2026-01-20T02:32:17.977281691Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 02:32:17.991345 containerd[1618]: time="2026-01-20T02:32:17.977296202Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 02:32:17.991345 containerd[1618]: time="2026-01-20T02:32:17.977311592Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 02:32:17.991345 containerd[1618]: time="2026-01-20T02:32:17.977390852Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 02:32:17.991345 containerd[1618]: time="2026-01-20T02:32:17.977418444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 02:32:17.991345 containerd[1618]: time="2026-01-20T02:32:17.977464294Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 02:32:17.991345 containerd[1618]: time="2026-01-20T02:32:17.977481173Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 02:32:17.991345 containerd[1618]: time="2026-01-20T02:32:17.977519219Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 02:32:17.991345 containerd[1618]: time="2026-01-20T02:32:17.977590973Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 02:32:17.991345 containerd[1618]: time="2026-01-20T02:32:17.977611050Z" level=info msg="Start snapshots syncer" Jan 20 02:32:17.991345 containerd[1618]: 
time="2026-01-20T02:32:17.977646928Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 02:32:17.998865 containerd[1618]: time="2026-01-20T02:32:17.978073624Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 
02:32:17.998865 containerd[1618]: time="2026-01-20T02:32:17.978140662Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 02:32:18.000397 containerd[1618]: time="2026-01-20T02:32:17.978196575Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 02:32:18.000397 containerd[1618]: time="2026-01-20T02:32:17.978392281Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 02:32:18.000397 containerd[1618]: time="2026-01-20T02:32:17.978431306Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 02:32:18.000397 containerd[1618]: time="2026-01-20T02:32:17.978449564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 02:32:18.000397 containerd[1618]: time="2026-01-20T02:32:17.978465914Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 02:32:18.000397 containerd[1618]: time="2026-01-20T02:32:17.978483521Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 02:32:18.000397 containerd[1618]: time="2026-01-20T02:32:17.978502950Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 02:32:18.000397 containerd[1618]: time="2026-01-20T02:32:17.978519959Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 02:32:18.000397 containerd[1618]: time="2026-01-20T02:32:17.978542994Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 02:32:18.000397 containerd[1618]: time="2026-01-20T02:32:17.978560223Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 
02:32:18.000397 containerd[1618]: time="2026-01-20T02:32:17.978604804Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 02:32:18.000397 containerd[1618]: time="2026-01-20T02:32:17.978626550Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 02:32:18.000397 containerd[1618]: time="2026-01-20T02:32:17.978642600Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 02:32:18.001641 containerd[1618]: time="2026-01-20T02:32:17.978657720Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 02:32:18.001641 containerd[1618]: time="2026-01-20T02:32:17.978671951Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 02:32:18.001641 containerd[1618]: time="2026-01-20T02:32:17.978685822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 02:32:18.001641 containerd[1618]: time="2026-01-20T02:32:17.978701892Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 02:32:18.001641 containerd[1618]: time="2026-01-20T02:32:17.978719250Z" level=info msg="runtime interface created" Jan 20 02:32:18.001641 containerd[1618]: time="2026-01-20T02:32:17.978728485Z" level=info msg="created NRI interface" Jan 20 02:32:18.001641 containerd[1618]: time="2026-01-20T02:32:17.978740357Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 02:32:18.001641 containerd[1618]: time="2026-01-20T02:32:17.978761564Z" level=info msg="Connect containerd service" Jan 20 02:32:18.001641 containerd[1618]: time="2026-01-20T02:32:17.978785648Z" level=info 
msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 02:32:18.001641 containerd[1618]: time="2026-01-20T02:32:17.988447309Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 02:32:18.231958 tar[1607]: linux-amd64/README.md Jan 20 02:32:18.355191 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 02:32:18.514211 containerd[1618]: time="2026-01-20T02:32:18.514083500Z" level=info msg="Start subscribing containerd event" Jan 20 02:32:18.517476 containerd[1618]: time="2026-01-20T02:32:18.517178063Z" level=info msg="Start recovering state" Jan 20 02:32:18.519602 containerd[1618]: time="2026-01-20T02:32:18.518428569Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 02:32:18.519602 containerd[1618]: time="2026-01-20T02:32:18.518505872Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 02:32:18.528987 containerd[1618]: time="2026-01-20T02:32:18.522156237Z" level=info msg="Start event monitor" Jan 20 02:32:18.528987 containerd[1618]: time="2026-01-20T02:32:18.522236508Z" level=info msg="Start cni network conf syncer for default" Jan 20 02:32:18.528987 containerd[1618]: time="2026-01-20T02:32:18.522250471Z" level=info msg="Start streaming server" Jan 20 02:32:18.528987 containerd[1618]: time="2026-01-20T02:32:18.522351212Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 02:32:18.528987 containerd[1618]: time="2026-01-20T02:32:18.522363227Z" level=info msg="runtime interface starting up..." Jan 20 02:32:18.528987 containerd[1618]: time="2026-01-20T02:32:18.522371173Z" level=info msg="starting plugins..." 
Jan 20 02:32:18.528987 containerd[1618]: time="2026-01-20T02:32:18.523426443Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 02:32:18.545218 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 02:32:18.564221 containerd[1618]: time="2026-01-20T02:32:18.561766562Z" level=info msg="containerd successfully booted in 0.708501s" Jan 20 02:32:19.413148 kernel: EDAC MC: Ver: 3.0.0 Jan 20 02:32:21.085088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:32:21.115240 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 02:32:21.138041 systemd[1]: Startup finished in 22.412s (kernel) + 47.532s (initrd) + 38.865s (userspace) = 1min 48.810s. Jan 20 02:32:21.166623 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:32:21.618652 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 02:32:21.640470 systemd[1]: Started sshd@0-10.0.0.112:22-10.0.0.1:39158.service - OpenSSH per-connection server daemon (10.0.0.1:39158). Jan 20 02:32:22.242199 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 39158 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:32:22.258749 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:32:22.453894 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 02:32:22.482511 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 02:32:22.546233 systemd-logind[1595]: New session 1 of user core. Jan 20 02:32:22.659336 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 02:32:22.707660 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 20 02:32:22.812145 (systemd)[1715]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:32:22.886645 systemd-logind[1595]: New session 2 of user core. Jan 20 02:32:23.850106 systemd[1715]: Queued start job for default target default.target. Jan 20 02:32:23.871435 systemd[1715]: Created slice app.slice - User Application Slice. Jan 20 02:32:23.871552 systemd[1715]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 20 02:32:23.871580 systemd[1715]: Reached target paths.target - Paths. Jan 20 02:32:23.872368 systemd[1715]: Reached target timers.target - Timers. Jan 20 02:32:23.887642 systemd[1715]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 02:32:23.896914 systemd[1715]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 20 02:32:24.036332 systemd[1715]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 02:32:24.042947 systemd[1715]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 20 02:32:24.043251 systemd[1715]: Reached target sockets.target - Sockets. Jan 20 02:32:24.043497 systemd[1715]: Reached target basic.target - Basic System. Jan 20 02:32:24.043728 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 02:32:24.048968 systemd[1715]: Reached target default.target - Main User Target. Jan 20 02:32:24.049049 systemd[1715]: Startup finished in 1.126s. Jan 20 02:32:24.103542 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 02:32:24.355524 systemd[1]: Started sshd@1-10.0.0.112:22-10.0.0.1:39170.service - OpenSSH per-connection server daemon (10.0.0.1:39170). 
Jan 20 02:32:26.465344 kubelet[1698]: E0120 02:32:26.403876 1698 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:32:26.955659 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:32:26.959935 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:32:27.026185 systemd[1]: kubelet.service: Consumed 1.764s CPU time, 268.4M memory peak. Jan 20 02:32:28.839041 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 39170 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:32:29.404175 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:32:30.408883 systemd-logind[1595]: New session 3 of user core. Jan 20 02:32:30.498024 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 02:32:36.427310 sshd[1735]: Connection closed by 10.0.0.1 port 39170 Jan 20 02:32:36.499371 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Jan 20 02:32:37.442446 systemd[1]: sshd@1-10.0.0.112:22-10.0.0.1:39170.service: Deactivated successfully. Jan 20 02:32:37.545748 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 02:32:39.133600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 02:32:42.602305 systemd-logind[1595]: Session 3 logged out. Waiting for processes to exit. Jan 20 02:32:42.622183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:32:42.645423 systemd[1]: Started sshd@2-10.0.0.112:22-10.0.0.1:60976.service - OpenSSH per-connection server daemon (10.0.0.1:60976). Jan 20 02:32:42.652066 systemd-logind[1595]: Removed session 3. 
Jan 20 02:32:43.632301 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 60976 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:32:43.682485 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:32:43.752968 systemd-logind[1595]: New session 4 of user core. Jan 20 02:32:43.814269 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 02:32:43.917633 sshd[1749]: Connection closed by 10.0.0.1 port 60976 Jan 20 02:32:43.917192 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Jan 20 02:32:43.948175 systemd[1]: sshd@2-10.0.0.112:22-10.0.0.1:60976.service: Deactivated successfully. Jan 20 02:32:43.953534 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 02:32:43.970679 systemd-logind[1595]: Session 4 logged out. Waiting for processes to exit. Jan 20 02:32:43.983569 systemd[1]: Started sshd@3-10.0.0.112:22-10.0.0.1:60992.service - OpenSSH per-connection server daemon (10.0.0.1:60992). Jan 20 02:32:43.992334 systemd-logind[1595]: Removed session 4. Jan 20 02:32:44.208517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:32:44.228895 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 60992 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:32:44.234261 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:32:44.252507 (kubelet)[1765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:32:44.266743 systemd-logind[1595]: New session 5 of user core. Jan 20 02:32:44.276068 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 20 02:32:44.429917 sshd[1771]: Connection closed by 10.0.0.1 port 60992 Jan 20 02:32:44.441944 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Jan 20 02:32:44.484484 systemd[1]: sshd@3-10.0.0.112:22-10.0.0.1:60992.service: Deactivated successfully. Jan 20 02:32:44.515464 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 02:32:44.534190 systemd-logind[1595]: Session 5 logged out. Waiting for processes to exit. Jan 20 02:32:44.577400 systemd[1]: Started sshd@4-10.0.0.112:22-10.0.0.1:55382.service - OpenSSH per-connection server daemon (10.0.0.1:55382). Jan 20 02:32:44.581203 systemd-logind[1595]: Removed session 5. Jan 20 02:32:44.695711 kubelet[1765]: E0120 02:32:44.695400 1765 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:32:44.730665 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:32:44.731051 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:32:44.739501 systemd[1]: kubelet.service: Consumed 572ms CPU time, 110.1M memory peak. Jan 20 02:32:44.908134 sshd[1778]: Accepted publickey for core from 10.0.0.1 port 55382 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:32:44.928454 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:32:44.998924 systemd-logind[1595]: New session 6 of user core. Jan 20 02:32:45.028913 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 20 02:32:45.256769 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 02:32:45.258741 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 02:32:50.441016 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 02:32:50.499879 (dockerd)[1806]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 02:32:55.116937 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 02:32:55.179621 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:32:56.056200 dockerd[1806]: time="2026-01-20T02:32:56.055916926Z" level=info msg="Starting up" Jan 20 02:32:56.068910 dockerd[1806]: time="2026-01-20T02:32:56.064107361Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 02:32:56.191513 dockerd[1806]: time="2026-01-20T02:32:56.191323165Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 02:32:56.655603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:32:57.035681 (kubelet)[1838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:32:58.386324 dockerd[1806]: time="2026-01-20T02:32:58.380677471Z" level=info msg="Loading containers: start." 
Jan 20 02:32:58.593616 kubelet[1838]: E0120 02:32:58.593076 1838 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:32:58.598996 kernel: Initializing XFRM netlink socket Jan 20 02:32:58.626297 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:32:58.626659 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:32:58.632687 systemd[1]: kubelet.service: Consumed 721ms CPU time, 112.5M memory peak. Jan 20 02:33:02.340004 update_engine[1597]: I20260120 02:33:02.332540 1597 update_attempter.cc:509] Updating boot flags... Jan 20 02:33:04.383559 systemd-networkd[1513]: docker0: Link UP Jan 20 02:33:04.468062 dockerd[1806]: time="2026-01-20T02:33:04.467095771Z" level=info msg="Loading containers: done." Jan 20 02:33:04.659142 dockerd[1806]: time="2026-01-20T02:33:04.652201302Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 02:33:04.659142 dockerd[1806]: time="2026-01-20T02:33:04.657227458Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 02:33:04.659142 dockerd[1806]: time="2026-01-20T02:33:04.657482224Z" level=info msg="Initializing buildkit" Jan 20 02:33:04.934510 dockerd[1806]: time="2026-01-20T02:33:04.933988838Z" level=info msg="Completed buildkit initialization" Jan 20 02:33:05.033016 dockerd[1806]: time="2026-01-20T02:33:05.030996463Z" level=info msg="Daemon has completed initialization" Jan 20 02:33:05.031753 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 20 02:33:05.038080 dockerd[1806]: time="2026-01-20T02:33:05.033780636Z" level=info msg="API listen on /run/docker.sock" Jan 20 02:33:08.556545 containerd[1618]: time="2026-01-20T02:33:08.556476932Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 20 02:33:08.838356 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 02:33:08.879603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:33:09.643936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:33:09.696550 (kubelet)[2066]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:33:10.208921 kubelet[2066]: E0120 02:33:10.207430 2066 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:33:10.217946 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:33:10.218310 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:33:10.219220 systemd[1]: kubelet.service: Consumed 472ms CPU time, 110.5M memory peak. Jan 20 02:33:11.279872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount479035283.mount: Deactivated successfully. Jan 20 02:33:20.336954 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 20 02:33:20.360246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 02:33:20.777224 containerd[1618]: time="2026-01-20T02:33:20.776474537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:33:20.794172 containerd[1618]: time="2026-01-20T02:33:20.794113632Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=28229480" Jan 20 02:33:20.818902 containerd[1618]: time="2026-01-20T02:33:20.813376631Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:33:20.864373 containerd[1618]: time="2026-01-20T02:33:20.860877711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:33:20.911241 containerd[1618]: time="2026-01-20T02:33:20.905251303Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 12.346756271s" Jan 20 02:33:20.911241 containerd[1618]: time="2026-01-20T02:33:20.905342381Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 20 02:33:20.911241 containerd[1618]: time="2026-01-20T02:33:20.906203884Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 20 02:33:21.554452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 02:33:21.610985 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:33:21.902988 kubelet[2141]: E0120 02:33:21.901736 2141 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:33:21.932522 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:33:21.932796 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:33:21.933784 systemd[1]: kubelet.service: Consumed 402ms CPU time, 109.9M memory peak. Jan 20 02:33:27.966201 containerd[1618]: time="2026-01-20T02:33:27.964704668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:33:27.974401 containerd[1618]: time="2026-01-20T02:33:27.972950943Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 20 02:33:27.987256 containerd[1618]: time="2026-01-20T02:33:27.985743483Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:33:28.028375 containerd[1618]: time="2026-01-20T02:33:28.025312190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:33:28.028375 containerd[1618]: time="2026-01-20T02:33:28.026687610Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 7.120448441s" Jan 20 02:33:28.028375 containerd[1618]: time="2026-01-20T02:33:28.026721492Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 20 02:33:28.029183 containerd[1618]: time="2026-01-20T02:33:28.028814072Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 20 02:33:32.091131 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 20 02:33:32.154347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:33:33.206236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:33:33.239435 (kubelet)[2166]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:33:33.410925 kubelet[2166]: E0120 02:33:33.410625 2166 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:33:33.425539 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:33:33.425878 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:33:33.426540 systemd[1]: kubelet.service: Consumed 383ms CPU time, 110.6M memory peak.
Jan 20 02:33:33.871410 containerd[1618]: time="2026-01-20T02:33:33.870981901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:33:33.876967 containerd[1618]: time="2026-01-20T02:33:33.876532089Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 20 02:33:33.890920 containerd[1618]: time="2026-01-20T02:33:33.887452440Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:33:33.895413 containerd[1618]: time="2026-01-20T02:33:33.895084030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:33:33.900347 containerd[1618]: time="2026-01-20T02:33:33.898209889Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 5.869308346s" Jan 20 02:33:33.900347 containerd[1618]: time="2026-01-20T02:33:33.898250725Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 20 02:33:33.900347 containerd[1618]: time="2026-01-20T02:33:33.899948385Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 20 02:33:38.007744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount278150925.mount: Deactivated successfully. 
Jan 20 02:33:43.604746 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 20 02:33:43.623307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:33:45.267901 containerd[1618]: time="2026-01-20T02:33:45.267152795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:33:45.277190 containerd[1618]: time="2026-01-20T02:33:45.277133423Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31158177" Jan 20 02:33:45.285638 containerd[1618]: time="2026-01-20T02:33:45.285517092Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:33:45.291883 containerd[1618]: time="2026-01-20T02:33:45.291388393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:33:45.293951 containerd[1618]: time="2026-01-20T02:33:45.293909147Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 11.393926268s" Jan 20 02:33:45.294951 containerd[1618]: time="2026-01-20T02:33:45.294233790Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 20 02:33:45.302413 containerd[1618]: time="2026-01-20T02:33:45.302164448Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 20 02:33:45.753718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:33:45.825044 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:33:46.524540 kubelet[2189]: E0120 02:33:46.524199 2189 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:33:46.560382 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:33:46.563205 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:33:46.570241 systemd[1]: kubelet.service: Consumed 1.023s CPU time, 109.8M memory peak. Jan 20 02:33:49.375891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3382300399.mount: Deactivated successfully. Jan 20 02:33:56.864918 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 20 02:33:56.883421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:34:00.005723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:34:00.160447 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:34:01.734668 kubelet[2254]: E0120 02:34:01.733800 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:34:01.775170 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:34:01.775455 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:34:01.791011 systemd[1]: kubelet.service: Consumed 1.006s CPU time, 110M memory peak. Jan 20 02:34:11.931770 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 20 02:34:12.004661 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 02:34:13.563912 containerd[1618]: time="2026-01-20T02:34:13.563480708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:34:13.574961 containerd[1618]: time="2026-01-20T02:34:13.574186851Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18432446" Jan 20 02:34:13.615527 containerd[1618]: time="2026-01-20T02:34:13.615455925Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:34:13.696207 containerd[1618]: time="2026-01-20T02:34:13.666988968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:34:13.710226 containerd[1618]: time="2026-01-20T02:34:13.690911298Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 28.388694473s" Jan 20 02:34:13.710226 containerd[1618]: time="2026-01-20T02:34:13.704793620Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 20 02:34:13.726252 containerd[1618]: time="2026-01-20T02:34:13.715679750Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 02:34:17.935323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 02:34:18.184551 (kubelet)[2274]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:34:18.464315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3398179302.mount: Deactivated successfully. Jan 20 02:34:18.540616 containerd[1618]: time="2026-01-20T02:34:18.539367582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 02:34:18.591326 containerd[1618]: time="2026-01-20T02:34:18.591245152Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=317462" Jan 20 02:34:18.604251 containerd[1618]: time="2026-01-20T02:34:18.602938675Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 02:34:18.625963 containerd[1618]: time="2026-01-20T02:34:18.625875673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 02:34:18.631227 containerd[1618]: time="2026-01-20T02:34:18.629271621Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 4.913541517s" Jan 20 02:34:18.631227 containerd[1618]: time="2026-01-20T02:34:18.629320362Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 02:34:18.632544 containerd[1618]: time="2026-01-20T02:34:18.631801964Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 20 02:34:19.182173 kubelet[2274]: E0120 02:34:19.180820 2274 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:34:19.686734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:34:19.687284 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:34:19.704306 systemd[1]: kubelet.service: Consumed 1.703s CPU time, 110.7M memory peak. Jan 20 02:34:23.198744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount560921236.mount: Deactivated successfully. Jan 20 02:34:29.884127 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 20 02:34:29.986576 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:34:32.442171 update_engine[1597]: I20260120 02:34:32.427156 1597 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 20 02:34:32.442171 update_engine[1597]: I20260120 02:34:32.435607 1597 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 20 02:34:32.478305 update_engine[1597]: I20260120 02:34:32.468804 1597 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 20 02:34:32.580910 update_engine[1597]: I20260120 02:34:32.578341 1597 omaha_request_params.cc:62] Current group set to alpha Jan 20 02:34:32.580910 update_engine[1597]: I20260120 02:34:32.578794 1597 update_attempter.cc:499] Already updated boot flags. Skipping.
Jan 20 02:34:32.580910 update_engine[1597]: I20260120 02:34:32.578864 1597 update_attempter.cc:643] Scheduling an action processor start. Jan 20 02:34:32.580910 update_engine[1597]: I20260120 02:34:32.578895 1597 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 02:34:32.596290 update_engine[1597]: I20260120 02:34:32.595175 1597 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 20 02:34:32.596290 update_engine[1597]: I20260120 02:34:32.595419 1597 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 02:34:32.596290 update_engine[1597]: I20260120 02:34:32.595443 1597 omaha_request_action.cc:272] Request: Jan 20 02:34:32.596290 update_engine[1597]: Jan 20 02:34:32.596290 update_engine[1597]: Jan 20 02:34:32.596290 update_engine[1597]: Jan 20 02:34:32.596290 update_engine[1597]: Jan 20 02:34:32.596290 update_engine[1597]: Jan 20 02:34:32.596290 update_engine[1597]: Jan 20 02:34:32.596290 update_engine[1597]: Jan 20 02:34:32.596290 update_engine[1597]: Jan 20 02:34:32.596290 update_engine[1597]: I20260120 02:34:32.595453 1597 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:34:32.602055 locksmithd[1663]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 20 02:34:32.616909 update_engine[1597]: I20260120 02:34:32.616792 1597 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:34:32.636184 update_engine[1597]: I20260120 02:34:32.633100 1597 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 02:34:32.662585 update_engine[1597]: E20260120 02:34:32.662503 1597 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 02:34:32.684438 update_engine[1597]: I20260120 02:34:32.670635 1597 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 20 02:34:35.236011 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:34:35.330352 (kubelet)[2341]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:34:37.031507 kubelet[2341]: E0120 02:34:37.030732 2341 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:34:37.167967 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:34:37.170785 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:34:37.179061 systemd[1]: kubelet.service: Consumed 1.612s CPU time, 109.1M memory peak. Jan 20 02:34:43.351046 update_engine[1597]: I20260120 02:34:43.347388 1597 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:34:43.351046 update_engine[1597]: I20260120 02:34:43.353564 1597 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:34:43.371998 update_engine[1597]: I20260120 02:34:43.359966 1597 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 02:34:43.396680 update_engine[1597]: E20260120 02:34:43.394227 1597 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 02:34:43.396680 update_engine[1597]: I20260120 02:34:43.394448 1597 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 20 02:34:47.396863 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 20 02:34:47.536875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:34:48.332289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:34:48.349298 (kubelet)[2366]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:34:48.548968 kubelet[2366]: E0120 02:34:48.547911 2366 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:34:48.558589 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:34:48.558980 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:34:48.559723 systemd[1]: kubelet.service: Consumed 422ms CPU time, 110.8M memory peak. Jan 20 02:34:53.365068 update_engine[1597]: I20260120 02:34:53.351682 1597 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:34:53.365068 update_engine[1597]: I20260120 02:34:53.355119 1597 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:34:53.365068 update_engine[1597]: I20260120 02:34:53.362040 1597 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 02:34:53.456356 update_engine[1597]: E20260120 02:34:53.430029 1597 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 02:34:53.557652 update_engine[1597]: I20260120 02:34:53.509886 1597 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 20 02:34:58.509594 containerd[1618]: time="2026-01-20T02:34:58.501601705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:34:58.509594 containerd[1618]: time="2026-01-20T02:34:58.502875379Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57671516" Jan 20 02:34:58.535584 containerd[1618]: time="2026-01-20T02:34:58.515516153Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:34:58.535584 containerd[1618]: time="2026-01-20T02:34:58.524931942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:34:58.563485 containerd[1618]: time="2026-01-20T02:34:58.561727345Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 39.929825946s" Jan 20 02:34:58.563485 containerd[1618]: time="2026-01-20T02:34:58.561923091Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 20 02:34:58.670515 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 20 02:34:58.690175 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:35:00.951040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:35:01.113673 (kubelet)[2407]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:35:01.703931 kubelet[2407]: E0120 02:35:01.700262 2407 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:35:01.730906 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:35:01.740757 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:35:01.773554 systemd[1]: kubelet.service: Consumed 822ms CPU time, 109.6M memory peak. Jan 20 02:35:03.369390 update_engine[1597]: I20260120 02:35:03.345279 1597 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:35:03.369390 update_engine[1597]: I20260120 02:35:03.345716 1597 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:35:03.369390 update_engine[1597]: I20260120 02:35:03.369032 1597 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 20 02:35:03.398342 update_engine[1597]: E20260120 02:35:03.393991 1597 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 02:35:03.398342 update_engine[1597]: I20260120 02:35:03.394721 1597 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 02:35:03.400338 update_engine[1597]: I20260120 02:35:03.398732 1597 omaha_request_action.cc:617] Omaha request response: Jan 20 02:35:03.400338 update_engine[1597]: E20260120 02:35:03.399132 1597 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 20 02:35:03.404573 update_engine[1597]: I20260120 02:35:03.401038 1597 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 20 02:35:03.404573 update_engine[1597]: I20260120 02:35:03.403024 1597 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:35:03.406037 update_engine[1597]: I20260120 02:35:03.404906 1597 update_attempter.cc:306] Processing Done. Jan 20 02:35:03.406037 update_engine[1597]: E20260120 02:35:03.404976 1597 update_attempter.cc:619] Update failed. Jan 20 02:35:03.406037 update_engine[1597]: I20260120 02:35:03.404996 1597 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 20 02:35:03.406037 update_engine[1597]: I20260120 02:35:03.405007 1597 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 20 02:35:03.406037 update_engine[1597]: I20260120 02:35:03.405018 1597 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 20 02:35:03.406037 update_engine[1597]: I20260120 02:35:03.405201 1597 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 02:35:03.406037 update_engine[1597]: I20260120 02:35:03.405248 1597 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 02:35:03.406037 update_engine[1597]: I20260120 02:35:03.405262 1597 omaha_request_action.cc:272] Request: Jan 20 02:35:03.406037 update_engine[1597]: Jan 20 02:35:03.406037 update_engine[1597]: Jan 20 02:35:03.406037 update_engine[1597]: Jan 20 02:35:03.406037 update_engine[1597]: Jan 20 02:35:03.406037 update_engine[1597]: Jan 20 02:35:03.406037 update_engine[1597]: Jan 20 02:35:03.406037 update_engine[1597]: I20260120 02:35:03.405275 1597 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:35:03.406037 update_engine[1597]: I20260120 02:35:03.405354 1597 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:35:03.406037 update_engine[1597]: I20260120 02:35:03.405875 1597 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 02:35:03.413449 locksmithd[1663]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 20 02:35:03.439020 update_engine[1597]: E20260120 02:35:03.438387 1597 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 02:35:03.439020 update_engine[1597]: I20260120 02:35:03.438544 1597 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 02:35:03.439020 update_engine[1597]: I20260120 02:35:03.438565 1597 omaha_request_action.cc:617] Omaha request response: Jan 20 02:35:03.439020 update_engine[1597]: I20260120 02:35:03.438580 1597 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:35:03.439020 update_engine[1597]: I20260120 02:35:03.438592 1597 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:35:03.439020 update_engine[1597]: I20260120 02:35:03.438602 1597 update_attempter.cc:306] Processing Done. Jan 20 02:35:03.439020 update_engine[1597]: I20260120 02:35:03.438616 1597 update_attempter.cc:310] Error event sent. Jan 20 02:35:03.439020 update_engine[1597]: I20260120 02:35:03.438644 1597 update_check_scheduler.cc:74] Next update check in 42m39s Jan 20 02:35:03.448797 locksmithd[1663]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 20 02:35:09.511404 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:35:09.511722 systemd[1]: kubelet.service: Consumed 822ms CPU time, 109.6M memory peak. Jan 20 02:35:09.525815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:35:09.702233 systemd[1]: Reload requested from client PID 2426 ('systemctl') (unit session-6.scope)... Jan 20 02:35:09.702521 systemd[1]: Reloading... 
Jan 20 02:35:10.306533 zram_generator::config[2470]: No configuration found. Jan 20 02:35:12.481571 systemd[1]: Reloading finished in 2778 ms. Jan 20 02:35:13.214697 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 02:35:13.216094 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 02:35:13.222301 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:35:13.225566 systemd[1]: kubelet.service: Consumed 344ms CPU time, 98.5M memory peak. Jan 20 02:35:13.280733 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:35:14.747667 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:35:14.822091 (kubelet)[2521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 02:35:15.382811 kubelet[2521]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 02:35:15.382811 kubelet[2521]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 02:35:15.392650 kubelet[2521]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 02:35:15.392650 kubelet[2521]: I0120 02:35:15.384290 2521 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 02:35:20.469890 kubelet[2521]: I0120 02:35:20.465113 2521 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 02:35:20.469890 kubelet[2521]: I0120 02:35:20.465187 2521 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 02:35:20.469890 kubelet[2521]: I0120 02:35:20.466151 2521 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 02:35:20.777125 kubelet[2521]: E0120 02:35:20.775038 2521 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:35:20.795142 kubelet[2521]: I0120 02:35:20.790947 2521 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 02:35:20.831909 kubelet[2521]: I0120 02:35:20.830263 2521 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 02:35:20.872655 kubelet[2521]: I0120 02:35:20.869787 2521 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 02:35:20.873974 kubelet[2521]: I0120 02:35:20.873470 2521 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 02:35:20.875629 kubelet[2521]: I0120 02:35:20.873726 2521 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 02:35:20.875629 kubelet[2521]: I0120 02:35:20.874471 2521 topology_manager.go:138] "Creating topology manager with none policy" 
Jan 20 02:35:20.875629 kubelet[2521]: I0120 02:35:20.874493 2521 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 02:35:20.875629 kubelet[2521]: I0120 02:35:20.874744 2521 state_mem.go:36] "Initialized new in-memory state store" Jan 20 02:35:20.889236 kubelet[2521]: I0120 02:35:20.887128 2521 kubelet.go:446] "Attempting to sync node with API server" Jan 20 02:35:20.889236 kubelet[2521]: I0120 02:35:20.887860 2521 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 02:35:20.891755 kubelet[2521]: I0120 02:35:20.890504 2521 kubelet.go:352] "Adding apiserver pod source" Jan 20 02:35:20.891755 kubelet[2521]: I0120 02:35:20.890532 2521 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 02:35:20.895964 kubelet[2521]: W0120 02:35:20.895006 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Jan 20 02:35:20.895964 kubelet[2521]: E0120 02:35:20.895095 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:35:20.911874 kubelet[2521]: I0120 02:35:20.909930 2521 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 02:35:20.911874 kubelet[2521]: W0120 02:35:20.911141 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Jan 20 
02:35:20.911874 kubelet[2521]: E0120 02:35:20.911215 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:35:20.920530 kubelet[2521]: I0120 02:35:20.918738 2521 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 02:35:20.920530 kubelet[2521]: W0120 02:35:20.918941 2521 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 02:35:20.941458 kubelet[2521]: I0120 02:35:20.941010 2521 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 02:35:20.948473 kubelet[2521]: I0120 02:35:20.941759 2521 server.go:1287] "Started kubelet" Jan 20 02:35:20.948473 kubelet[2521]: I0120 02:35:20.942234 2521 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 02:35:20.948473 kubelet[2521]: I0120 02:35:20.943693 2521 server.go:479] "Adding debug handlers to kubelet server" Jan 20 02:35:20.957736 kubelet[2521]: I0120 02:35:20.952282 2521 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 02:35:20.976301 kubelet[2521]: I0120 02:35:20.966479 2521 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 02:35:20.990314 kubelet[2521]: I0120 02:35:20.984050 2521 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 02:35:20.990314 kubelet[2521]: I0120 02:35:20.989464 2521 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 02:35:20.990314 kubelet[2521]: E0120 02:35:20.989702 2521 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"localhost\" not found" Jan 20 02:35:20.990970 kubelet[2521]: I0120 02:35:20.990944 2521 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 02:35:20.991168 kubelet[2521]: I0120 02:35:20.991151 2521 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 02:35:20.992153 kubelet[2521]: W0120 02:35:20.992103 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Jan 20 02:35:20.992280 kubelet[2521]: E0120 02:35:20.992254 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:35:20.992482 kubelet[2521]: I0120 02:35:20.992461 2521 reconciler.go:26] "Reconciler: start to sync state" Jan 20 02:35:21.000495 kubelet[2521]: I0120 02:35:20.998556 2521 factory.go:221] Registration of the systemd container factory successfully Jan 20 02:35:21.000495 kubelet[2521]: I0120 02:35:20.998725 2521 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 02:35:21.000495 kubelet[2521]: E0120 02:35:20.999341 2521 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="200ms" Jan 20 02:35:21.016561 kubelet[2521]: E0120 02:35:21.005456 2521 event.go:368] "Unable to write event (may retry after 
sleeping)" err="Post \"https://10.0.0.112:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.112:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4fd2ea38387e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:35:20.94104179 +0000 UTC m=+6.087876269,LastTimestamp:2026-01-20 02:35:20.94104179 +0000 UTC m=+6.087876269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:35:21.022162 kubelet[2521]: E0120 02:35:21.021927 2521 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 02:35:21.046924 kubelet[2521]: I0120 02:35:21.045327 2521 factory.go:221] Registration of the containerd container factory successfully Jan 20 02:35:21.091439 kubelet[2521]: E0120 02:35:21.090903 2521 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:35:21.132874 kubelet[2521]: I0120 02:35:21.130099 2521 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 02:35:21.132874 kubelet[2521]: I0120 02:35:21.130122 2521 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 02:35:21.132874 kubelet[2521]: I0120 02:35:21.130172 2521 state_mem.go:36] "Initialized new in-memory state store" Jan 20 02:35:21.143632 kubelet[2521]: I0120 02:35:21.143499 2521 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 20 02:35:21.161505 kubelet[2521]: I0120 02:35:21.156883 2521 policy_none.go:49] "None policy: Start" Jan 20 02:35:21.161505 kubelet[2521]: I0120 02:35:21.156967 2521 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 02:35:21.161505 kubelet[2521]: I0120 02:35:21.157046 2521 state_mem.go:35] "Initializing new in-memory state store" Jan 20 02:35:21.174101 kubelet[2521]: I0120 02:35:21.167466 2521 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 02:35:21.174101 kubelet[2521]: I0120 02:35:21.167514 2521 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 02:35:21.174101 kubelet[2521]: I0120 02:35:21.167542 2521 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 02:35:21.174101 kubelet[2521]: I0120 02:35:21.167552 2521 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 02:35:21.177996 kubelet[2521]: E0120 02:35:21.167638 2521 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 02:35:21.177996 kubelet[2521]: W0120 02:35:21.176814 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Jan 20 02:35:21.177996 kubelet[2521]: E0120 02:35:21.176942 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:35:21.194245 kubelet[2521]: E0120 02:35:21.192538 2521 kubelet_node_status.go:466] "Error getting the current 
node from lister" err="node \"localhost\" not found" Jan 20 02:35:21.204200 kubelet[2521]: E0120 02:35:21.201955 2521 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="400ms" Jan 20 02:35:21.245366 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 02:35:21.276416 kubelet[2521]: E0120 02:35:21.276260 2521 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 02:35:21.302887 kubelet[2521]: E0120 02:35:21.295978 2521 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:35:21.328667 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 02:35:21.394540 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 20 02:35:21.397220 kubelet[2521]: E0120 02:35:21.397171 2521 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:35:21.429430 kubelet[2521]: I0120 02:35:21.424985 2521 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 02:35:21.429430 kubelet[2521]: I0120 02:35:21.426586 2521 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 02:35:21.429430 kubelet[2521]: I0120 02:35:21.426600 2521 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 02:35:21.429430 kubelet[2521]: I0120 02:35:21.427065 2521 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 02:35:21.444238 kubelet[2521]: E0120 02:35:21.443279 2521 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 02:35:21.444238 kubelet[2521]: E0120 02:35:21.443363 2521 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 02:35:21.504537 kubelet[2521]: I0120 02:35:21.502661 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1920566f175a8fb1774eb872fd23f484-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1920566f175a8fb1774eb872fd23f484\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:35:21.504537 kubelet[2521]: I0120 02:35:21.502697 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 
02:35:21.504537 kubelet[2521]: I0120 02:35:21.502723 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:35:21.504537 kubelet[2521]: I0120 02:35:21.502748 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:35:21.504537 kubelet[2521]: I0120 02:35:21.502775 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 20 02:35:21.505344 kubelet[2521]: I0120 02:35:21.502795 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1920566f175a8fb1774eb872fd23f484-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1920566f175a8fb1774eb872fd23f484\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:35:21.505344 kubelet[2521]: I0120 02:35:21.502816 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1920566f175a8fb1774eb872fd23f484-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1920566f175a8fb1774eb872fd23f484\") " 
pod="kube-system/kube-apiserver-localhost" Jan 20 02:35:21.505344 kubelet[2521]: I0120 02:35:21.502894 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:35:21.505344 kubelet[2521]: I0120 02:35:21.502930 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:35:21.553646 kubelet[2521]: I0120 02:35:21.547553 2521 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:35:21.560878 kubelet[2521]: E0120 02:35:21.557528 2521 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Jan 20 02:35:21.578312 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. 
Jan 20 02:35:21.622901 kubelet[2521]: E0120 02:35:21.618283 2521 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="800ms" Jan 20 02:35:21.797432 kubelet[2521]: E0120 02:35:21.796964 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:35:21.826509 kubelet[2521]: I0120 02:35:21.816010 2521 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:35:21.822532 systemd[1]: Created slice kubepods-burstable-pod1920566f175a8fb1774eb872fd23f484.slice - libcontainer container kubepods-burstable-pod1920566f175a8fb1774eb872fd23f484.slice. Jan 20 02:35:21.842170 kubelet[2521]: E0120 02:35:21.816794 2521 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Jan 20 02:35:21.844776 kubelet[2521]: E0120 02:35:21.843452 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:21.852220 containerd[1618]: time="2026-01-20T02:35:21.846975760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 20 02:35:21.872249 kubelet[2521]: E0120 02:35:21.866431 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:35:21.872249 kubelet[2521]: E0120 02:35:21.866974 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:21.878290 containerd[1618]: time="2026-01-20T02:35:21.878178910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1920566f175a8fb1774eb872fd23f484,Namespace:kube-system,Attempt:0,}" Jan 20 02:35:21.909197 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Jan 20 02:35:21.942232 kubelet[2521]: E0120 02:35:21.940563 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:35:21.942232 kubelet[2521]: E0120 02:35:21.941141 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:21.942720 containerd[1618]: time="2026-01-20T02:35:21.942670961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 20 02:35:21.996587 kubelet[2521]: W0120 02:35:21.996156 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Jan 20 02:35:21.996587 kubelet[2521]: E0120 02:35:21.996247 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:35:22.091896 kubelet[2521]: W0120 02:35:22.091731 2521 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Jan 20 02:35:22.099664 kubelet[2521]: E0120 02:35:22.098223 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:35:22.206016 kubelet[2521]: W0120 02:35:22.198172 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Jan 20 02:35:22.206016 kubelet[2521]: E0120 02:35:22.198257 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:35:22.480223 kubelet[2521]: I0120 02:35:22.432636 2521 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:35:22.480223 kubelet[2521]: E0120 02:35:22.442741 2521 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Jan 20 02:35:22.480223 kubelet[2521]: E0120 02:35:22.460080 2521 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.112:6443: connect: connection refused" interval="1.6s" Jan 20 02:35:22.510337 kubelet[2521]: W0120 02:35:22.508868 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Jan 20 02:35:22.510337 kubelet[2521]: E0120 02:35:22.509071 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:35:22.532360 containerd[1618]: time="2026-01-20T02:35:22.527541755Z" level=info msg="connecting to shim 491ca1444a259eca1d7442f6e71a23aa604ad94d7c8779413e68338acd988b39" address="unix:///run/containerd/s/aad175da361848472374090c1494f031aa41f2784ea7ed0f070687a14b87b173" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:35:22.656588 containerd[1618]: time="2026-01-20T02:35:22.656472452Z" level=info msg="connecting to shim c2244bae81b9b5f2a86c61f609170762a9bcbc5b65fe9d2d555b819a2b0d5860" address="unix:///run/containerd/s/c7bc081edb9335d86b14ba87eb169bd9e81c422c7006535b3ea9b0880cf3997b" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:35:22.953495 kubelet[2521]: E0120 02:35:22.948506 2521 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:35:23.470642 kubelet[2521]: I0120 02:35:23.470088 2521 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 
02:35:23.485994 kubelet[2521]: E0120 02:35:23.485877 2521 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Jan 20 02:35:23.827651 kubelet[2521]: W0120 02:35:23.826280 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Jan 20 02:35:23.835155 kubelet[2521]: E0120 02:35:23.834535 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:35:23.861135 containerd[1618]: time="2026-01-20T02:35:23.861069266Z" level=info msg="connecting to shim b7903211cb72abd1ba299ab38332090b778fcfcd540d4bf1e0b9834f11b359a8" address="unix:///run/containerd/s/2c6f1df074f26837b58e72c66949f6d981bebcb7a8532966a7fdfff4cc06c4bb" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:35:24.302513 kubelet[2521]: E0120 02:35:24.295315 2521 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="3.2s" Jan 20 02:35:24.577161 systemd[1]: Started cri-containerd-491ca1444a259eca1d7442f6e71a23aa604ad94d7c8779413e68338acd988b39.scope - libcontainer container 491ca1444a259eca1d7442f6e71a23aa604ad94d7c8779413e68338acd988b39. 
Jan 20 02:35:24.632329 systemd[1]: Started cri-containerd-c2244bae81b9b5f2a86c61f609170762a9bcbc5b65fe9d2d555b819a2b0d5860.scope - libcontainer container c2244bae81b9b5f2a86c61f609170762a9bcbc5b65fe9d2d555b819a2b0d5860. Jan 20 02:35:24.947113 kubelet[2521]: W0120 02:35:24.938997 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Jan 20 02:35:24.987757 kubelet[2521]: E0120 02:35:24.939330 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:35:24.987757 kubelet[2521]: W0120 02:35:24.961270 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Jan 20 02:35:24.987757 kubelet[2521]: E0120 02:35:24.961325 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:35:25.019884 kubelet[2521]: W0120 02:35:25.016067 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Jan 20 02:35:25.019884 kubelet[2521]: E0120 02:35:25.019457 2521 
reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:35:25.112566 kubelet[2521]: I0120 02:35:25.105910 2521 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:35:25.119653 kubelet[2521]: E0120 02:35:25.116125 2521 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Jan 20 02:35:25.573257 systemd[1]: Started cri-containerd-b7903211cb72abd1ba299ab38332090b778fcfcd540d4bf1e0b9834f11b359a8.scope - libcontainer container b7903211cb72abd1ba299ab38332090b778fcfcd540d4bf1e0b9834f11b359a8. Jan 20 02:35:26.590117 containerd[1618]: time="2026-01-20T02:35:26.589969155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1920566f175a8fb1774eb872fd23f484,Namespace:kube-system,Attempt:0,} returns sandbox id \"491ca1444a259eca1d7442f6e71a23aa604ad94d7c8779413e68338acd988b39\"" Jan 20 02:35:26.852131 kubelet[2521]: E0120 02:35:26.834034 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:26.868985 containerd[1618]: time="2026-01-20T02:35:26.868924240Z" level=info msg="CreateContainer within sandbox \"491ca1444a259eca1d7442f6e71a23aa604ad94d7c8779413e68338acd988b39\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 02:35:26.962502 containerd[1618]: time="2026-01-20T02:35:26.960754428Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2244bae81b9b5f2a86c61f609170762a9bcbc5b65fe9d2d555b819a2b0d5860\"" Jan 20 02:35:26.994881 kubelet[2521]: E0120 02:35:26.994766 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:27.012000 containerd[1618]: time="2026-01-20T02:35:27.010692182Z" level=info msg="CreateContainer within sandbox \"c2244bae81b9b5f2a86c61f609170762a9bcbc5b65fe9d2d555b819a2b0d5860\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 02:35:27.061224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount852176888.mount: Deactivated successfully. Jan 20 02:35:27.062728 containerd[1618]: time="2026-01-20T02:35:27.062569489Z" level=info msg="Container b0f3875644120d7787534418397dc80f15f8b12276d60bc45d24582bd89767e4: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:35:27.081878 containerd[1618]: time="2026-01-20T02:35:27.081619587Z" level=info msg="Container 35419ff581ca467f1717983ccec541355bfa0555bd91f4dbc41effa1d86d796f: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:35:27.100661 containerd[1618]: time="2026-01-20T02:35:27.100565280Z" level=info msg="CreateContainer within sandbox \"491ca1444a259eca1d7442f6e71a23aa604ad94d7c8779413e68338acd988b39\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b0f3875644120d7787534418397dc80f15f8b12276d60bc45d24582bd89767e4\"" Jan 20 02:35:27.104195 containerd[1618]: time="2026-01-20T02:35:27.102411430Z" level=info msg="StartContainer for \"b0f3875644120d7787534418397dc80f15f8b12276d60bc45d24582bd89767e4\"" Jan 20 02:35:27.106777 containerd[1618]: time="2026-01-20T02:35:27.106639611Z" level=info msg="connecting to shim b0f3875644120d7787534418397dc80f15f8b12276d60bc45d24582bd89767e4" 
address="unix:///run/containerd/s/aad175da361848472374090c1494f031aa41f2784ea7ed0f070687a14b87b173" protocol=ttrpc version=3 Jan 20 02:35:27.111756 containerd[1618]: time="2026-01-20T02:35:27.109054907Z" level=info msg="CreateContainer within sandbox \"c2244bae81b9b5f2a86c61f609170762a9bcbc5b65fe9d2d555b819a2b0d5860\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"35419ff581ca467f1717983ccec541355bfa0555bd91f4dbc41effa1d86d796f\"" Jan 20 02:35:27.112427 containerd[1618]: time="2026-01-20T02:35:27.112397773Z" level=info msg="StartContainer for \"35419ff581ca467f1717983ccec541355bfa0555bd91f4dbc41effa1d86d796f\"" Jan 20 02:35:27.114278 containerd[1618]: time="2026-01-20T02:35:27.114251817Z" level=info msg="connecting to shim 35419ff581ca467f1717983ccec541355bfa0555bd91f4dbc41effa1d86d796f" address="unix:///run/containerd/s/c7bc081edb9335d86b14ba87eb169bd9e81c422c7006535b3ea9b0880cf3997b" protocol=ttrpc version=3 Jan 20 02:35:27.137531 containerd[1618]: time="2026-01-20T02:35:27.137484150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7903211cb72abd1ba299ab38332090b778fcfcd540d4bf1e0b9834f11b359a8\"" Jan 20 02:35:27.143420 kubelet[2521]: E0120 02:35:27.139262 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:27.145701 kubelet[2521]: E0120 02:35:27.145618 2521 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:35:27.152200 
containerd[1618]: time="2026-01-20T02:35:27.152152552Z" level=info msg="CreateContainer within sandbox \"b7903211cb72abd1ba299ab38332090b778fcfcd540d4bf1e0b9834f11b359a8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 02:35:27.188949 systemd[1]: Started cri-containerd-35419ff581ca467f1717983ccec541355bfa0555bd91f4dbc41effa1d86d796f.scope - libcontainer container 35419ff581ca467f1717983ccec541355bfa0555bd91f4dbc41effa1d86d796f. Jan 20 02:35:27.215881 systemd[1]: Started cri-containerd-b0f3875644120d7787534418397dc80f15f8b12276d60bc45d24582bd89767e4.scope - libcontainer container b0f3875644120d7787534418397dc80f15f8b12276d60bc45d24582bd89767e4. Jan 20 02:35:27.231738 containerd[1618]: time="2026-01-20T02:35:27.231687888Z" level=info msg="Container d39e39c7ee286dee8b916f7fa755e9314ad592de80b2590c56f94714ec5e898f: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:35:27.367262 containerd[1618]: time="2026-01-20T02:35:27.367020663Z" level=info msg="CreateContainer within sandbox \"b7903211cb72abd1ba299ab38332090b778fcfcd540d4bf1e0b9834f11b359a8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d39e39c7ee286dee8b916f7fa755e9314ad592de80b2590c56f94714ec5e898f\"" Jan 20 02:35:27.373589 containerd[1618]: time="2026-01-20T02:35:27.373550131Z" level=info msg="StartContainer for \"d39e39c7ee286dee8b916f7fa755e9314ad592de80b2590c56f94714ec5e898f\"" Jan 20 02:35:27.387110 containerd[1618]: time="2026-01-20T02:35:27.387058360Z" level=info msg="connecting to shim d39e39c7ee286dee8b916f7fa755e9314ad592de80b2590c56f94714ec5e898f" address="unix:///run/containerd/s/2c6f1df074f26837b58e72c66949f6d981bebcb7a8532966a7fdfff4cc06c4bb" protocol=ttrpc version=3 Jan 20 02:35:27.425486 kubelet[2521]: E0120 02:35:27.425341 2521 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.112:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.112:6443: connect: connection refused" 
event="&Event{ObjectMeta:{localhost.188c4fd2ea38387e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:35:20.94104179 +0000 UTC m=+6.087876269,LastTimestamp:2026-01-20 02:35:20.94104179 +0000 UTC m=+6.087876269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:35:27.513435 kubelet[2521]: E0120 02:35:27.513381 2521 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="6.4s" Jan 20 02:35:27.525894 systemd[1]: Started cri-containerd-d39e39c7ee286dee8b916f7fa755e9314ad592de80b2590c56f94714ec5e898f.scope - libcontainer container d39e39c7ee286dee8b916f7fa755e9314ad592de80b2590c56f94714ec5e898f. 
Jan 20 02:35:27.792856 containerd[1618]: time="2026-01-20T02:35:27.792596440Z" level=info msg="StartContainer for \"35419ff581ca467f1717983ccec541355bfa0555bd91f4dbc41effa1d86d796f\" returns successfully" Jan 20 02:35:27.934455 kubelet[2521]: W0120 02:35:27.928996 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Jan 20 02:35:27.934455 kubelet[2521]: E0120 02:35:27.929129 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:35:27.979518 containerd[1618]: time="2026-01-20T02:35:27.971597205Z" level=info msg="StartContainer for \"b0f3875644120d7787534418397dc80f15f8b12276d60bc45d24582bd89767e4\" returns successfully" Jan 20 02:35:28.157309 kubelet[2521]: E0120 02:35:28.157232 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:35:28.157855 kubelet[2521]: E0120 02:35:28.157783 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:28.337035 kubelet[2521]: I0120 02:35:28.335306 2521 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:35:28.350416 containerd[1618]: time="2026-01-20T02:35:28.347621859Z" level=info msg="StartContainer for \"d39e39c7ee286dee8b916f7fa755e9314ad592de80b2590c56f94714ec5e898f\" returns successfully" Jan 20 02:35:29.244550 kubelet[2521]: E0120 02:35:29.239417 2521 
kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:35:29.244550 kubelet[2521]: E0120 02:35:29.242638 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:29.244550 kubelet[2521]: E0120 02:35:29.249447 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:35:29.302041 kubelet[2521]: E0120 02:35:29.249633 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:30.322806 kubelet[2521]: E0120 02:35:30.320319 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:35:30.322806 kubelet[2521]: E0120 02:35:30.320748 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:30.376418 kubelet[2521]: E0120 02:35:30.367746 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:35:30.376418 kubelet[2521]: E0120 02:35:30.368041 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:31.430633 kubelet[2521]: E0120 02:35:31.429667 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:35:31.430633 kubelet[2521]: 
E0120 02:35:31.430259 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:31.457391 kubelet[2521]: E0120 02:35:31.451454 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:35:31.457391 kubelet[2521]: E0120 02:35:31.451638 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:31.457391 kubelet[2521]: E0120 02:35:31.451891 2521 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 02:35:34.722699 kubelet[2521]: E0120 02:35:34.720571 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:35:34.722699 kubelet[2521]: E0120 02:35:34.728286 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:34.743980 kubelet[2521]: E0120 02:35:34.743620 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:35:34.743980 kubelet[2521]: E0120 02:35:34.743925 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:36.639473 kubelet[2521]: E0120 02:35:36.629134 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:35:36.639473 
kubelet[2521]: E0120 02:35:36.629569 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:38.338759 kubelet[2521]: E0120 02:35:38.338687 2521 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 20 02:35:38.896650 kubelet[2521]: W0120 02:35:38.896376 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 02:35:38.896650 kubelet[2521]: E0120 02:35:38.896456 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 02:35:40.771706 kubelet[2521]: W0120 02:35:40.771562 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 02:35:40.771706 kubelet[2521]: E0120 02:35:40.771659 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 02:35:41.329803 kubelet[2521]: W0120 02:35:41.328680 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 02:35:41.329803 kubelet[2521]: E0120 02:35:41.328916 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 02:35:41.454478 kubelet[2521]: E0120 02:35:41.452958 2521 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 02:35:43.151530 kubelet[2521]: E0120 02:35:43.144657 2521 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 20 02:35:43.492433 kubelet[2521]: E0120 02:35:43.489237 2521 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c4fd2ea38387e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:35:20.94104179 +0000 UTC m=+6.087876269,LastTimestamp:2026-01-20 02:35:20.94104179 +0000 UTC m=+6.087876269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:35:44.037501 kubelet[2521]: E0120 02:35:44.037209 2521 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 20 02:35:44.583773 kubelet[2521]: E0120 02:35:44.580482 2521 kubelet.go:3190] "No need to create 
a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:35:44.583773 kubelet[2521]: E0120 02:35:44.581116 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:44.754191 kubelet[2521]: I0120 02:35:44.754080 2521 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:35:45.517437 kubelet[2521]: I0120 02:35:45.508877 2521 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 02:35:45.517437 kubelet[2521]: E0120 02:35:45.509106 2521 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 02:35:48.037160 kubelet[2521]: E0120 02:35:48.023556 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:35:48.037160 kubelet[2521]: E0120 02:35:48.025019 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:48.724383 kubelet[2521]: E0120 02:35:48.716191 2521 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:35:50.150159 kubelet[2521]: E0120 02:35:48.878331 2521 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:35:50.150159 kubelet[2521]: E0120 02:35:48.988725 2521 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:35:50.150159 kubelet[2521]: E0120 02:35:49.201613 2521 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:35:50.150159 
kubelet[2521]: E0120 02:35:49.605185 2521 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:35:50.437039 kubelet[2521]: E0120 02:35:50.247961 2521 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:35:50.437039 kubelet[2521]: E0120 02:35:50.382808 2521 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:35:50.517644 kubelet[2521]: E0120 02:35:50.486041 2521 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:35:50.632015 kubelet[2521]: E0120 02:35:50.614658 2521 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:35:51.002060 kubelet[2521]: E0120 02:35:50.763731 2521 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:35:51.002060 kubelet[2521]: I0120 02:35:50.792112 2521 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 02:35:51.465403 kubelet[2521]: I0120 02:35:51.330399 2521 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 02:35:51.616550 kubelet[2521]: I0120 02:35:51.610307 2521 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 02:35:51.616550 kubelet[2521]: I0120 02:35:51.614139 2521 apiserver.go:52] "Watching apiserver" Jan 20 02:35:51.661449 kubelet[2521]: E0120 02:35:51.661375 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:51.671493 kubelet[2521]: E0120 02:35:51.671021 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:51.692185 kubelet[2521]: E0120 02:35:51.690757 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:51.713198 kubelet[2521]: I0120 02:35:51.713050 2521 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 02:36:01.439768 kubelet[2521]: I0120 02:36:01.439394 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=10.439374195 podStartE2EDuration="10.439374195s" podCreationTimestamp="2026-01-20 02:35:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:36:01.421598445 +0000 UTC m=+46.568432944" watchObservedRunningTime="2026-01-20 02:36:01.439374195 +0000 UTC m=+46.586208674" Jan 20 02:36:01.701466 kubelet[2521]: I0120 02:36:01.699301 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=10.699275703 podStartE2EDuration="10.699275703s" podCreationTimestamp="2026-01-20 02:35:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:36:01.595957474 +0000 UTC m=+46.742791973" watchObservedRunningTime="2026-01-20 02:36:01.699275703 +0000 UTC m=+46.846110183" Jan 20 02:36:01.701466 kubelet[2521]: I0120 02:36:01.699753 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=10.699739203 podStartE2EDuration="10.699739203s" podCreationTimestamp="2026-01-20 02:35:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:36:01.698987736 +0000 UTC m=+46.845822215" watchObservedRunningTime="2026-01-20 02:36:01.699739203 +0000 UTC m=+46.846573693"
Jan 20 02:36:04.608019 kubelet[2521]: E0120 02:36:04.599650 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:36:05.172538 kubelet[2521]: E0120 02:36:05.172120 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:36:05.175975 systemd[1]: Reload requested from client PID 2797 ('systemctl') (unit session-6.scope)...
Jan 20 02:36:05.186712 systemd[1]: Reloading...
Jan 20 02:36:06.582719 zram_generator::config[2844]: No configuration found.
Jan 20 02:36:09.095283 systemd[1]: Reloading finished in 3907 ms.
Jan 20 02:36:09.284522 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:36:09.334211 systemd[1]: kubelet.service: Deactivated successfully.
Jan 20 02:36:09.334678 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:36:09.334755 systemd[1]: kubelet.service: Consumed 5.952s CPU time, 137.6M memory peak.
Jan 20 02:36:09.356608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:36:10.757374 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:36:10.868364 (kubelet)[2889]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 20 02:36:11.824230 kubelet[2889]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 02:36:11.824230 kubelet[2889]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 20 02:36:11.824230 kubelet[2889]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 02:36:11.825161 kubelet[2889]: I0120 02:36:11.825099 2889 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 20 02:36:12.117922 kubelet[2889]: I0120 02:36:12.116593 2889 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 20 02:36:12.117922 kubelet[2889]: I0120 02:36:12.116631 2889 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 20 02:36:12.117922 kubelet[2889]: I0120 02:36:12.117125 2889 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 20 02:36:12.164040 kubelet[2889]: I0120 02:36:12.150546 2889 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 20 02:36:12.259587 kubelet[2889]: I0120 02:36:12.246456 2889 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 20 02:36:12.414712 kubelet[2889]: I0120 02:36:12.396216 2889 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 20 02:36:12.458962 kubelet[2889]: I0120 02:36:12.433681 2889 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 20 02:36:12.458962 kubelet[2889]: I0120 02:36:12.434298 2889 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 20 02:36:12.458962 kubelet[2889]: I0120 02:36:12.434333 2889 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 20 02:36:12.458962 kubelet[2889]: I0120 02:36:12.434744 2889 topology_manager.go:138] "Creating topology manager with none policy"
Jan 20 02:36:12.459394 kubelet[2889]: I0120 02:36:12.434762 2889 container_manager_linux.go:304] "Creating device plugin manager"
Jan 20 02:36:12.459394 kubelet[2889]: I0120 02:36:12.434987 2889 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 02:36:12.459394 kubelet[2889]: I0120 02:36:12.435203 2889 kubelet.go:446] "Attempting to sync node with API server"
Jan 20 02:36:12.459394 kubelet[2889]: I0120 02:36:12.435234 2889 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 20 02:36:12.459394 kubelet[2889]: I0120 02:36:12.435347 2889 kubelet.go:352] "Adding apiserver pod source"
Jan 20 02:36:12.459394 kubelet[2889]: I0120 02:36:12.435370 2889 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 20 02:36:12.459394 kubelet[2889]: I0120 02:36:12.453016 2889 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Jan 20 02:36:12.459394 kubelet[2889]: I0120 02:36:12.453615 2889 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 20 02:36:12.459394 kubelet[2889]: I0120 02:36:12.454445 2889 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 20 02:36:12.459394 kubelet[2889]: I0120 02:36:12.454487 2889 server.go:1287] "Started kubelet"
Jan 20 02:36:12.469994 kubelet[2889]: I0120 02:36:12.463461 2889 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 20 02:36:12.469994 kubelet[2889]: E0120 02:36:12.468576 2889 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 20 02:36:12.469994 kubelet[2889]: I0120 02:36:12.469039 2889 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 20 02:36:12.470363 kubelet[2889]: I0120 02:36:12.470242 2889 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 20 02:36:12.485068 kubelet[2889]: I0120 02:36:12.473393 2889 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 20 02:36:12.485068 kubelet[2889]: I0120 02:36:12.474779 2889 server.go:479] "Adding debug handlers to kubelet server"
Jan 20 02:36:12.485068 kubelet[2889]: I0120 02:36:12.477195 2889 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 20 02:36:12.485068 kubelet[2889]: I0120 02:36:12.479156 2889 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 20 02:36:12.485068 kubelet[2889]: E0120 02:36:12.479293 2889 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:36:12.485068 kubelet[2889]: I0120 02:36:12.479643 2889 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 20 02:36:12.485068 kubelet[2889]: I0120 02:36:12.479812 2889 reconciler.go:26] "Reconciler: start to sync state"
Jan 20 02:36:12.555108 kubelet[2889]: I0120 02:36:12.547149 2889 factory.go:221] Registration of the containerd container factory successfully
Jan 20 02:36:12.555108 kubelet[2889]: I0120 02:36:12.547193 2889 factory.go:221] Registration of the systemd container factory successfully
Jan 20 02:36:12.555108 kubelet[2889]: I0120 02:36:12.547372 2889 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 20 02:36:12.644018 kubelet[2889]: E0120 02:36:12.636155 2889 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:36:12.743653 kubelet[2889]: E0120 02:36:12.742593 2889 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:36:12.945179 kubelet[2889]: I0120 02:36:12.943546 2889 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 20 02:36:13.075475 kubelet[2889]: I0120 02:36:13.070564 2889 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 20 02:36:13.075475 kubelet[2889]: I0120 02:36:13.070653 2889 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 20 02:36:13.075475 kubelet[2889]: I0120 02:36:13.070685 2889 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 20 02:36:13.075475 kubelet[2889]: I0120 02:36:13.070695 2889 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 20 02:36:13.075475 kubelet[2889]: E0120 02:36:13.070794 2889 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 20 02:36:13.176111 kubelet[2889]: E0120 02:36:13.176053 2889 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 20 02:36:13.502468 kubelet[2889]: E0120 02:36:13.414072 2889 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 20 02:36:13.502468 kubelet[2889]: I0120 02:36:13.488671 2889 apiserver.go:52] "Watching apiserver"
Jan 20 02:36:13.746634 kubelet[2889]: I0120 02:36:13.744746 2889 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 20 02:36:13.746634 kubelet[2889]: I0120 02:36:13.744785 2889 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 20 02:36:13.746634 kubelet[2889]: I0120 02:36:13.744949 2889 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 02:36:13.746634 kubelet[2889]: I0120 02:36:13.745298 2889 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 20 02:36:13.746634 kubelet[2889]: I0120 02:36:13.745322 2889 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 20 02:36:13.746634 kubelet[2889]: I0120 02:36:13.745347 2889 policy_none.go:49] "None policy: Start"
Jan 20 02:36:13.746634 kubelet[2889]: I0120 02:36:13.745364 2889 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 20 02:36:13.746634 kubelet[2889]: I0120 02:36:13.745383 2889 state_mem.go:35] "Initializing new in-memory state store"
Jan 20 02:36:13.746634 kubelet[2889]: I0120 02:36:13.745561 2889 state_mem.go:75] "Updated machine memory state"
Jan 20 02:36:13.814212 kubelet[2889]: I0120 02:36:13.812369 2889 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 20 02:36:13.815904 kubelet[2889]: I0120 02:36:13.815287 2889 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 20 02:36:13.815904 kubelet[2889]: I0120 02:36:13.815325 2889 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 20 02:36:13.816406 kubelet[2889]: I0120 02:36:13.816377 2889 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 20 02:36:13.884536 kubelet[2889]: I0120 02:36:13.881765 2889 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 20 02:36:13.919937 kubelet[2889]: I0120 02:36:13.919888 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:36:13.925741 kubelet[2889]: I0120 02:36:13.925691 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1920566f175a8fb1774eb872fd23f484-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1920566f175a8fb1774eb872fd23f484\") " pod="kube-system/kube-apiserver-localhost"
Jan 20 02:36:13.926103 kubelet[2889]: I0120 02:36:13.926068 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1920566f175a8fb1774eb872fd23f484-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1920566f175a8fb1774eb872fd23f484\") " pod="kube-system/kube-apiserver-localhost"
Jan 20 02:36:13.926234 kubelet[2889]: I0120 02:36:13.926214 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:36:13.926338 kubelet[2889]: I0120 02:36:13.926307 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:36:13.926427 kubelet[2889]: I0120 02:36:13.926409 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1920566f175a8fb1774eb872fd23f484-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1920566f175a8fb1774eb872fd23f484\") " pod="kube-system/kube-apiserver-localhost"
Jan 20 02:36:13.926520 kubelet[2889]: I0120 02:36:13.926503 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:36:13.933540 kubelet[2889]: I0120 02:36:13.886330 2889 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 20 02:36:13.933540 kubelet[2889]: E0120 02:36:13.919524 2889 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 20 02:36:13.976176 containerd[1618]: time="2026-01-20T02:36:13.974470869Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 20 02:36:14.085347 kubelet[2889]: I0120 02:36:13.977104 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:36:14.085347 kubelet[2889]: I0120 02:36:13.977176 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost"
Jan 20 02:36:14.085347 kubelet[2889]: I0120 02:36:13.981497 2889 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 20 02:36:14.119267 kubelet[2889]: E0120 02:36:14.118561 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:36:14.124389 kubelet[2889]: E0120 02:36:14.124352 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:36:14.128562 kubelet[2889]: E0120 02:36:14.128491 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:36:14.563712 kubelet[2889]: I0120 02:36:14.562502 2889 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 20 02:36:14.889049 kubelet[2889]: E0120 02:36:14.888933 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:36:15.075535 kubelet[2889]: E0120 02:36:14.904878 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:36:15.831393 kubelet[2889]: I0120 02:36:15.828433 2889 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jan 20 02:36:15.831393 kubelet[2889]: I0120 02:36:15.828583 2889 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 20 02:36:15.897326 kubelet[2889]: E0120 02:36:15.891764 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:36:16.968481 kubelet[2889]: I0120 02:36:16.968298 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d808bc63-f0f6-4446-b790-8d324e5894ac-lib-modules\") pod \"kube-proxy-bgbq5\" (UID: \"d808bc63-f0f6-4446-b790-8d324e5894ac\") " pod="kube-system/kube-proxy-bgbq5"
Jan 20 02:36:17.030339 kubelet[2889]: I0120 02:36:16.972412 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d808bc63-f0f6-4446-b790-8d324e5894ac-kube-proxy\") pod \"kube-proxy-bgbq5\" (UID: \"d808bc63-f0f6-4446-b790-8d324e5894ac\") " pod="kube-system/kube-proxy-bgbq5"
Jan 20 02:36:17.030339 kubelet[2889]: I0120 02:36:16.972479 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d808bc63-f0f6-4446-b790-8d324e5894ac-xtables-lock\") pod \"kube-proxy-bgbq5\" (UID: \"d808bc63-f0f6-4446-b790-8d324e5894ac\") " pod="kube-system/kube-proxy-bgbq5"
Jan 20 02:36:17.030339 kubelet[2889]: I0120 02:36:16.972515 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt42q\" (UniqueName: \"kubernetes.io/projected/d808bc63-f0f6-4446-b790-8d324e5894ac-kube-api-access-dt42q\") pod \"kube-proxy-bgbq5\" (UID: \"d808bc63-f0f6-4446-b790-8d324e5894ac\") " pod="kube-system/kube-proxy-bgbq5"
Jan 20 02:36:17.108302 systemd[1]: Created slice kubepods-besteffort-podd808bc63_f0f6_4446_b790_8d324e5894ac.slice - libcontainer container kubepods-besteffort-podd808bc63_f0f6_4446_b790_8d324e5894ac.slice.
Jan 20 02:36:17.522885 kubelet[2889]: E0120 02:36:17.508746 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:36:17.570575 kubelet[2889]: E0120 02:36:17.566115 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:36:17.584233 containerd[1618]: time="2026-01-20T02:36:17.583962836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bgbq5,Uid:d808bc63-f0f6-4446-b790-8d324e5894ac,Namespace:kube-system,Attempt:0,}"
Jan 20 02:36:17.754480 containerd[1618]: time="2026-01-20T02:36:17.753204388Z" level=info msg="connecting to shim 38194fd7276e29a935065ce27ddd067eaa459ef2f637032c212c433effde2922" address="unix:///run/containerd/s/2c21ad13abca84434a9e278418d1332c0b87fc757ef0891ed4e44a023382bce7" namespace=k8s.io protocol=ttrpc version=3
Jan 20 02:36:17.917305 kubelet[2889]: E0120 02:36:17.915601 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:36:17.989318 systemd[1]: Started cri-containerd-38194fd7276e29a935065ce27ddd067eaa459ef2f637032c212c433effde2922.scope - libcontainer container 38194fd7276e29a935065ce27ddd067eaa459ef2f637032c212c433effde2922.
Jan 20 02:36:18.471614 containerd[1618]: time="2026-01-20T02:36:18.471414375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bgbq5,Uid:d808bc63-f0f6-4446-b790-8d324e5894ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"38194fd7276e29a935065ce27ddd067eaa459ef2f637032c212c433effde2922\""
Jan 20 02:36:18.487181 kubelet[2889]: E0120 02:36:18.486553 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:36:18.563909 containerd[1618]: time="2026-01-20T02:36:18.556716678Z" level=info msg="CreateContainer within sandbox \"38194fd7276e29a935065ce27ddd067eaa459ef2f637032c212c433effde2922\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 20 02:36:18.673637 containerd[1618]: time="2026-01-20T02:36:18.673528294Z" level=info msg="Container 047c9c456a14e7560381299a68999f978becab7b54b7148407ec376795bbcdaa: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:36:18.716554 containerd[1618]: time="2026-01-20T02:36:18.716360736Z" level=info msg="CreateContainer within sandbox \"38194fd7276e29a935065ce27ddd067eaa459ef2f637032c212c433effde2922\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"047c9c456a14e7560381299a68999f978becab7b54b7148407ec376795bbcdaa\""
Jan 20 02:36:18.718952 containerd[1618]: time="2026-01-20T02:36:18.718906628Z" level=info msg="StartContainer for \"047c9c456a14e7560381299a68999f978becab7b54b7148407ec376795bbcdaa\""
Jan 20 02:36:18.733708 containerd[1618]: time="2026-01-20T02:36:18.733116048Z" level=info msg="connecting to shim 047c9c456a14e7560381299a68999f978becab7b54b7148407ec376795bbcdaa" address="unix:///run/containerd/s/2c21ad13abca84434a9e278418d1332c0b87fc757ef0891ed4e44a023382bce7" protocol=ttrpc version=3
Jan 20 02:36:18.925311 systemd[1]: Started cri-containerd-047c9c456a14e7560381299a68999f978becab7b54b7148407ec376795bbcdaa.scope - libcontainer container 047c9c456a14e7560381299a68999f978becab7b54b7148407ec376795bbcdaa.
Jan 20 02:36:19.702459 kubelet[2889]: E0120 02:36:19.702403 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:36:20.051239 containerd[1618]: time="2026-01-20T02:36:20.044811255Z" level=info msg="StartContainer for \"047c9c456a14e7560381299a68999f978becab7b54b7148407ec376795bbcdaa\" returns successfully"
Jan 20 02:36:20.258053 kubelet[2889]: E0120 02:36:20.248453 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:36:20.292077 kubelet[2889]: E0120 02:36:20.290094 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:36:20.531151 kubelet[2889]: E0120 02:36:20.528471 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:36:21.066485 kubelet[2889]: I0120 02:36:21.065287 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bgbq5" podStartSLOduration=9.065255907 podStartE2EDuration="9.065255907s" podCreationTimestamp="2026-01-20 02:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:36:20.999000786 +0000 UTC m=+10.053974266" watchObservedRunningTime="2026-01-20 02:36:21.065255907 +0000 UTC m=+10.120229406"
Jan 20 02:36:21.340622 kubelet[2889]: E0120 02:36:21.328809 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:36:21.340622 kubelet[2889]: E0120 02:36:21.330619 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:36:23.055936 kubelet[2889]: I0120 02:36:23.034625 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5be531e4-1795-4ff0-b23e-8d2215836e98-run\") pod \"kube-flannel-ds-bhwn5\" (UID: \"5be531e4-1795-4ff0-b23e-8d2215836e98\") " pod="kube-flannel/kube-flannel-ds-bhwn5"
Jan 20 02:36:23.055936 kubelet[2889]: I0120 02:36:23.042632 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/5be531e4-1795-4ff0-b23e-8d2215836e98-cni-plugin\") pod \"kube-flannel-ds-bhwn5\" (UID: \"5be531e4-1795-4ff0-b23e-8d2215836e98\") " pod="kube-flannel/kube-flannel-ds-bhwn5"
Jan 20 02:36:23.055936 kubelet[2889]: I0120 02:36:23.042884 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/5be531e4-1795-4ff0-b23e-8d2215836e98-flannel-cfg\") pod \"kube-flannel-ds-bhwn5\" (UID: \"5be531e4-1795-4ff0-b23e-8d2215836e98\") " pod="kube-flannel/kube-flannel-ds-bhwn5"
Jan 20 02:36:23.055936 kubelet[2889]: I0120 02:36:23.042917 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5t85\" (UniqueName: \"kubernetes.io/projected/5be531e4-1795-4ff0-b23e-8d2215836e98-kube-api-access-z5t85\") pod \"kube-flannel-ds-bhwn5\" (UID: \"5be531e4-1795-4ff0-b23e-8d2215836e98\") " pod="kube-flannel/kube-flannel-ds-bhwn5"
Jan 20 02:36:23.055936 kubelet[2889]: I0120 02:36:23.043004 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5be531e4-1795-4ff0-b23e-8d2215836e98-xtables-lock\") pod \"kube-flannel-ds-bhwn5\" (UID: \"5be531e4-1795-4ff0-b23e-8d2215836e98\") " pod="kube-flannel/kube-flannel-ds-bhwn5"
Jan 20 02:36:23.059782 kubelet[2889]: I0120 02:36:23.043105 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/5be531e4-1795-4ff0-b23e-8d2215836e98-cni\") pod \"kube-flannel-ds-bhwn5\" (UID: \"5be531e4-1795-4ff0-b23e-8d2215836e98\") " pod="kube-flannel/kube-flannel-ds-bhwn5"
Jan 20 02:36:23.099726 systemd[1]: Created slice kubepods-burstable-pod5be531e4_1795_4ff0_b23e_8d2215836e98.slice - libcontainer container kubepods-burstable-pod5be531e4_1795_4ff0_b23e_8d2215836e98.slice.
Jan 20 02:36:23.489546 kubelet[2889]: E0120 02:36:23.486162 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:36:23.505515 containerd[1618]: time="2026-01-20T02:36:23.503535210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-bhwn5,Uid:5be531e4-1795-4ff0-b23e-8d2215836e98,Namespace:kube-flannel,Attempt:0,}"
Jan 20 02:36:23.844264 sudo[1784]: pam_unix(sudo:session): session closed for user root
Jan 20 02:36:23.933176 sshd[1783]: Connection closed by 10.0.0.1 port 55382
Jan 20 02:36:23.940325 sshd-session[1778]: pam_unix(sshd:session): session closed for user core
Jan 20 02:36:23.956050 systemd-logind[1595]: Session 6 logged out. Waiting for processes to exit.
Jan 20 02:36:24.146529 systemd[1]: sshd@4-10.0.0.112:22-10.0.0.1:55382.service: Deactivated successfully.
Jan 20 02:36:24.228340 systemd[1]: session-6.scope: Deactivated successfully.
Jan 20 02:36:24.242284 systemd[1]: session-6.scope: Consumed 8.091s CPU time, 221M memory peak.
Jan 20 02:36:24.388421 containerd[1618]: time="2026-01-20T02:36:24.385768162Z" level=info msg="connecting to shim 51fcb9e3aef183ac9ca4d4aa72db04c8693481e43267efb5f00c17f5ec700494" address="unix:///run/containerd/s/663ae3d4502f2b35d3704715f9d641388abb3bb675964798818f7e6eb51d7808" namespace=k8s.io protocol=ttrpc version=3
Jan 20 02:36:24.403783 systemd-logind[1595]: Removed session 6.
Jan 20 02:36:24.615153 systemd[1]: Started cri-containerd-51fcb9e3aef183ac9ca4d4aa72db04c8693481e43267efb5f00c17f5ec700494.scope - libcontainer container 51fcb9e3aef183ac9ca4d4aa72db04c8693481e43267efb5f00c17f5ec700494.
Jan 20 02:36:25.199293 containerd[1618]: time="2026-01-20T02:36:25.199112430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-bhwn5,Uid:5be531e4-1795-4ff0-b23e-8d2215836e98,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"51fcb9e3aef183ac9ca4d4aa72db04c8693481e43267efb5f00c17f5ec700494\""
Jan 20 02:36:25.223038 kubelet[2889]: E0120 02:36:25.222885 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:36:25.229952 containerd[1618]: time="2026-01-20T02:36:25.229080122Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Jan 20 02:36:27.319813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1539436564.mount: Deactivated successfully.
Jan 20 02:36:27.673047 containerd[1618]: time="2026-01-20T02:36:27.671532040Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:36:27.686871 containerd[1618]: time="2026-01-20T02:36:27.686535681Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=0"
Jan 20 02:36:27.691902 containerd[1618]: time="2026-01-20T02:36:27.691706626Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:36:27.709341 containerd[1618]: time="2026-01-20T02:36:27.707306020Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:36:27.709341 containerd[1618]: time="2026-01-20T02:36:27.708573365Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.479380786s"
Jan 20 02:36:27.709341 containerd[1618]: time="2026-01-20T02:36:27.708654624Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\""
Jan 20 02:36:27.731893 containerd[1618]: time="2026-01-20T02:36:27.729045936Z" level=info msg="CreateContainer within sandbox \"51fcb9e3aef183ac9ca4d4aa72db04c8693481e43267efb5f00c17f5ec700494\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Jan 20 02:36:27.871265 containerd[1618]: time="2026-01-20T02:36:27.865642270Z" level=info msg="Container 0db7310232c3a21a1124eadc93ce2a43025466b7b35d007e1e6c0aab7287821e: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:36:27.940900 containerd[1618]: time="2026-01-20T02:36:27.940502160Z" level=info msg="CreateContainer within sandbox \"51fcb9e3aef183ac9ca4d4aa72db04c8693481e43267efb5f00c17f5ec700494\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"0db7310232c3a21a1124eadc93ce2a43025466b7b35d007e1e6c0aab7287821e\""
Jan 20 02:36:27.981070 containerd[1618]: time="2026-01-20T02:36:27.978295872Z" level=info msg="StartContainer for \"0db7310232c3a21a1124eadc93ce2a43025466b7b35d007e1e6c0aab7287821e\""
Jan 20 02:36:27.992727 containerd[1618]: time="2026-01-20T02:36:27.991127111Z" level=info msg="connecting to shim 0db7310232c3a21a1124eadc93ce2a43025466b7b35d007e1e6c0aab7287821e" address="unix:///run/containerd/s/663ae3d4502f2b35d3704715f9d641388abb3bb675964798818f7e6eb51d7808" protocol=ttrpc version=3
Jan 20 02:36:28.211755 systemd[1]: Started cri-containerd-0db7310232c3a21a1124eadc93ce2a43025466b7b35d007e1e6c0aab7287821e.scope - libcontainer container 0db7310232c3a21a1124eadc93ce2a43025466b7b35d007e1e6c0aab7287821e.
Jan 20 02:36:28.657481 systemd[1]: cri-containerd-0db7310232c3a21a1124eadc93ce2a43025466b7b35d007e1e6c0aab7287821e.scope: Deactivated successfully.
Jan 20 02:36:28.680713 containerd[1618]: time="2026-01-20T02:36:28.672939315Z" level=info msg="StartContainer for \"0db7310232c3a21a1124eadc93ce2a43025466b7b35d007e1e6c0aab7287821e\" returns successfully"
Jan 20 02:36:28.715610 containerd[1618]: time="2026-01-20T02:36:28.715519982Z" level=info msg="received container exit event container_id:\"0db7310232c3a21a1124eadc93ce2a43025466b7b35d007e1e6c0aab7287821e\" id:\"0db7310232c3a21a1124eadc93ce2a43025466b7b35d007e1e6c0aab7287821e\" pid:3229 exited_at:{seconds:1768876588 nanos:708018136}"
Jan 20 02:36:36.328734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0db7310232c3a21a1124eadc93ce2a43025466b7b35d007e1e6c0aab7287821e-rootfs.mount: Deactivated successfully.
Jan 20 02:36:37.510941 kubelet[2889]: E0120 02:36:37.507954 2889 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.419s"
Jan 20 02:36:37.537402 kubelet[2889]: E0120 02:36:37.527154 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:36:37.568310 containerd[1618]: time="2026-01-20T02:36:37.554234788Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Jan 20 02:36:37.613582 kubelet[2889]: E0120 02:36:37.613510 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:36:42.080915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount540671561.mount: Deactivated successfully.
Jan 20 02:36:49.693131 containerd[1618]: time="2026-01-20T02:36:49.692973542Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:36:49.703188 containerd[1618]: time="2026-01-20T02:36:49.702652806Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=25888700"
Jan 20 02:36:49.711004 containerd[1618]: time="2026-01-20T02:36:49.706606257Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:36:49.732684 containerd[1618]: time="2026-01-20T02:36:49.732398816Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:36:49.737480 containerd[1618]: time="2026-01-20T02:36:49.736793742Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 12.182387278s"
Jan 20 02:36:49.737480 containerd[1618]: time="2026-01-20T02:36:49.736873539Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\""
Jan 20 02:36:49.765626 containerd[1618]: time="2026-01-20T02:36:49.764080592Z" level=info msg="CreateContainer within sandbox \"51fcb9e3aef183ac9ca4d4aa72db04c8693481e43267efb5f00c17f5ec700494\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 20 02:36:49.853952 containerd[1618]: time="2026-01-20T02:36:49.853795361Z" level=info msg="Container d590d3fe1a57d5397bfd947c22de49531c72fd5b8a2540d09d22e14ceafacd42: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:36:49.979501 containerd[1618]: time="2026-01-20T02:36:49.978721504Z" level=info msg="CreateContainer within sandbox \"51fcb9e3aef183ac9ca4d4aa72db04c8693481e43267efb5f00c17f5ec700494\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d590d3fe1a57d5397bfd947c22de49531c72fd5b8a2540d09d22e14ceafacd42\""
Jan 20 02:36:50.024590 containerd[1618]: time="2026-01-20T02:36:50.024536786Z" level=info msg="StartContainer for \"d590d3fe1a57d5397bfd947c22de49531c72fd5b8a2540d09d22e14ceafacd42\""
Jan 20 02:36:50.034893 containerd[1618]: time="2026-01-20T02:36:50.034474258Z" level=info msg="connecting to shim d590d3fe1a57d5397bfd947c22de49531c72fd5b8a2540d09d22e14ceafacd42" address="unix:///run/containerd/s/663ae3d4502f2b35d3704715f9d641388abb3bb675964798818f7e6eb51d7808" protocol=ttrpc version=3
Jan 20 02:36:50.323247 systemd[1]: Started cri-containerd-d590d3fe1a57d5397bfd947c22de49531c72fd5b8a2540d09d22e14ceafacd42.scope - libcontainer container d590d3fe1a57d5397bfd947c22de49531c72fd5b8a2540d09d22e14ceafacd42.
Jan 20 02:36:50.709183 systemd[1]: cri-containerd-d590d3fe1a57d5397bfd947c22de49531c72fd5b8a2540d09d22e14ceafacd42.scope: Deactivated successfully.
Jan 20 02:36:50.723146 containerd[1618]: time="2026-01-20T02:36:50.716798002Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5be531e4_1795_4ff0_b23e_8d2215836e98.slice/cri-containerd-d590d3fe1a57d5397bfd947c22de49531c72fd5b8a2540d09d22e14ceafacd42.scope/memory.events\": no such file or directory" Jan 20 02:36:50.753434 containerd[1618]: time="2026-01-20T02:36:50.749329572Z" level=info msg="received container exit event container_id:\"d590d3fe1a57d5397bfd947c22de49531c72fd5b8a2540d09d22e14ceafacd42\" id:\"d590d3fe1a57d5397bfd947c22de49531c72fd5b8a2540d09d22e14ceafacd42\" pid:3298 exited_at:{seconds:1768876610 nanos:734436988}" Jan 20 02:36:50.768556 containerd[1618]: time="2026-01-20T02:36:50.766002454Z" level=info msg="StartContainer for \"d590d3fe1a57d5397bfd947c22de49531c72fd5b8a2540d09d22e14ceafacd42\" returns successfully" Jan 20 02:36:50.818216 kubelet[2889]: I0120 02:36:50.813690 2889 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 02:36:56.618420 kubelet[2889]: E0120 02:36:56.617385 2889 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.505s" Jan 20 02:36:58.288286 kubelet[2889]: E0120 02:36:58.287142 2889 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.67s" Jan 20 02:36:58.438656 kubelet[2889]: E0120 02:36:58.435430 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:36:58.515963 kubelet[2889]: I0120 02:36:58.508951 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghf8g\" (UniqueName: 
\"kubernetes.io/projected/f8e66bb5-534d-4cf9-b951-37a0ceef4ecf-kube-api-access-ghf8g\") pod \"coredns-668d6bf9bc-9z5t5\" (UID: \"f8e66bb5-534d-4cf9-b951-37a0ceef4ecf\") " pod="kube-system/coredns-668d6bf9bc-9z5t5" Jan 20 02:36:58.528260 kubelet[2889]: I0120 02:36:58.515122 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4398aea8-6855-45eb-b647-acec95e61b4f-config-volume\") pod \"coredns-668d6bf9bc-brwzc\" (UID: \"4398aea8-6855-45eb-b647-acec95e61b4f\") " pod="kube-system/coredns-668d6bf9bc-brwzc" Jan 20 02:36:58.528260 kubelet[2889]: I0120 02:36:58.523112 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8e66bb5-534d-4cf9-b951-37a0ceef4ecf-config-volume\") pod \"coredns-668d6bf9bc-9z5t5\" (UID: \"f8e66bb5-534d-4cf9-b951-37a0ceef4ecf\") " pod="kube-system/coredns-668d6bf9bc-9z5t5" Jan 20 02:36:58.526894 systemd[1]: Created slice kubepods-burstable-pod4398aea8_6855_45eb_b647_acec95e61b4f.slice - libcontainer container kubepods-burstable-pod4398aea8_6855_45eb_b647_acec95e61b4f.slice. Jan 20 02:36:58.529958 kubelet[2889]: I0120 02:36:58.529771 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6grlm\" (UniqueName: \"kubernetes.io/projected/4398aea8-6855-45eb-b647-acec95e61b4f-kube-api-access-6grlm\") pod \"coredns-668d6bf9bc-brwzc\" (UID: \"4398aea8-6855-45eb-b647-acec95e61b4f\") " pod="kube-system/coredns-668d6bf9bc-brwzc" Jan 20 02:36:58.618631 systemd[1]: Created slice kubepods-burstable-podf8e66bb5_534d_4cf9_b951_37a0ceef4ecf.slice - libcontainer container kubepods-burstable-podf8e66bb5_534d_4cf9_b951_37a0ceef4ecf.slice. 
Jan 20 02:36:58.683710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d590d3fe1a57d5397bfd947c22de49531c72fd5b8a2540d09d22e14ceafacd42-rootfs.mount: Deactivated successfully. Jan 20 02:36:58.920895 kubelet[2889]: E0120 02:36:58.873465 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:36:58.926499 containerd[1618]: time="2026-01-20T02:36:58.879690589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-brwzc,Uid:4398aea8-6855-45eb-b647-acec95e61b4f,Namespace:kube-system,Attempt:0,}" Jan 20 02:36:58.955632 kubelet[2889]: E0120 02:36:58.948542 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:36:58.955979 containerd[1618]: time="2026-01-20T02:36:58.950000581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9z5t5,Uid:f8e66bb5-534d-4cf9-b951-37a0ceef4ecf,Namespace:kube-system,Attempt:0,}" Jan 20 02:36:59.470472 kubelet[2889]: E0120 02:36:59.465307 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:36:59.490896 containerd[1618]: time="2026-01-20T02:36:59.489662386Z" level=info msg="CreateContainer within sandbox \"51fcb9e3aef183ac9ca4d4aa72db04c8693481e43267efb5f00c17f5ec700494\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 20 02:36:59.639283 containerd[1618]: time="2026-01-20T02:36:59.638610654Z" level=info msg="Container a73bf7e2fe749760082f8b0eab2980425a9a328cee151649496e058162841ce5: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:36:59.686012 systemd[1]: run-netns-cni\x2dad22be59\x2ddb76\x2d99a9\x2da47b\x2df9df79d5c868.mount: Deactivated successfully. 
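The two RunPodSandbox attempts just issued fail shortly after this point with `loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory`, and clear up only once the kube-flannel container runs and writes that file. For reference, `subnet.env` is a plain environment file; the values below are illustrative only, inferred from the `192.168.0.0/17` network, the `192.168.0.0/24` node range, and the `mtu 1450` that appear in the delegate CNI config later in this log:

```shell
# /run/flannel/subnet.env — written by flanneld on startup.
# Illustrative values, not taken verbatim from this node.
FLANNEL_NETWORK=192.168.0.0/17   # cluster-wide pod network
FLANNEL_SUBNET=192.168.0.1/24    # this node's slice of it
FLANNEL_MTU=1450                 # VXLAN overhead subtracted from 1500
FLANNEL_IPMASQ=false             # whether flanneld applies masquerade rules
```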
Jan 20 02:36:59.687234 systemd[1]: run-netns-cni\x2dfcb821c1\x2d5ed4\x2d5fdb\x2dee70\x2d7982e43fdeb5.mount: Deactivated successfully. Jan 20 02:36:59.706883 containerd[1618]: time="2026-01-20T02:36:59.706663977Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-brwzc,Uid:4398aea8-6855-45eb-b647-acec95e61b4f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bdd7aae6f6362a796fbdf129ebd563cf9c43e254a678305604e05d33c2b900b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 02:36:59.707928 containerd[1618]: time="2026-01-20T02:36:59.706941407Z" level=info msg="CreateContainer within sandbox \"51fcb9e3aef183ac9ca4d4aa72db04c8693481e43267efb5f00c17f5ec700494\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"a73bf7e2fe749760082f8b0eab2980425a9a328cee151649496e058162841ce5\"" Jan 20 02:36:59.708080 kubelet[2889]: E0120 02:36:59.707499 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bdd7aae6f6362a796fbdf129ebd563cf9c43e254a678305604e05d33c2b900b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 02:36:59.708080 kubelet[2889]: E0120 02:36:59.707576 2889 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bdd7aae6f6362a796fbdf129ebd563cf9c43e254a678305604e05d33c2b900b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-brwzc" Jan 20 02:36:59.708080 kubelet[2889]: E0120 02:36:59.707604 2889 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"5bdd7aae6f6362a796fbdf129ebd563cf9c43e254a678305604e05d33c2b900b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-brwzc" Jan 20 02:36:59.708080 kubelet[2889]: E0120 02:36:59.707648 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-brwzc_kube-system(4398aea8-6855-45eb-b647-acec95e61b4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-brwzc_kube-system(4398aea8-6855-45eb-b647-acec95e61b4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5bdd7aae6f6362a796fbdf129ebd563cf9c43e254a678305604e05d33c2b900b\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-brwzc" podUID="4398aea8-6855-45eb-b647-acec95e61b4f" Jan 20 02:36:59.734301 kubelet[2889]: E0120 02:36:59.713248 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce31043f4ae4496f9408095acafae6a220f6cafff968d61add852f65965c387b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 02:36:59.734301 kubelet[2889]: E0120 02:36:59.713914 2889 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce31043f4ae4496f9408095acafae6a220f6cafff968d61add852f65965c387b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-9z5t5" Jan 20 02:36:59.734301 kubelet[2889]: E0120 02:36:59.714391 2889 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"ce31043f4ae4496f9408095acafae6a220f6cafff968d61add852f65965c387b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-9z5t5" Jan 20 02:36:59.734301 kubelet[2889]: E0120 02:36:59.714724 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9z5t5_kube-system(f8e66bb5-534d-4cf9-b951-37a0ceef4ecf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9z5t5_kube-system(f8e66bb5-534d-4cf9-b951-37a0ceef4ecf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce31043f4ae4496f9408095acafae6a220f6cafff968d61add852f65965c387b\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-9z5t5" podUID="f8e66bb5-534d-4cf9-b951-37a0ceef4ecf" Jan 20 02:36:59.735764 containerd[1618]: time="2026-01-20T02:36:59.709221895Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9z5t5,Uid:f8e66bb5-534d-4cf9-b951-37a0ceef4ecf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce31043f4ae4496f9408095acafae6a220f6cafff968d61add852f65965c387b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 02:36:59.735764 containerd[1618]: time="2026-01-20T02:36:59.724511793Z" level=info msg="StartContainer for \"a73bf7e2fe749760082f8b0eab2980425a9a328cee151649496e058162841ce5\"" Jan 20 02:36:59.735764 containerd[1618]: time="2026-01-20T02:36:59.726126627Z" level=info msg="connecting to shim a73bf7e2fe749760082f8b0eab2980425a9a328cee151649496e058162841ce5" 
address="unix:///run/containerd/s/663ae3d4502f2b35d3704715f9d641388abb3bb675964798818f7e6eb51d7808" protocol=ttrpc version=3 Jan 20 02:36:59.879746 systemd[1]: Started cri-containerd-a73bf7e2fe749760082f8b0eab2980425a9a328cee151649496e058162841ce5.scope - libcontainer container a73bf7e2fe749760082f8b0eab2980425a9a328cee151649496e058162841ce5. Jan 20 02:37:00.558637 containerd[1618]: time="2026-01-20T02:37:00.558587737Z" level=info msg="StartContainer for \"a73bf7e2fe749760082f8b0eab2980425a9a328cee151649496e058162841ce5\" returns successfully" Jan 20 02:37:01.610269 kubelet[2889]: E0120 02:37:01.610219 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:37:01.817192 kubelet[2889]: I0120 02:37:01.809732 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-bhwn5" podStartSLOduration=15.296315534 podStartE2EDuration="39.809706764s" podCreationTimestamp="2026-01-20 02:36:22 +0000 UTC" firstStartedPulling="2026-01-20 02:36:25.228209259 +0000 UTC m=+14.283182738" lastFinishedPulling="2026-01-20 02:36:49.741600488 +0000 UTC m=+38.796573968" observedRunningTime="2026-01-20 02:37:01.80802216 +0000 UTC m=+50.862995661" watchObservedRunningTime="2026-01-20 02:37:01.809706764 +0000 UTC m=+50.864680244" Jan 20 02:37:02.201263 systemd-networkd[1513]: flannel.1: Link UP Jan 20 02:37:02.201431 systemd-networkd[1513]: flannel.1: Gained carrier Jan 20 02:37:02.655202 kubelet[2889]: E0120 02:37:02.638906 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:37:03.771947 systemd-networkd[1513]: flannel.1: Gained IPv6LL Jan 20 02:37:12.085170 kubelet[2889]: E0120 02:37:12.078686 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:37:12.097652 containerd[1618]: time="2026-01-20T02:37:12.083316604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-brwzc,Uid:4398aea8-6855-45eb-b647-acec95e61b4f,Namespace:kube-system,Attempt:0,}" Jan 20 02:37:12.230297 systemd-networkd[1513]: cni0: Link UP Jan 20 02:37:12.230316 systemd-networkd[1513]: cni0: Gained carrier Jan 20 02:37:12.302201 systemd-networkd[1513]: cni0: Lost carrier Jan 20 02:37:12.450753 systemd-networkd[1513]: veth6bb983e4: Link UP Jan 20 02:37:12.477723 kernel: cni0: port 1(veth6bb983e4) entered blocking state Jan 20 02:37:12.477978 kernel: cni0: port 1(veth6bb983e4) entered disabled state Jan 20 02:37:12.478083 kernel: veth6bb983e4: entered allmulticast mode Jan 20 02:37:12.495315 kernel: veth6bb983e4: entered promiscuous mode Jan 20 02:37:12.639463 kernel: cni0: port 1(veth6bb983e4) entered blocking state Jan 20 02:37:12.639623 kernel: cni0: port 1(veth6bb983e4) entered forwarding state Jan 20 02:37:12.630494 systemd-networkd[1513]: veth6bb983e4: Gained carrier Jan 20 02:37:12.631231 systemd-networkd[1513]: cni0: Gained carrier Jan 20 02:37:12.686550 containerd[1618]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001a938), "name":"cbr0", "type":"bridge"} Jan 20 02:37:12.686550 containerd[1618]: delegateAdd: netconf sent to delegate plugin: Jan 20 02:37:12.894795 containerd[1618]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-20T02:37:12.892357797Z" level=info msg="connecting to shim 5ddc5a4c553748145bb9a65c34a83aa4cae1decf8b5c68d29a294fe606058cc5" address="unix:///run/containerd/s/4dfb3e2abe91684c5919e675990cc168429971a6ba1290cae2a334fb55d1d35b" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:37:13.057929 systemd[1]: Started cri-containerd-5ddc5a4c553748145bb9a65c34a83aa4cae1decf8b5c68d29a294fe606058cc5.scope - libcontainer container 5ddc5a4c553748145bb9a65c34a83aa4cae1decf8b5c68d29a294fe606058cc5. Jan 20 02:37:13.095586 kubelet[2889]: E0120 02:37:13.093694 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:37:13.096261 containerd[1618]: time="2026-01-20T02:37:13.095157971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9z5t5,Uid:f8e66bb5-534d-4cf9-b951-37a0ceef4ecf,Namespace:kube-system,Attempt:0,}" Jan 20 02:37:13.502734 systemd-resolved[1291]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:37:13.617740 systemd-networkd[1513]: vethec335ae7: Link UP Jan 20 02:37:13.713123 kernel: cni0: port 2(vethec335ae7) entered blocking state Jan 20 02:37:13.785765 kernel: cni0: port 2(vethec335ae7) entered disabled state Jan 20 02:37:13.785873 kernel: vethec335ae7: entered allmulticast mode Jan 20 02:37:13.785911 kernel: vethec335ae7: entered promiscuous mode Jan 20 02:37:13.874256 kernel: cni0: port 2(vethec335ae7) entered blocking state Jan 20 02:37:13.874411 kernel: cni0: port 2(vethec335ae7) entered forwarding state Jan 20 02:37:13.874682 systemd-networkd[1513]: vethec335ae7: Gained carrier Jan 20 02:37:14.003613 
containerd[1618]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Jan 20 02:37:14.003613 containerd[1618]: delegateAdd: netconf sent to delegate plugin: Jan 20 02:37:14.091685 systemd-networkd[1513]: cni0: Gained IPv6LL Jan 20 02:37:14.600604 containerd[1618]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-20T02:37:14.598467907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-brwzc,Uid:4398aea8-6855-45eb-b647-acec95e61b4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ddc5a4c553748145bb9a65c34a83aa4cae1decf8b5c68d29a294fe606058cc5\"" Jan 20 02:37:14.616894 kubelet[2889]: E0120 02:37:14.613753 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:37:14.642878 systemd-networkd[1513]: veth6bb983e4: Gained IPv6LL Jan 20 02:37:14.661092 containerd[1618]: time="2026-01-20T02:37:14.660553572Z" level=info msg="CreateContainer within sandbox \"5ddc5a4c553748145bb9a65c34a83aa4cae1decf8b5c68d29a294fe606058cc5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 02:37:14.870603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3775723383.mount: Deactivated successfully. 
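In the delegate config dumps above, the Go-formatted route mask `net.IPMask{0xff, 0xff, 0x80, 0x0}` and the JSON route `"dst":"192.168.0.0/17"` describe the same prefix; a short check with Python's `ipaddress` module confirms the correspondence, and that the node's host-local range is a slice of the cluster route:

```python
import ipaddress

# /17 prefix ↔ the raw mask bytes 0xff 0xff 0x80 0x00 from the Go dump
net = ipaddress.ip_network("192.168.0.0/17")
print(net.netmask)               # → 255.255.128.0
print(net.netmask.packed.hex())  # → ffff8000

# The node's host-local subnet sits inside that cluster route.
assert ipaddress.ip_network("192.168.0.0/24").subnet_of(net)
```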
Jan 20 02:37:14.952523 containerd[1618]: time="2026-01-20T02:37:14.943907766Z" level=info msg="connecting to shim b0d4c3d4f7e8258ef8ac625b9b22617747f0872d5ebabb1382f73efec5185907" address="unix:///run/containerd/s/0491e11aa80e8db24d08cfa035d2c05f875d9980f3d8ac4febef44e765404103" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:37:14.970505 systemd-networkd[1513]: vethec335ae7: Gained IPv6LL Jan 20 02:37:15.014040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1782997385.mount: Deactivated successfully. Jan 20 02:37:15.036605 containerd[1618]: time="2026-01-20T02:37:15.014881742Z" level=info msg="Container 1819296fcbfc67c13f8a6dd5414ecd5a5f25800c5dc3147a63b258bedc53dc48: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:37:15.142379 containerd[1618]: time="2026-01-20T02:37:15.116912838Z" level=info msg="CreateContainer within sandbox \"5ddc5a4c553748145bb9a65c34a83aa4cae1decf8b5c68d29a294fe606058cc5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1819296fcbfc67c13f8a6dd5414ecd5a5f25800c5dc3147a63b258bedc53dc48\"" Jan 20 02:37:15.142379 containerd[1618]: time="2026-01-20T02:37:15.138191781Z" level=info msg="StartContainer for \"1819296fcbfc67c13f8a6dd5414ecd5a5f25800c5dc3147a63b258bedc53dc48\"" Jan 20 02:37:15.249635 containerd[1618]: time="2026-01-20T02:37:15.249575832Z" level=info msg="connecting to shim 1819296fcbfc67c13f8a6dd5414ecd5a5f25800c5dc3147a63b258bedc53dc48" address="unix:///run/containerd/s/4dfb3e2abe91684c5919e675990cc168429971a6ba1290cae2a334fb55d1d35b" protocol=ttrpc version=3 Jan 20 02:37:15.636915 systemd[1]: Started cri-containerd-1819296fcbfc67c13f8a6dd5414ecd5a5f25800c5dc3147a63b258bedc53dc48.scope - libcontainer container 1819296fcbfc67c13f8a6dd5414ecd5a5f25800c5dc3147a63b258bedc53dc48. 
Jan 20 02:37:15.662573 systemd[1]: Started cri-containerd-b0d4c3d4f7e8258ef8ac625b9b22617747f0872d5ebabb1382f73efec5185907.scope - libcontainer container b0d4c3d4f7e8258ef8ac625b9b22617747f0872d5ebabb1382f73efec5185907. Jan 20 02:37:15.882120 systemd-resolved[1291]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:37:16.549123 containerd[1618]: time="2026-01-20T02:37:16.545919008Z" level=info msg="StartContainer for \"1819296fcbfc67c13f8a6dd5414ecd5a5f25800c5dc3147a63b258bedc53dc48\" returns successfully" Jan 20 02:37:16.632739 containerd[1618]: time="2026-01-20T02:37:16.632628722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9z5t5,Uid:f8e66bb5-534d-4cf9-b951-37a0ceef4ecf,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0d4c3d4f7e8258ef8ac625b9b22617747f0872d5ebabb1382f73efec5185907\"" Jan 20 02:37:16.644130 kubelet[2889]: E0120 02:37:16.642417 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:37:16.691352 containerd[1618]: time="2026-01-20T02:37:16.682606116Z" level=info msg="CreateContainer within sandbox \"b0d4c3d4f7e8258ef8ac625b9b22617747f0872d5ebabb1382f73efec5185907\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 02:37:16.865664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4162243365.mount: Deactivated successfully. 
Jan 20 02:37:16.930899 containerd[1618]: time="2026-01-20T02:37:16.930191714Z" level=info msg="Container 310b3c814e510c0c9b0df74236a9bcd9ec4f8a29a3791656cd270019e4a56abf: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:37:16.993683 kubelet[2889]: E0120 02:37:16.993637 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:37:17.088297 containerd[1618]: time="2026-01-20T02:37:17.087084011Z" level=info msg="CreateContainer within sandbox \"b0d4c3d4f7e8258ef8ac625b9b22617747f0872d5ebabb1382f73efec5185907\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"310b3c814e510c0c9b0df74236a9bcd9ec4f8a29a3791656cd270019e4a56abf\"" Jan 20 02:37:17.127753 containerd[1618]: time="2026-01-20T02:37:17.102289776Z" level=info msg="StartContainer for \"310b3c814e510c0c9b0df74236a9bcd9ec4f8a29a3791656cd270019e4a56abf\"" Jan 20 02:37:17.127753 containerd[1618]: time="2026-01-20T02:37:17.124207256Z" level=info msg="connecting to shim 310b3c814e510c0c9b0df74236a9bcd9ec4f8a29a3791656cd270019e4a56abf" address="unix:///run/containerd/s/0491e11aa80e8db24d08cfa035d2c05f875d9980f3d8ac4febef44e765404103" protocol=ttrpc version=3 Jan 20 02:37:17.532420 systemd[1]: Started cri-containerd-310b3c814e510c0c9b0df74236a9bcd9ec4f8a29a3791656cd270019e4a56abf.scope - libcontainer container 310b3c814e510c0c9b0df74236a9bcd9ec4f8a29a3791656cd270019e4a56abf. 
Jan 20 02:37:17.898207 containerd[1618]: time="2026-01-20T02:37:17.896664890Z" level=info msg="StartContainer for \"310b3c814e510c0c9b0df74236a9bcd9ec4f8a29a3791656cd270019e4a56abf\" returns successfully" Jan 20 02:37:18.047633 kubelet[2889]: E0120 02:37:18.045658 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:37:18.055264 kubelet[2889]: E0120 02:37:18.052962 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:37:18.288234 kubelet[2889]: I0120 02:37:18.286443 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-brwzc" podStartSLOduration=67.286421423 podStartE2EDuration="1m7.286421423s" podCreationTimestamp="2026-01-20 02:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:37:17.21523623 +0000 UTC m=+66.270209729" watchObservedRunningTime="2026-01-20 02:37:18.286421423 +0000 UTC m=+67.341394903" Jan 20 02:37:18.288234 kubelet[2889]: I0120 02:37:18.286567 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9z5t5" podStartSLOduration=67.286557696 podStartE2EDuration="1m7.286557696s" podCreationTimestamp="2026-01-20 02:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:37:18.27867832 +0000 UTC m=+67.333651831" watchObservedRunningTime="2026-01-20 02:37:18.286557696 +0000 UTC m=+67.341531216" Jan 20 02:37:19.051169 kubelet[2889]: E0120 02:37:19.049053 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:37:19.054049 kubelet[2889]: E0120 02:37:19.053507 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:37:20.062687 kubelet[2889]: E0120 02:37:20.062181 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:37:21.083542 kubelet[2889]: E0120 02:37:21.081540 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:37:30.082120 kubelet[2889]: E0120 02:37:30.073654 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:37:42.087041 kubelet[2889]: E0120 02:37:42.071647 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:37:43.837737 systemd[1715]: Created slice background.slice - User Background Tasks Slice. Jan 20 02:37:43.853301 systemd[1715]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Jan 20 02:37:44.046768 systemd[1715]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. 
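The `Nameserver limits exceeded` entries that recur throughout this log come from kubelet enforcing the resolver's three-nameserver cap (glibc honors at most 3): it keeps the first three entries of the node's resolv.conf and warns that the rest were dropped, which is why the applied line always reads `1.1.1.1 1.0.0.1 8.8.8.8`. A minimal sketch of that truncation, with a helper name and sample fourth server of our own invention:

```python
# Mimic the kubelet-side nameserver cap (glibc resolvers honor at most 3).
MAX_NAMESERVERS = 3

def apply_nameserver_limit(servers):
    """Return (kept, dropped) after enforcing the 3-server cap."""
    return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

# Hypothetical resolv.conf with one server too many.
configured = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"]
kept, dropped = apply_nameserver_limit(configured)
print("applied nameserver line is:", " ".join(kept))
# → applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8
```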
Jan 20 02:37:45.074626 kubelet[2889]: E0120 02:37:45.073939 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:37:50.075733 kubelet[2889]: E0120 02:37:50.075061 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:37:59.191025 kubelet[2889]: E0120 02:37:59.184024 2889 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.095s" Jan 20 02:38:19.384699 systemd[1]: cri-containerd-d39e39c7ee286dee8b916f7fa755e9314ad592de80b2590c56f94714ec5e898f.scope: Deactivated successfully. Jan 20 02:38:19.386372 systemd[1]: cri-containerd-d39e39c7ee286dee8b916f7fa755e9314ad592de80b2590c56f94714ec5e898f.scope: Consumed 5.522s CPU time, 20.9M memory peak. Jan 20 02:38:19.574237 containerd[1618]: time="2026-01-20T02:38:19.567661455Z" level=info msg="received container exit event container_id:\"d39e39c7ee286dee8b916f7fa755e9314ad592de80b2590c56f94714ec5e898f\" id:\"d39e39c7ee286dee8b916f7fa755e9314ad592de80b2590c56f94714ec5e898f\" pid:2743 exit_status:1 exited_at:{seconds:1768876699 nanos:535543008}" Jan 20 02:38:19.654503 kubelet[2889]: E0120 02:38:19.654459 2889 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="20.469s" Jan 20 02:38:19.685988 kubelet[2889]: E0120 02:38:19.682617 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:38:20.405454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d39e39c7ee286dee8b916f7fa755e9314ad592de80b2590c56f94714ec5e898f-rootfs.mount: Deactivated successfully. 
Jan 20 02:38:20.681314 kubelet[2889]: I0120 02:38:20.675517 2889 scope.go:117] "RemoveContainer" containerID="d39e39c7ee286dee8b916f7fa755e9314ad592de80b2590c56f94714ec5e898f"
Jan 20 02:38:20.681314 kubelet[2889]: E0120 02:38:20.675759 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:38:20.732015 containerd[1618]: time="2026-01-20T02:38:20.726181949Z" level=info msg="CreateContainer within sandbox \"b7903211cb72abd1ba299ab38332090b778fcfcd540d4bf1e0b9834f11b359a8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 20 02:38:20.840761 containerd[1618]: time="2026-01-20T02:38:20.839101199Z" level=info msg="Container 3b54d036f58a36175152d9bdb4cf97e9d5ddf26dfc12566875b6d404a066dcf6: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:38:20.956306 containerd[1618]: time="2026-01-20T02:38:20.954261638Z" level=info msg="CreateContainer within sandbox \"b7903211cb72abd1ba299ab38332090b778fcfcd540d4bf1e0b9834f11b359a8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3b54d036f58a36175152d9bdb4cf97e9d5ddf26dfc12566875b6d404a066dcf6\""
Jan 20 02:38:20.972019 containerd[1618]: time="2026-01-20T02:38:20.969073758Z" level=info msg="StartContainer for \"3b54d036f58a36175152d9bdb4cf97e9d5ddf26dfc12566875b6d404a066dcf6\""
Jan 20 02:38:20.987767 containerd[1618]: time="2026-01-20T02:38:20.985361727Z" level=info msg="connecting to shim 3b54d036f58a36175152d9bdb4cf97e9d5ddf26dfc12566875b6d404a066dcf6" address="unix:///run/containerd/s/2c6f1df074f26837b58e72c66949f6d981bebcb7a8532966a7fdfff4cc06c4bb" protocol=ttrpc version=3
Jan 20 02:38:21.221681 systemd[1]: Started cri-containerd-3b54d036f58a36175152d9bdb4cf97e9d5ddf26dfc12566875b6d404a066dcf6.scope - libcontainer container 3b54d036f58a36175152d9bdb4cf97e9d5ddf26dfc12566875b6d404a066dcf6.
Jan 20 02:38:21.760298 containerd[1618]: time="2026-01-20T02:38:21.760201777Z" level=info msg="StartContainer for \"3b54d036f58a36175152d9bdb4cf97e9d5ddf26dfc12566875b6d404a066dcf6\" returns successfully"
Jan 20 02:38:22.073999 kubelet[2889]: E0120 02:38:22.072149 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:38:22.903759 kubelet[2889]: E0120 02:38:22.900725 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:38:23.902784 kubelet[2889]: E0120 02:38:23.897291 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:38:24.075388 kubelet[2889]: E0120 02:38:24.072415 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:38:24.913159 kubelet[2889]: E0120 02:38:24.913005 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:38:29.722633 kubelet[2889]: E0120 02:38:29.721714 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:38:39.770704 kubelet[2889]: E0120 02:38:39.769805 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:38:40.134645 kubelet[2889]: E0120 02:38:40.127428 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:38:57.105879 kubelet[2889]: E0120 02:38:57.105011 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:39:03.074568 kubelet[2889]: E0120 02:39:03.071708 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:39:15.077412 kubelet[2889]: E0120 02:39:15.076143 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:39:28.080088 kubelet[2889]: E0120 02:39:28.079430 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:39:31.092625 kubelet[2889]: E0120 02:39:31.080281 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:39:37.073326 kubelet[2889]: E0120 02:39:37.073271 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:39:59.076638 kubelet[2889]: E0120 02:39:59.072197 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:40:08.864514 systemd[1]: Started sshd@5-10.0.0.112:22-10.0.0.1:37998.service - OpenSSH per-connection server daemon (10.0.0.1:37998).
Jan 20 02:40:09.312537 sshd[4472]: Accepted publickey for core from 10.0.0.1 port 37998 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:40:09.329917 sshd-session[4472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:40:09.393445 systemd-logind[1595]: New session 7 of user core.
Jan 20 02:40:09.409432 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 20 02:40:10.085714 sshd[4476]: Connection closed by 10.0.0.1 port 37998
Jan 20 02:40:10.088134 sshd-session[4472]: pam_unix(sshd:session): session closed for user core
Jan 20 02:40:10.117392 systemd[1]: sshd@5-10.0.0.112:22-10.0.0.1:37998.service: Deactivated successfully.
Jan 20 02:40:10.134385 systemd[1]: session-7.scope: Deactivated successfully.
Jan 20 02:40:10.148341 systemd-logind[1595]: Session 7 logged out. Waiting for processes to exit.
Jan 20 02:40:10.157009 systemd-logind[1595]: Removed session 7.
Jan 20 02:40:12.072813 kubelet[2889]: E0120 02:40:12.072050 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:40:14.076781 kubelet[2889]: E0120 02:40:14.075186 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:40:15.130380 systemd[1]: Started sshd@6-10.0.0.112:22-10.0.0.1:50790.service - OpenSSH per-connection server daemon (10.0.0.1:50790).
Jan 20 02:40:15.408963 sshd[4520]: Accepted publickey for core from 10.0.0.1 port 50790 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:40:15.426754 sshd-session[4520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:40:15.451903 systemd-logind[1595]: New session 8 of user core.
Jan 20 02:40:15.487398 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 20 02:40:16.158813 sshd[4524]: Connection closed by 10.0.0.1 port 50790
Jan 20 02:40:16.165947 sshd-session[4520]: pam_unix(sshd:session): session closed for user core
Jan 20 02:40:16.189320 systemd[1]: sshd@6-10.0.0.112:22-10.0.0.1:50790.service: Deactivated successfully.
Jan 20 02:40:16.209294 systemd[1]: session-8.scope: Deactivated successfully.
Jan 20 02:40:16.227253 systemd-logind[1595]: Session 8 logged out. Waiting for processes to exit.
Jan 20 02:40:16.244607 systemd-logind[1595]: Removed session 8.
Jan 20 02:40:21.233956 systemd[1]: Started sshd@7-10.0.0.112:22-10.0.0.1:50834.service - OpenSSH per-connection server daemon (10.0.0.1:50834).
Jan 20 02:40:21.685762 sshd[4577]: Accepted publickey for core from 10.0.0.1 port 50834 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:40:21.700771 sshd-session[4577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:40:21.740271 systemd-logind[1595]: New session 9 of user core.
Jan 20 02:40:21.785967 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 20 02:40:22.531318 sshd[4583]: Connection closed by 10.0.0.1 port 50834
Jan 20 02:40:22.530127 sshd-session[4577]: pam_unix(sshd:session): session closed for user core
Jan 20 02:40:22.551504 systemd[1]: sshd@7-10.0.0.112:22-10.0.0.1:50834.service: Deactivated successfully.
Jan 20 02:40:22.574348 systemd[1]: session-9.scope: Deactivated successfully.
Jan 20 02:40:22.582588 systemd-logind[1595]: Session 9 logged out. Waiting for processes to exit.
Jan 20 02:40:22.600776 systemd-logind[1595]: Removed session 9.
Jan 20 02:40:26.591810 containerd[1618]: time="2026-01-20T02:40:26.590150330Z" level=info msg="container event discarded" container=491ca1444a259eca1d7442f6e71a23aa604ad94d7c8779413e68338acd988b39 type=CONTAINER_CREATED_EVENT
Jan 20 02:40:26.591810 containerd[1618]: time="2026-01-20T02:40:26.590270683Z" level=info msg="container event discarded" container=491ca1444a259eca1d7442f6e71a23aa604ad94d7c8779413e68338acd988b39 type=CONTAINER_STARTED_EVENT
Jan 20 02:40:27.105634 containerd[1618]: time="2026-01-20T02:40:26.999324782Z" level=info msg="container event discarded" container=c2244bae81b9b5f2a86c61f609170762a9bcbc5b65fe9d2d555b819a2b0d5860 type=CONTAINER_CREATED_EVENT
Jan 20 02:40:27.105634 containerd[1618]: time="2026-01-20T02:40:27.002012302Z" level=info msg="container event discarded" container=c2244bae81b9b5f2a86c61f609170762a9bcbc5b65fe9d2d555b819a2b0d5860 type=CONTAINER_STARTED_EVENT
Jan 20 02:40:27.205261 containerd[1618]: time="2026-01-20T02:40:27.131469368Z" level=info msg="container event discarded" container=b0f3875644120d7787534418397dc80f15f8b12276d60bc45d24582bd89767e4 type=CONTAINER_CREATED_EVENT
Jan 20 02:40:27.205261 containerd[1618]: time="2026-01-20T02:40:27.193404374Z" level=info msg="container event discarded" container=35419ff581ca467f1717983ccec541355bfa0555bd91f4dbc41effa1d86d796f type=CONTAINER_CREATED_EVENT
Jan 20 02:40:27.690410 containerd[1618]: time="2026-01-20T02:40:27.322779015Z" level=info msg="container event discarded" container=b7903211cb72abd1ba299ab38332090b778fcfcd540d4bf1e0b9834f11b359a8 type=CONTAINER_CREATED_EVENT
Jan 20 02:40:27.690410 containerd[1618]: time="2026-01-20T02:40:27.323600316Z" level=info msg="container event discarded" container=b7903211cb72abd1ba299ab38332090b778fcfcd540d4bf1e0b9834f11b359a8 type=CONTAINER_STARTED_EVENT
Jan 20 02:40:27.690410 containerd[1618]: time="2026-01-20T02:40:27.383084633Z" level=info msg="container event discarded" 
container=d39e39c7ee286dee8b916f7fa755e9314ad592de80b2590c56f94714ec5e898f type=CONTAINER_CREATED_EVENT
Jan 20 02:40:27.837354 containerd[1618]: time="2026-01-20T02:40:27.821422026Z" level=info msg="container event discarded" container=35419ff581ca467f1717983ccec541355bfa0555bd91f4dbc41effa1d86d796f type=CONTAINER_STARTED_EVENT
Jan 20 02:40:28.090813 containerd[1618]: time="2026-01-20T02:40:28.084311875Z" level=info msg="container event discarded" container=b0f3875644120d7787534418397dc80f15f8b12276d60bc45d24582bd89767e4 type=CONTAINER_STARTED_EVENT
Jan 20 02:40:28.517148 containerd[1618]: time="2026-01-20T02:40:28.366327050Z" level=info msg="container event discarded" container=d39e39c7ee286dee8b916f7fa755e9314ad592de80b2590c56f94714ec5e898f type=CONTAINER_STARTED_EVENT
Jan 20 02:40:33.732321 systemd[1]: Started sshd@8-10.0.0.112:22-10.0.0.1:48216.service - OpenSSH per-connection server daemon (10.0.0.1:48216).
Jan 20 02:40:35.434762 sshd[4626]: Accepted publickey for core from 10.0.0.1 port 48216 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:40:35.457484 sshd-session[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:40:35.518622 systemd-logind[1595]: New session 10 of user core.
Jan 20 02:40:35.613595 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 20 02:40:36.346092 sshd[4646]: Connection closed by 10.0.0.1 port 48216
Jan 20 02:40:36.356628 sshd-session[4626]: pam_unix(sshd:session): session closed for user core
Jan 20 02:40:36.380595 systemd[1]: sshd@8-10.0.0.112:22-10.0.0.1:48216.service: Deactivated successfully.
Jan 20 02:40:36.396434 systemd[1]: session-10.scope: Deactivated successfully.
Jan 20 02:40:36.412647 systemd-logind[1595]: Session 10 logged out. Waiting for processes to exit.
Jan 20 02:40:36.426071 systemd-logind[1595]: Removed session 10.
Jan 20 02:40:41.422628 systemd[1]: Started sshd@9-10.0.0.112:22-10.0.0.1:51262.service - OpenSSH per-connection server daemon (10.0.0.1:51262).
Jan 20 02:40:41.745059 sshd[4682]: Accepted publickey for core from 10.0.0.1 port 51262 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:40:41.767353 sshd-session[4682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:40:41.818969 systemd-logind[1595]: New session 11 of user core.
Jan 20 02:40:41.834892 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 20 02:40:42.432708 sshd[4686]: Connection closed by 10.0.0.1 port 51262
Jan 20 02:40:42.432308 sshd-session[4682]: pam_unix(sshd:session): session closed for user core
Jan 20 02:40:42.477702 systemd[1]: sshd@9-10.0.0.112:22-10.0.0.1:51262.service: Deactivated successfully.
Jan 20 02:40:42.501702 systemd[1]: session-11.scope: Deactivated successfully.
Jan 20 02:40:42.517115 systemd-logind[1595]: Session 11 logged out. Waiting for processes to exit.
Jan 20 02:40:42.534769 systemd-logind[1595]: Removed session 11.
Jan 20 02:40:44.092219 kubelet[2889]: E0120 02:40:44.090805 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:40:47.527495 systemd[1]: Started sshd@10-10.0.0.112:22-10.0.0.1:48104.service - OpenSSH per-connection server daemon (10.0.0.1:48104).
Jan 20 02:40:47.903135 sshd[4722]: Accepted publickey for core from 10.0.0.1 port 48104 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:40:47.913169 sshd-session[4722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:40:47.967239 systemd-logind[1595]: New session 12 of user core.
Jan 20 02:40:48.034009 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 20 02:40:48.784044 sshd[4726]: Connection closed by 10.0.0.1 port 48104
Jan 20 02:40:48.780078 sshd-session[4722]: pam_unix(sshd:session): session closed for user core
Jan 20 02:40:48.806715 systemd[1]: sshd@10-10.0.0.112:22-10.0.0.1:48104.service: Deactivated successfully.
Jan 20 02:40:48.824994 systemd[1]: session-12.scope: Deactivated successfully.
Jan 20 02:40:48.840943 systemd-logind[1595]: Session 12 logged out. Waiting for processes to exit.
Jan 20 02:40:48.859647 systemd-logind[1595]: Removed session 12.
Jan 20 02:40:51.074886 kubelet[2889]: E0120 02:40:51.072494 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:40:52.073881 kubelet[2889]: E0120 02:40:52.073385 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:40:53.074066 kubelet[2889]: E0120 02:40:53.073414 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:40:53.867392 systemd[1]: Started sshd@11-10.0.0.112:22-10.0.0.1:48126.service - OpenSSH per-connection server daemon (10.0.0.1:48126).
Jan 20 02:40:54.056517 sshd[4764]: Accepted publickey for core from 10.0.0.1 port 48126 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:40:54.059732 sshd-session[4764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:40:54.105450 systemd-logind[1595]: New session 13 of user core.
Jan 20 02:40:54.127729 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 20 02:40:54.915035 sshd[4771]: Connection closed by 10.0.0.1 port 48126
Jan 20 02:40:54.920690 sshd-session[4764]: pam_unix(sshd:session): session closed for user core
Jan 20 02:40:54.956716 systemd[1]: sshd@11-10.0.0.112:22-10.0.0.1:48126.service: Deactivated successfully.
Jan 20 02:40:54.961001 systemd[1]: session-13.scope: Deactivated successfully.
Jan 20 02:40:54.965204 systemd-logind[1595]: Session 13 logged out. Waiting for processes to exit.
Jan 20 02:40:54.986332 systemd[1]: Started sshd@12-10.0.0.112:22-10.0.0.1:40656.service - OpenSSH per-connection server daemon (10.0.0.1:40656).
Jan 20 02:40:54.994951 systemd-logind[1595]: Removed session 13.
Jan 20 02:40:55.227235 sshd[4789]: Accepted publickey for core from 10.0.0.1 port 40656 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:40:55.232520 sshd-session[4789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:40:55.262055 systemd-logind[1595]: New session 14 of user core.
Jan 20 02:40:55.297280 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 20 02:40:56.286322 sshd[4793]: Connection closed by 10.0.0.1 port 40656
Jan 20 02:40:56.306470 sshd-session[4789]: pam_unix(sshd:session): session closed for user core
Jan 20 02:40:56.355758 systemd[1]: sshd@12-10.0.0.112:22-10.0.0.1:40656.service: Deactivated successfully.
Jan 20 02:40:56.362635 systemd[1]: session-14.scope: Deactivated successfully.
Jan 20 02:40:56.368246 systemd-logind[1595]: Session 14 logged out. Waiting for processes to exit.
Jan 20 02:40:56.376088 systemd[1]: Started sshd@13-10.0.0.112:22-10.0.0.1:40682.service - OpenSSH per-connection server daemon (10.0.0.1:40682).
Jan 20 02:40:56.378774 systemd-logind[1595]: Removed session 14.
Jan 20 02:40:56.691112 sshd[4819]: Accepted publickey for core from 10.0.0.1 port 40682 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:40:56.708056 sshd-session[4819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:40:56.821749 systemd-logind[1595]: New session 15 of user core.
Jan 20 02:40:56.854144 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 20 02:40:57.293263 sshd[4823]: Connection closed by 10.0.0.1 port 40682
Jan 20 02:40:57.292938 sshd-session[4819]: pam_unix(sshd:session): session closed for user core
Jan 20 02:40:57.308680 systemd[1]: sshd@13-10.0.0.112:22-10.0.0.1:40682.service: Deactivated successfully.
Jan 20 02:40:57.319430 systemd[1]: session-15.scope: Deactivated successfully.
Jan 20 02:40:57.328473 systemd-logind[1595]: Session 15 logged out. Waiting for processes to exit.
Jan 20 02:40:57.330431 systemd-logind[1595]: Removed session 15.
Jan 20 02:41:01.948587 kubelet[2889]: E0120 02:41:01.942445 2889 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.429s"
Jan 20 02:41:02.395608 systemd[1]: Started sshd@14-10.0.0.112:22-10.0.0.1:40712.service - OpenSSH per-connection server daemon (10.0.0.1:40712).
Jan 20 02:41:02.765437 sshd[4857]: Accepted publickey for core from 10.0.0.1 port 40712 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:41:02.785778 sshd-session[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:41:02.838454 systemd-logind[1595]: New session 16 of user core.
Jan 20 02:41:02.868458 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 20 02:41:04.348467 kubelet[2889]: E0120 02:41:04.346675 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:41:04.577499 kubelet[2889]: E0120 02:41:04.576905 2889 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.462s"
Jan 20 02:41:05.213787 sshd[4861]: Connection closed by 10.0.0.1 port 40712
Jan 20 02:41:05.217476 sshd-session[4857]: pam_unix(sshd:session): session closed for user core
Jan 20 02:41:05.242189 systemd[1]: sshd@14-10.0.0.112:22-10.0.0.1:40712.service: Deactivated successfully.
Jan 20 02:41:05.275755 systemd[1]: session-16.scope: Deactivated successfully.
Jan 20 02:41:05.277678 systemd-logind[1595]: Session 16 logged out. Waiting for processes to exit.
Jan 20 02:41:05.306947 systemd-logind[1595]: Removed session 16.
Jan 20 02:41:10.279677 systemd[1]: Started sshd@15-10.0.0.112:22-10.0.0.1:36526.service - OpenSSH per-connection server daemon (10.0.0.1:36526).
Jan 20 02:41:10.552449 sshd[4895]: Accepted publickey for core from 10.0.0.1 port 36526 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:41:10.573333 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:41:10.600047 systemd-logind[1595]: New session 17 of user core.
Jan 20 02:41:10.621531 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 20 02:41:11.361488 sshd[4899]: Connection closed by 10.0.0.1 port 36526
Jan 20 02:41:11.360754 sshd-session[4895]: pam_unix(sshd:session): session closed for user core
Jan 20 02:41:11.405910 systemd[1]: sshd@15-10.0.0.112:22-10.0.0.1:36526.service: Deactivated successfully.
Jan 20 02:41:11.409983 systemd[1]: session-17.scope: Deactivated successfully.
Jan 20 02:41:11.425092 systemd-logind[1595]: Session 17 logged out. Waiting for processes to exit.
Jan 20 02:41:11.434433 systemd-logind[1595]: Removed session 17.
Jan 20 02:41:16.417307 systemd[1]: Started sshd@16-10.0.0.112:22-10.0.0.1:52748.service - OpenSSH per-connection server daemon (10.0.0.1:52748).
Jan 20 02:41:16.762932 sshd[4935]: Accepted publickey for core from 10.0.0.1 port 52748 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:41:16.778707 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:41:16.830463 systemd-logind[1595]: New session 18 of user core.
Jan 20 02:41:16.874284 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 20 02:41:17.644987 sshd[4939]: Connection closed by 10.0.0.1 port 52748
Jan 20 02:41:17.642150 sshd-session[4935]: pam_unix(sshd:session): session closed for user core
Jan 20 02:41:17.684117 systemd-logind[1595]: Session 18 logged out. Waiting for processes to exit.
Jan 20 02:41:17.691640 systemd[1]: sshd@16-10.0.0.112:22-10.0.0.1:52748.service: Deactivated successfully.
Jan 20 02:41:17.715607 systemd[1]: session-18.scope: Deactivated successfully.
Jan 20 02:41:17.746768 systemd-logind[1595]: Removed session 18.
Jan 20 02:41:18.486314 containerd[1618]: time="2026-01-20T02:41:18.481967973Z" level=info msg="container event discarded" container=38194fd7276e29a935065ce27ddd067eaa459ef2f637032c212c433effde2922 type=CONTAINER_CREATED_EVENT
Jan 20 02:41:18.486314 containerd[1618]: time="2026-01-20T02:41:18.483161736Z" level=info msg="container event discarded" container=38194fd7276e29a935065ce27ddd067eaa459ef2f637032c212c433effde2922 type=CONTAINER_STARTED_EVENT
Jan 20 02:41:18.720208 containerd[1618]: time="2026-01-20T02:41:18.719990968Z" level=info msg="container event discarded" container=047c9c456a14e7560381299a68999f978becab7b54b7148407ec376795bbcdaa type=CONTAINER_CREATED_EVENT
Jan 20 02:41:20.044847 containerd[1618]: time="2026-01-20T02:41:20.044271242Z" level=info msg="container event discarded" container=047c9c456a14e7560381299a68999f978becab7b54b7148407ec376795bbcdaa type=CONTAINER_STARTED_EVENT
Jan 20 02:41:22.763688 systemd[1]: Started sshd@17-10.0.0.112:22-10.0.0.1:52766.service - OpenSSH per-connection server daemon (10.0.0.1:52766).
Jan 20 02:41:23.103573 kubelet[2889]: E0120 02:41:23.095074 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:41:23.257791 sshd[4984]: Accepted publickey for core from 10.0.0.1 port 52766 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:41:23.267409 sshd-session[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:41:23.304683 systemd-logind[1595]: New session 19 of user core.
Jan 20 02:41:23.328146 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 20 02:41:24.010026 sshd[5001]: Connection closed by 10.0.0.1 port 52766
Jan 20 02:41:24.011268 sshd-session[4984]: pam_unix(sshd:session): session closed for user core
Jan 20 02:41:24.036794 systemd[1]: sshd@17-10.0.0.112:22-10.0.0.1:52766.service: Deactivated successfully.
Jan 20 02:41:24.059510 systemd[1]: session-19.scope: Deactivated successfully.
Jan 20 02:41:24.090700 systemd-logind[1595]: Session 19 logged out. Waiting for processes to exit.
Jan 20 02:41:24.102061 systemd-logind[1595]: Removed session 19.
Jan 20 02:41:25.213619 containerd[1618]: time="2026-01-20T02:41:25.211350497Z" level=info msg="container event discarded" container=51fcb9e3aef183ac9ca4d4aa72db04c8693481e43267efb5f00c17f5ec700494 type=CONTAINER_CREATED_EVENT
Jan 20 02:41:25.213619 containerd[1618]: time="2026-01-20T02:41:25.211421048Z" level=info msg="container event discarded" container=51fcb9e3aef183ac9ca4d4aa72db04c8693481e43267efb5f00c17f5ec700494 type=CONTAINER_STARTED_EVENT
Jan 20 02:41:27.970597 containerd[1618]: time="2026-01-20T02:41:27.970520100Z" level=info msg="container event discarded" container=0db7310232c3a21a1124eadc93ce2a43025466b7b35d007e1e6c0aab7287821e type=CONTAINER_CREATED_EVENT
Jan 20 02:41:28.686635 containerd[1618]: time="2026-01-20T02:41:28.685760085Z" level=info msg="container event discarded" container=0db7310232c3a21a1124eadc93ce2a43025466b7b35d007e1e6c0aab7287821e type=CONTAINER_STARTED_EVENT
Jan 20 02:41:29.064871 systemd[1]: Started sshd@18-10.0.0.112:22-10.0.0.1:39424.service - OpenSSH per-connection server daemon (10.0.0.1:39424).
Jan 20 02:41:29.364889 sshd[5037]: Accepted publickey for core from 10.0.0.1 port 39424 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:41:29.372094 sshd-session[5037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:41:29.420987 systemd-logind[1595]: New session 20 of user core.
Jan 20 02:41:29.445556 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 20 02:41:30.006340 sshd[5041]: Connection closed by 10.0.0.1 port 39424
Jan 20 02:41:30.007156 sshd-session[5037]: pam_unix(sshd:session): session closed for user core
Jan 20 02:41:30.023977 systemd[1]: sshd@18-10.0.0.112:22-10.0.0.1:39424.service: Deactivated successfully.
Jan 20 02:41:30.033656 systemd[1]: session-20.scope: Deactivated successfully.
Jan 20 02:41:30.043469 systemd-logind[1595]: Session 20 logged out. Waiting for processes to exit.
Jan 20 02:41:30.059774 systemd-logind[1595]: Removed session 20.
Jan 20 02:41:35.081052 systemd[1]: Started sshd@19-10.0.0.112:22-10.0.0.1:47772.service - OpenSSH per-connection server daemon (10.0.0.1:47772).
Jan 20 02:41:35.512266 sshd[5076]: Accepted publickey for core from 10.0.0.1 port 47772 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:41:35.506539 sshd-session[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:41:35.562201 systemd-logind[1595]: New session 21 of user core.
Jan 20 02:41:35.579881 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 20 02:41:36.058576 sshd[5080]: Connection closed by 10.0.0.1 port 47772
Jan 20 02:41:36.061155 sshd-session[5076]: pam_unix(sshd:session): session closed for user core
Jan 20 02:41:36.084769 systemd[1]: sshd@19-10.0.0.112:22-10.0.0.1:47772.service: Deactivated successfully.
Jan 20 02:41:36.099490 systemd[1]: session-21.scope: Deactivated successfully.
Jan 20 02:41:36.118068 systemd-logind[1595]: Session 21 logged out. Waiting for processes to exit.
Jan 20 02:41:36.133288 systemd-logind[1595]: Removed session 21.
Jan 20 02:41:37.454418 containerd[1618]: time="2026-01-20T02:41:37.454037542Z" level=info msg="container event discarded" container=0db7310232c3a21a1124eadc93ce2a43025466b7b35d007e1e6c0aab7287821e type=CONTAINER_STOPPED_EVENT
Jan 20 02:41:39.083075 kubelet[2889]: E0120 02:41:39.073730 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:41:41.176660 systemd[1]: Started sshd@20-10.0.0.112:22-10.0.0.1:47814.service - OpenSSH per-connection server daemon (10.0.0.1:47814).
Jan 20 02:41:41.625277 sshd[5117]: Accepted publickey for core from 10.0.0.1 port 47814 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:41:41.637692 sshd-session[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:41:41.699180 systemd-logind[1595]: New session 22 of user core.
Jan 20 02:41:41.734221 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 20 02:41:42.539228 sshd[5121]: Connection closed by 10.0.0.1 port 47814
Jan 20 02:41:42.542214 sshd-session[5117]: pam_unix(sshd:session): session closed for user core
Jan 20 02:41:42.576432 systemd-logind[1595]: Session 22 logged out. Waiting for processes to exit.
Jan 20 02:41:42.576635 systemd[1]: sshd@20-10.0.0.112:22-10.0.0.1:47814.service: Deactivated successfully.
Jan 20 02:41:42.602487 systemd[1]: session-22.scope: Deactivated successfully.
Jan 20 02:41:42.613013 systemd-logind[1595]: Removed session 22.
Jan 20 02:41:47.633734 systemd[1]: Started sshd@21-10.0.0.112:22-10.0.0.1:35734.service - OpenSSH per-connection server daemon (10.0.0.1:35734).
Jan 20 02:41:48.020187 sshd[5157]: Accepted publickey for core from 10.0.0.1 port 35734 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:41:48.036872 sshd-session[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:41:48.109044 systemd-logind[1595]: New session 23 of user core.
Jan 20 02:41:48.135988 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 20 02:41:48.868096 sshd[5167]: Connection closed by 10.0.0.1 port 35734
Jan 20 02:41:48.865737 sshd-session[5157]: pam_unix(sshd:session): session closed for user core
Jan 20 02:41:48.901227 systemd[1]: sshd@21-10.0.0.112:22-10.0.0.1:35734.service: Deactivated successfully.
Jan 20 02:41:48.911727 systemd[1]: session-23.scope: Deactivated successfully.
Jan 20 02:41:48.915696 systemd-logind[1595]: Session 23 logged out. Waiting for processes to exit.
Jan 20 02:41:48.927098 systemd-logind[1595]: Removed session 23.
Jan 20 02:41:49.944586 containerd[1618]: time="2026-01-20T02:41:49.944374154Z" level=info msg="container event discarded" container=d590d3fe1a57d5397bfd947c22de49531c72fd5b8a2540d09d22e14ceafacd42 type=CONTAINER_CREATED_EVENT
Jan 20 02:41:50.771089 containerd[1618]: time="2026-01-20T02:41:50.765895010Z" level=info msg="container event discarded" container=d590d3fe1a57d5397bfd947c22de49531c72fd5b8a2540d09d22e14ceafacd42 type=CONTAINER_STARTED_EVENT
Jan 20 02:41:53.929349 systemd[1]: Started sshd@22-10.0.0.112:22-10.0.0.1:35762.service - OpenSSH per-connection server daemon (10.0.0.1:35762).
Jan 20 02:41:54.388212 sshd[5204]: Accepted publickey for core from 10.0.0.1 port 35762 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:41:54.404672 sshd-session[5204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:41:54.460154 systemd-logind[1595]: New session 24 of user core.
Jan 20 02:41:54.486250 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 20 02:41:55.367148 sshd[5218]: Connection closed by 10.0.0.1 port 35762 Jan 20 02:41:55.372217 sshd-session[5204]: pam_unix(sshd:session): session closed for user core Jan 20 02:41:55.412503 systemd[1]: sshd@22-10.0.0.112:22-10.0.0.1:35762.service: Deactivated successfully. Jan 20 02:41:55.431198 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 02:41:55.447108 systemd-logind[1595]: Session 24 logged out. Waiting for processes to exit. Jan 20 02:41:55.477471 systemd-logind[1595]: Removed session 24. Jan 20 02:41:59.080251 kubelet[2889]: E0120 02:41:59.077473 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:41:59.354768 containerd[1618]: time="2026-01-20T02:41:59.351315052Z" level=info msg="container event discarded" container=d590d3fe1a57d5397bfd947c22de49531c72fd5b8a2540d09d22e14ceafacd42 type=CONTAINER_STOPPED_EVENT Jan 20 02:41:59.704466 containerd[1618]: time="2026-01-20T02:41:59.704301402Z" level=info msg="container event discarded" container=a73bf7e2fe749760082f8b0eab2980425a9a328cee151649496e058162841ce5 type=CONTAINER_CREATED_EVENT Jan 20 02:42:00.433671 systemd[1]: Started sshd@23-10.0.0.112:22-10.0.0.1:42826.service - OpenSSH per-connection server daemon (10.0.0.1:42826). Jan 20 02:42:00.553205 containerd[1618]: time="2026-01-20T02:42:00.550388731Z" level=info msg="container event discarded" container=a73bf7e2fe749760082f8b0eab2980425a9a328cee151649496e058162841ce5 type=CONTAINER_STARTED_EVENT Jan 20 02:42:00.843304 sshd[5259]: Accepted publickey for core from 10.0.0.1 port 42826 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:42:00.857260 sshd-session[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:42:00.921094 systemd-logind[1595]: New session 25 of user core. 
Jan 20 02:42:00.944250 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 20 02:42:01.082591 kubelet[2889]: E0120 02:42:01.073793 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:42:01.792395 sshd[5263]: Connection closed by 10.0.0.1 port 42826 Jan 20 02:42:01.792877 sshd-session[5259]: pam_unix(sshd:session): session closed for user core Jan 20 02:42:01.834184 systemd[1]: sshd@23-10.0.0.112:22-10.0.0.1:42826.service: Deactivated successfully. Jan 20 02:42:01.853786 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 02:42:01.870791 systemd-logind[1595]: Session 25 logged out. Waiting for processes to exit. Jan 20 02:42:01.888814 systemd-logind[1595]: Removed session 25. Jan 20 02:42:06.082493 kubelet[2889]: E0120 02:42:06.077552 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:42:06.873202 systemd[1]: Started sshd@24-10.0.0.112:22-10.0.0.1:58656.service - OpenSSH per-connection server daemon (10.0.0.1:58656). Jan 20 02:42:07.230714 sshd[5297]: Accepted publickey for core from 10.0.0.1 port 58656 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:42:07.253240 sshd-session[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:42:07.296739 systemd-logind[1595]: New session 26 of user core. Jan 20 02:42:07.319423 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 20 02:42:07.902110 sshd[5301]: Connection closed by 10.0.0.1 port 58656 Jan 20 02:42:07.905585 sshd-session[5297]: pam_unix(sshd:session): session closed for user core Jan 20 02:42:07.943729 systemd[1]: sshd@24-10.0.0.112:22-10.0.0.1:58656.service: Deactivated successfully. 
Jan 20 02:42:07.947893 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 02:42:07.950928 systemd-logind[1595]: Session 26 logged out. Waiting for processes to exit. Jan 20 02:42:07.960117 systemd-logind[1595]: Removed session 26. Jan 20 02:42:10.596239 kubelet[2889]: E0120 02:42:10.595767 2889 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.476s" Jan 20 02:42:12.992335 systemd[1]: Started sshd@25-10.0.0.112:22-10.0.0.1:58730.service - OpenSSH per-connection server daemon (10.0.0.1:58730). Jan 20 02:42:13.515280 sshd[5336]: Accepted publickey for core from 10.0.0.1 port 58730 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:42:13.531645 sshd-session[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:42:13.608376 systemd-logind[1595]: New session 27 of user core. Jan 20 02:42:13.638300 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 20 02:42:14.615934 containerd[1618]: time="2026-01-20T02:42:14.615514706Z" level=info msg="container event discarded" container=5ddc5a4c553748145bb9a65c34a83aa4cae1decf8b5c68d29a294fe606058cc5 type=CONTAINER_CREATED_EVENT Jan 20 02:42:14.624358 containerd[1618]: time="2026-01-20T02:42:14.616564190Z" level=info msg="container event discarded" container=5ddc5a4c553748145bb9a65c34a83aa4cae1decf8b5c68d29a294fe606058cc5 type=CONTAINER_STARTED_EVENT Jan 20 02:42:14.917097 sshd[5348]: Connection closed by 10.0.0.1 port 58730 Jan 20 02:42:14.913998 sshd-session[5336]: pam_unix(sshd:session): session closed for user core Jan 20 02:42:14.993724 systemd[1]: sshd@25-10.0.0.112:22-10.0.0.1:58730.service: Deactivated successfully. Jan 20 02:42:15.008749 systemd[1]: session-27.scope: Deactivated successfully. Jan 20 02:42:15.029565 systemd-logind[1595]: Session 27 logged out. Waiting for processes to exit. 
Jan 20 02:42:15.096217 systemd[1]: Started sshd@26-10.0.0.112:22-10.0.0.1:35386.service - OpenSSH per-connection server daemon (10.0.0.1:35386). Jan 20 02:42:15.118408 systemd-logind[1595]: Removed session 27. Jan 20 02:42:15.142212 containerd[1618]: time="2026-01-20T02:42:15.138617444Z" level=info msg="container event discarded" container=1819296fcbfc67c13f8a6dd5414ecd5a5f25800c5dc3147a63b258bedc53dc48 type=CONTAINER_CREATED_EVENT Jan 20 02:42:15.678007 sshd[5362]: Accepted publickey for core from 10.0.0.1 port 35386 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:42:15.677556 sshd-session[5362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:42:15.738742 systemd-logind[1595]: New session 28 of user core. Jan 20 02:42:15.763732 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 20 02:42:16.542564 containerd[1618]: time="2026-01-20T02:42:16.542136519Z" level=info msg="container event discarded" container=1819296fcbfc67c13f8a6dd5414ecd5a5f25800c5dc3147a63b258bedc53dc48 type=CONTAINER_STARTED_EVENT Jan 20 02:42:16.645493 containerd[1618]: time="2026-01-20T02:42:16.645214415Z" level=info msg="container event discarded" container=b0d4c3d4f7e8258ef8ac625b9b22617747f0872d5ebabb1382f73efec5185907 type=CONTAINER_CREATED_EVENT Jan 20 02:42:16.645493 containerd[1618]: time="2026-01-20T02:42:16.645320091Z" level=info msg="container event discarded" container=b0d4c3d4f7e8258ef8ac625b9b22617747f0872d5ebabb1382f73efec5185907 type=CONTAINER_STARTED_EVENT Jan 20 02:42:17.074542 containerd[1618]: time="2026-01-20T02:42:17.070794798Z" level=info msg="container event discarded" container=310b3c814e510c0c9b0df74236a9bcd9ec4f8a29a3791656cd270019e4a56abf type=CONTAINER_CREATED_EVENT Jan 20 02:42:17.087246 kubelet[2889]: E0120 02:42:17.086280 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 20 02:42:17.647880 sshd[5366]: Connection closed by 10.0.0.1 port 35386 Jan 20 02:42:17.648679 sshd-session[5362]: pam_unix(sshd:session): session closed for user core Jan 20 02:42:17.715563 systemd[1]: Started sshd@27-10.0.0.112:22-10.0.0.1:35404.service - OpenSSH per-connection server daemon (10.0.0.1:35404). Jan 20 02:42:17.740690 systemd[1]: sshd@26-10.0.0.112:22-10.0.0.1:35386.service: Deactivated successfully. Jan 20 02:42:17.830292 systemd[1]: session-28.scope: Deactivated successfully. Jan 20 02:42:17.887077 systemd-logind[1595]: Session 28 logged out. Waiting for processes to exit. Jan 20 02:42:17.909501 containerd[1618]: time="2026-01-20T02:42:17.908914994Z" level=info msg="container event discarded" container=310b3c814e510c0c9b0df74236a9bcd9ec4f8a29a3791656cd270019e4a56abf type=CONTAINER_STARTED_EVENT Jan 20 02:42:17.917423 systemd-logind[1595]: Removed session 28. Jan 20 02:42:18.246455 sshd[5390]: Accepted publickey for core from 10.0.0.1 port 35404 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:42:18.260539 sshd-session[5390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:42:18.324620 systemd-logind[1595]: New session 29 of user core. Jan 20 02:42:18.341286 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 20 02:42:21.945138 sshd[5397]: Connection closed by 10.0.0.1 port 35404 Jan 20 02:42:21.951720 sshd-session[5390]: pam_unix(sshd:session): session closed for user core Jan 20 02:42:22.008689 systemd[1]: sshd@27-10.0.0.112:22-10.0.0.1:35404.service: Deactivated successfully. Jan 20 02:42:22.044345 systemd[1]: session-29.scope: Deactivated successfully. Jan 20 02:42:22.096628 systemd-logind[1595]: Session 29 logged out. Waiting for processes to exit. Jan 20 02:42:22.106172 systemd[1]: Started sshd@28-10.0.0.112:22-10.0.0.1:35440.service - OpenSSH per-connection server daemon (10.0.0.1:35440). Jan 20 02:42:22.121380 systemd-logind[1595]: Removed session 29. 
Jan 20 02:42:22.624045 sshd[5428]: Accepted publickey for core from 10.0.0.1 port 35440 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:42:22.626392 sshd-session[5428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:42:22.668215 systemd-logind[1595]: New session 30 of user core. Jan 20 02:42:22.705691 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 20 02:42:23.084131 kubelet[2889]: E0120 02:42:23.082884 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:42:24.390088 sshd[5451]: Connection closed by 10.0.0.1 port 35440 Jan 20 02:42:24.391276 sshd-session[5428]: pam_unix(sshd:session): session closed for user core Jan 20 02:42:24.467681 systemd[1]: sshd@28-10.0.0.112:22-10.0.0.1:35440.service: Deactivated successfully. Jan 20 02:42:24.475711 systemd[1]: session-30.scope: Deactivated successfully. Jan 20 02:42:24.516326 systemd-logind[1595]: Session 30 logged out. Waiting for processes to exit. Jan 20 02:42:24.530129 systemd[1]: Started sshd@29-10.0.0.112:22-10.0.0.1:57936.service - OpenSSH per-connection server daemon (10.0.0.1:57936). Jan 20 02:42:24.552206 systemd-logind[1595]: Removed session 30. Jan 20 02:42:24.904912 sshd[5469]: Accepted publickey for core from 10.0.0.1 port 57936 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:42:24.909285 sshd-session[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:42:24.984196 systemd-logind[1595]: New session 31 of user core. Jan 20 02:42:25.007597 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 20 02:42:25.521483 sshd[5477]: Connection closed by 10.0.0.1 port 57936 Jan 20 02:42:25.521148 sshd-session[5469]: pam_unix(sshd:session): session closed for user core Jan 20 02:42:25.559422 systemd[1]: sshd@29-10.0.0.112:22-10.0.0.1:57936.service: Deactivated successfully. Jan 20 02:42:25.562685 systemd[1]: session-31.scope: Deactivated successfully. Jan 20 02:42:25.593732 systemd-logind[1595]: Session 31 logged out. Waiting for processes to exit. Jan 20 02:42:25.604598 systemd-logind[1595]: Removed session 31. Jan 20 02:42:28.076080 kubelet[2889]: E0120 02:42:28.074912 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:42:30.576659 systemd[1]: Started sshd@30-10.0.0.112:22-10.0.0.1:57992.service - OpenSSH per-connection server daemon (10.0.0.1:57992). Jan 20 02:42:30.980562 sshd[5513]: Accepted publickey for core from 10.0.0.1 port 57992 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:42:31.003292 sshd-session[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:42:31.067370 systemd-logind[1595]: New session 32 of user core. Jan 20 02:42:31.087105 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 20 02:42:31.666810 sshd[5517]: Connection closed by 10.0.0.1 port 57992 Jan 20 02:42:31.668914 sshd-session[5513]: pam_unix(sshd:session): session closed for user core Jan 20 02:42:31.699477 systemd[1]: sshd@30-10.0.0.112:22-10.0.0.1:57992.service: Deactivated successfully. Jan 20 02:42:31.712896 systemd[1]: session-32.scope: Deactivated successfully. Jan 20 02:42:31.728763 systemd-logind[1595]: Session 32 logged out. Waiting for processes to exit. Jan 20 02:42:31.738730 systemd-logind[1595]: Removed session 32. 
Jan 20 02:42:36.770239 systemd[1]: Started sshd@31-10.0.0.112:22-10.0.0.1:41742.service - OpenSSH per-connection server daemon (10.0.0.1:41742). Jan 20 02:42:37.003901 sshd[5552]: Accepted publickey for core from 10.0.0.1 port 41742 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:42:37.016418 sshd-session[5552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:42:37.065148 systemd-logind[1595]: New session 33 of user core. Jan 20 02:42:37.091117 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 20 02:42:37.600176 sshd[5556]: Connection closed by 10.0.0.1 port 41742 Jan 20 02:42:37.603607 sshd-session[5552]: pam_unix(sshd:session): session closed for user core Jan 20 02:42:37.621792 systemd[1]: sshd@31-10.0.0.112:22-10.0.0.1:41742.service: Deactivated successfully. Jan 20 02:42:37.631358 systemd[1]: session-33.scope: Deactivated successfully. Jan 20 02:42:37.655457 systemd-logind[1595]: Session 33 logged out. Waiting for processes to exit. Jan 20 02:42:37.659550 systemd-logind[1595]: Removed session 33. Jan 20 02:42:42.645973 systemd[1]: Started sshd@32-10.0.0.112:22-10.0.0.1:41786.service - OpenSSH per-connection server daemon (10.0.0.1:41786). Jan 20 02:42:42.982813 sshd[5590]: Accepted publickey for core from 10.0.0.1 port 41786 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:42:42.996046 sshd-session[5590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:42:43.066765 systemd-logind[1595]: New session 34 of user core. Jan 20 02:42:43.083633 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 20 02:42:43.701780 sshd[5594]: Connection closed by 10.0.0.1 port 41786 Jan 20 02:42:43.701381 sshd-session[5590]: pam_unix(sshd:session): session closed for user core Jan 20 02:42:43.723681 systemd[1]: sshd@32-10.0.0.112:22-10.0.0.1:41786.service: Deactivated successfully. 
Jan 20 02:42:43.738362 systemd[1]: session-34.scope: Deactivated successfully. Jan 20 02:42:43.755272 systemd-logind[1595]: Session 34 logged out. Waiting for processes to exit. Jan 20 02:42:43.769413 systemd-logind[1595]: Removed session 34. Jan 20 02:42:48.820932 systemd[1]: Started sshd@33-10.0.0.112:22-10.0.0.1:44664.service - OpenSSH per-connection server daemon (10.0.0.1:44664). Jan 20 02:42:49.424255 sshd[5638]: Accepted publickey for core from 10.0.0.1 port 44664 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:42:49.426059 sshd-session[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:42:49.528037 systemd-logind[1595]: New session 35 of user core. Jan 20 02:42:49.602067 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 20 02:42:50.299269 sshd[5653]: Connection closed by 10.0.0.1 port 44664 Jan 20 02:42:50.300975 sshd-session[5638]: pam_unix(sshd:session): session closed for user core Jan 20 02:42:50.334212 systemd[1]: sshd@33-10.0.0.112:22-10.0.0.1:44664.service: Deactivated successfully. Jan 20 02:42:50.361492 systemd[1]: session-35.scope: Deactivated successfully. Jan 20 02:42:50.377936 systemd-logind[1595]: Session 35 logged out. Waiting for processes to exit. Jan 20 02:42:50.389378 systemd-logind[1595]: Removed session 35. Jan 20 02:42:55.400548 systemd[1]: Started sshd@34-10.0.0.112:22-10.0.0.1:56666.service - OpenSSH per-connection server daemon (10.0.0.1:56666). Jan 20 02:42:55.739334 sshd[5689]: Accepted publickey for core from 10.0.0.1 port 56666 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:42:55.768522 sshd-session[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:42:55.826861 systemd-logind[1595]: New session 36 of user core. Jan 20 02:42:55.836615 systemd[1]: Started session-36.scope - Session 36 of User core. 
Jan 20 02:42:56.422179 sshd[5693]: Connection closed by 10.0.0.1 port 56666 Jan 20 02:42:56.424175 sshd-session[5689]: pam_unix(sshd:session): session closed for user core Jan 20 02:42:56.448543 systemd[1]: sshd@34-10.0.0.112:22-10.0.0.1:56666.service: Deactivated successfully. Jan 20 02:42:56.477195 systemd[1]: session-36.scope: Deactivated successfully. Jan 20 02:42:56.490423 systemd-logind[1595]: Session 36 logged out. Waiting for processes to exit. Jan 20 02:42:56.504991 systemd-logind[1595]: Removed session 36. Jan 20 02:43:01.574768 systemd[1]: Started sshd@35-10.0.0.112:22-10.0.0.1:56698.service - OpenSSH per-connection server daemon (10.0.0.1:56698). Jan 20 02:43:01.925743 sshd[5727]: Accepted publickey for core from 10.0.0.1 port 56698 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:43:01.934683 sshd-session[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:43:01.977677 systemd-logind[1595]: New session 37 of user core. Jan 20 02:43:02.003179 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 20 02:43:02.392294 sshd[5731]: Connection closed by 10.0.0.1 port 56698 Jan 20 02:43:02.391916 sshd-session[5727]: pam_unix(sshd:session): session closed for user core Jan 20 02:43:02.416343 systemd[1]: sshd@35-10.0.0.112:22-10.0.0.1:56698.service: Deactivated successfully. Jan 20 02:43:02.419869 systemd[1]: session-37.scope: Deactivated successfully. Jan 20 02:43:02.438872 systemd-logind[1595]: Session 37 logged out. Waiting for processes to exit. Jan 20 02:43:02.447106 systemd-logind[1595]: Removed session 37. Jan 20 02:43:07.467901 systemd[1]: Started sshd@36-10.0.0.112:22-10.0.0.1:41296.service - OpenSSH per-connection server daemon (10.0.0.1:41296). 
Jan 20 02:43:07.766579 sshd[5765]: Accepted publickey for core from 10.0.0.1 port 41296 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:43:07.782547 sshd-session[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:43:07.827933 systemd-logind[1595]: New session 38 of user core. Jan 20 02:43:07.864769 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 20 02:43:08.074552 kubelet[2889]: E0120 02:43:08.071741 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:08.221439 sshd[5769]: Connection closed by 10.0.0.1 port 41296 Jan 20 02:43:08.227788 sshd-session[5765]: pam_unix(sshd:session): session closed for user core Jan 20 02:43:08.252185 systemd[1]: sshd@36-10.0.0.112:22-10.0.0.1:41296.service: Deactivated successfully. Jan 20 02:43:08.260769 systemd[1]: session-38.scope: Deactivated successfully. Jan 20 02:43:08.288993 systemd-logind[1595]: Session 38 logged out. Waiting for processes to exit. Jan 20 02:43:08.291711 systemd-logind[1595]: Removed session 38. Jan 20 02:43:09.077188 kubelet[2889]: E0120 02:43:09.074372 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:11.081885 kubelet[2889]: E0120 02:43:11.081578 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:13.278430 systemd[1]: Started sshd@37-10.0.0.112:22-10.0.0.1:41390.service - OpenSSH per-connection server daemon (10.0.0.1:41390). 
Jan 20 02:43:13.544139 sshd[5805]: Accepted publickey for core from 10.0.0.1 port 41390 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:43:13.559799 sshd-session[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:43:13.580013 systemd-logind[1595]: New session 39 of user core. Jan 20 02:43:13.589280 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 20 02:43:14.157472 sshd[5809]: Connection closed by 10.0.0.1 port 41390 Jan 20 02:43:14.169385 sshd-session[5805]: pam_unix(sshd:session): session closed for user core Jan 20 02:43:14.203184 systemd[1]: sshd@37-10.0.0.112:22-10.0.0.1:41390.service: Deactivated successfully. Jan 20 02:43:14.213289 systemd[1]: session-39.scope: Deactivated successfully. Jan 20 02:43:14.231887 systemd-logind[1595]: Session 39 logged out. Waiting for processes to exit. Jan 20 02:43:14.243225 systemd-logind[1595]: Removed session 39. Jan 20 02:43:19.239248 systemd[1]: Started sshd@38-10.0.0.112:22-10.0.0.1:57784.service - OpenSSH per-connection server daemon (10.0.0.1:57784). Jan 20 02:43:19.702702 sshd[5846]: Accepted publickey for core from 10.0.0.1 port 57784 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:43:19.718549 sshd-session[5846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:43:19.744879 systemd-logind[1595]: New session 40 of user core. Jan 20 02:43:19.768199 systemd[1]: Started session-40.scope - Session 40 of User core. Jan 20 02:43:20.249723 sshd[5850]: Connection closed by 10.0.0.1 port 57784 Jan 20 02:43:20.253245 sshd-session[5846]: pam_unix(sshd:session): session closed for user core Jan 20 02:43:20.326465 systemd-logind[1595]: Session 40 logged out. Waiting for processes to exit. Jan 20 02:43:20.339324 systemd[1]: sshd@38-10.0.0.112:22-10.0.0.1:57784.service: Deactivated successfully. Jan 20 02:43:20.357324 systemd[1]: session-40.scope: Deactivated successfully. 
Jan 20 02:43:20.366416 systemd-logind[1595]: Removed session 40. Jan 20 02:43:20.490933 containerd[1618]: time="2026-01-20T02:43:20.490536024Z" level=info msg="container event discarded" container=d39e39c7ee286dee8b916f7fa755e9314ad592de80b2590c56f94714ec5e898f type=CONTAINER_STOPPED_EVENT Jan 20 02:43:20.951745 containerd[1618]: time="2026-01-20T02:43:20.951654022Z" level=info msg="container event discarded" container=3b54d036f58a36175152d9bdb4cf97e9d5ddf26dfc12566875b6d404a066dcf6 type=CONTAINER_CREATED_EVENT Jan 20 02:43:21.760572 containerd[1618]: time="2026-01-20T02:43:21.760431869Z" level=info msg="container event discarded" container=3b54d036f58a36175152d9bdb4cf97e9d5ddf26dfc12566875b6d404a066dcf6 type=CONTAINER_STARTED_EVENT Jan 20 02:43:23.094108 kubelet[2889]: E0120 02:43:23.093945 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:25.337191 systemd[1]: Started sshd@39-10.0.0.112:22-10.0.0.1:33074.service - OpenSSH per-connection server daemon (10.0.0.1:33074). Jan 20 02:43:25.720892 sshd[5892]: Accepted publickey for core from 10.0.0.1 port 33074 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:43:25.730690 sshd-session[5892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:43:25.772584 systemd-logind[1595]: New session 41 of user core. Jan 20 02:43:25.832780 systemd[1]: Started session-41.scope - Session 41 of User core. Jan 20 02:43:26.322777 sshd[5911]: Connection closed by 10.0.0.1 port 33074 Jan 20 02:43:26.321231 sshd-session[5892]: pam_unix(sshd:session): session closed for user core Jan 20 02:43:26.337145 systemd-logind[1595]: Session 41 logged out. Waiting for processes to exit. Jan 20 02:43:26.347944 systemd[1]: sshd@39-10.0.0.112:22-10.0.0.1:33074.service: Deactivated successfully. 
Jan 20 02:43:26.358937 systemd[1]: session-41.scope: Deactivated successfully. Jan 20 02:43:26.380510 systemd-logind[1595]: Removed session 41. Jan 20 02:43:29.078457 kubelet[2889]: E0120 02:43:29.076295 2889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:31.374605 systemd[1]: Started sshd@40-10.0.0.112:22-10.0.0.1:33116.service - OpenSSH per-connection server daemon (10.0.0.1:33116). Jan 20 02:43:31.697891 sshd[5945]: Accepted publickey for core from 10.0.0.1 port 33116 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:43:31.706182 sshd-session[5945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:43:31.737373 systemd-logind[1595]: New session 42 of user core. Jan 20 02:43:31.775739 systemd[1]: Started session-42.scope - Session 42 of User core. Jan 20 02:43:32.502565 sshd[5949]: Connection closed by 10.0.0.1 port 33116 Jan 20 02:43:32.505157 sshd-session[5945]: pam_unix(sshd:session): session closed for user core Jan 20 02:43:32.542907 systemd[1]: sshd@40-10.0.0.112:22-10.0.0.1:33116.service: Deactivated successfully. Jan 20 02:43:32.564730 systemd[1]: session-42.scope: Deactivated successfully. Jan 20 02:43:32.585784 systemd-logind[1595]: Session 42 logged out. Waiting for processes to exit. Jan 20 02:43:32.602564 systemd-logind[1595]: Removed session 42. Jan 20 02:43:37.581163 systemd[1]: Started sshd@41-10.0.0.112:22-10.0.0.1:59758.service - OpenSSH per-connection server daemon (10.0.0.1:59758). Jan 20 02:43:37.947394 sshd[5983]: Accepted publickey for core from 10.0.0.1 port 59758 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:43:37.962961 sshd-session[5983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:43:38.020484 systemd-logind[1595]: New session 43 of user core. 
Jan 20 02:43:38.044937 systemd[1]: Started session-43.scope - Session 43 of User core. Jan 20 02:43:38.639780 sshd[5987]: Connection closed by 10.0.0.1 port 59758 Jan 20 02:43:38.643185 sshd-session[5983]: pam_unix(sshd:session): session closed for user core Jan 20 02:43:38.673649 systemd[1]: sshd@41-10.0.0.112:22-10.0.0.1:59758.service: Deactivated successfully. Jan 20 02:43:38.687564 systemd[1]: session-43.scope: Deactivated successfully. Jan 20 02:43:38.694344 systemd-logind[1595]: Session 43 logged out. Waiting for processes to exit. Jan 20 02:43:38.704584 systemd-logind[1595]: Removed session 43.