Jan 20 02:36:05.778490 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:27:27 -00 2026
Jan 20 02:36:05.778545 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ffc050d3940163f278aec6799df208aabf8f27b8f3e958c63256c067960f0c44
Jan 20 02:36:05.778566 kernel: BIOS-provided physical RAM map:
Jan 20 02:36:05.778576 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 20 02:36:05.778584 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 20 02:36:05.778592 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 20 02:36:05.778602 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 20 02:36:05.778614 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 20 02:36:05.778707 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 20 02:36:05.778721 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 20 02:36:05.778738 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 02:36:05.778747 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 20 02:36:05.778755 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 02:36:05.778764 kernel: NX (Execute Disable) protection: active
Jan 20 02:36:05.778870 kernel: APIC: Static calls initialized
Jan 20 02:36:05.778887 kernel: SMBIOS 2.8 present.
Jan 20 02:36:05.778974 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 20 02:36:05.778985 kernel: DMI: Memory slots populated: 1/1
Jan 20 02:36:05.787965 kernel: Hypervisor detected: KVM
Jan 20 02:36:05.787982 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 20 02:36:05.787992 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 02:36:05.788001 kernel: kvm-clock: using sched offset of 40087616314 cycles
Jan 20 02:36:05.788012 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 02:36:05.788023 kernel: tsc: Detected 2445.426 MHz processor
Jan 20 02:36:05.788048 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 02:36:05.788061 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 02:36:05.788071 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 20 02:36:05.788081 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 20 02:36:05.788091 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 02:36:05.788101 kernel: Using GB pages for direct mapping
Jan 20 02:36:05.788113 kernel: ACPI: Early table checksum verification disabled
Jan 20 02:36:05.788130 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 20 02:36:05.788142 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:36:05.788153 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:36:05.788163 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:36:05.788172 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 20 02:36:05.788183 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:36:05.788424 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:36:05.788449 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:36:05.788461 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:36:05.788476 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 20 02:36:05.788487 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 20 02:36:05.788497 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 20 02:36:05.788513 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 20 02:36:05.788525 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 20 02:36:05.788538 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 20 02:36:05.788550 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 20 02:36:05.788560 kernel: No NUMA configuration found
Jan 20 02:36:05.788570 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 20 02:36:05.788585 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 20 02:36:05.788596 kernel: Zone ranges:
Jan 20 02:36:05.788609 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 02:36:05.788622 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 20 02:36:05.788634 kernel: Normal empty
Jan 20 02:36:05.788646 kernel: Device empty
Jan 20 02:36:05.788659 kernel: Movable zone start for each node
Jan 20 02:36:05.788669 kernel: Early memory node ranges
Jan 20 02:36:05.788684 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 20 02:36:05.788694 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 20 02:36:05.788704 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 20 02:36:05.788718 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 02:36:05.788730 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 20 02:36:05.800410 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 20 02:36:05.800449 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 02:36:05.800475 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 02:36:05.800487 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 02:36:05.800499 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 02:36:05.800589 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 02:36:05.800603 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 02:36:05.800615 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 02:36:05.800626 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 02:36:05.800643 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 02:36:05.800654 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 20 02:36:05.800665 kernel: TSC deadline timer available
Jan 20 02:36:05.800675 kernel: CPU topo: Max. logical packages: 1
Jan 20 02:36:05.800686 kernel: CPU topo: Max. logical dies: 1
Jan 20 02:36:05.800697 kernel: CPU topo: Max. dies per package: 1
Jan 20 02:36:05.800708 kernel: CPU topo: Max. threads per core: 1
Jan 20 02:36:05.800720 kernel: CPU topo: Num. cores per package: 4
Jan 20 02:36:05.804988 kernel: CPU topo: Num. threads per package: 4
Jan 20 02:36:05.805005 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 20 02:36:05.805016 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 02:36:05.805026 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 20 02:36:05.805037 kernel: kvm-guest: setup PV sched yield
Jan 20 02:36:05.805047 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 20 02:36:05.805057 kernel: Booting paravirtualized kernel on KVM
Jan 20 02:36:05.805075 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 02:36:05.805086 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 20 02:36:05.805098 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 20 02:36:05.805109 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 20 02:36:05.805120 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 20 02:36:05.805131 kernel: kvm-guest: PV spinlocks enabled
Jan 20 02:36:05.805145 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 02:36:05.805166 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ffc050d3940163f278aec6799df208aabf8f27b8f3e958c63256c067960f0c44
Jan 20 02:36:05.805179 kernel: random: crng init done
Jan 20 02:36:05.805191 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 02:36:05.805359 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 02:36:05.805372 kernel: Fallback order for Node 0: 0
Jan 20 02:36:05.805384 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 20 02:36:05.805396 kernel: Policy zone: DMA32
Jan 20 02:36:05.805413 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 02:36:05.805425 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 20 02:36:05.805437 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 20 02:36:05.805450 kernel: ftrace: allocated 157 pages with 5 groups
Jan 20 02:36:05.805461 kernel: Dynamic Preempt: voluntary
Jan 20 02:36:05.805474 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 02:36:05.805487 kernel: rcu: RCU event tracing is enabled.
Jan 20 02:36:05.805502 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 20 02:36:05.805512 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 02:36:05.805607 kernel: Rude variant of Tasks RCU enabled.
Jan 20 02:36:05.805620 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 02:36:05.805630 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 02:36:05.805641 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 20 02:36:05.805651 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:36:05.805666 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:36:05.805678 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:36:05.805692 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 20 02:36:05.805703 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 02:36:05.805724 kernel: Console: colour VGA+ 80x25
Jan 20 02:36:05.805737 kernel: printk: legacy console [ttyS0] enabled
Jan 20 02:36:05.805748 kernel: ACPI: Core revision 20240827
Jan 20 02:36:05.805759 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 20 02:36:05.805860 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 02:36:05.805877 kernel: x2apic enabled
Jan 20 02:36:05.805890 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 02:36:05.805972 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 20 02:36:05.805986 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 20 02:36:05.806004 kernel: kvm-guest: setup PV IPIs
Jan 20 02:36:05.806017 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 20 02:36:05.806028 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 02:36:05.806039 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 20 02:36:05.806049 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 02:36:05.806060 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 02:36:05.806071 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 02:36:05.806085 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 02:36:05.806097 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 02:36:05.806111 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 02:36:05.806123 kernel: Speculative Store Bypass: Vulnerable
Jan 20 02:36:05.806134 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 02:36:05.806146 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 02:36:05.806157 kernel: active return thunk: srso_alias_return_thunk
Jan 20 02:36:05.806171 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 02:36:05.806182 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 20 02:36:05.806351 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 20 02:36:05.806368 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 02:36:05.806383 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 02:36:05.806396 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 02:36:05.806409 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 02:36:05.806426 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 02:36:05.806437 kernel: Freeing SMP alternatives memory: 32K
Jan 20 02:36:05.806448 kernel: pid_max: default: 32768 minimum: 301
Jan 20 02:36:05.806459 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 20 02:36:05.806472 kernel: landlock: Up and running.
Jan 20 02:36:05.806485 kernel: SELinux: Initializing.
Jan 20 02:36:05.806499 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 02:36:05.806515 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 02:36:05.806610 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 20 02:36:05.806623 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 20 02:36:05.806634 kernel: signal: max sigframe size: 1776
Jan 20 02:36:05.806647 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 02:36:05.806661 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 02:36:05.806672 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 20 02:36:05.806688 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 02:36:05.806699 kernel: smp: Bringing up secondary CPUs ...
Jan 20 02:36:05.806711 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 02:36:05.806725 kernel: .... node #0, CPUs: #1 #2 #3
Jan 20 02:36:05.806736 kernel: smp: Brought up 1 node, 4 CPUs
Jan 20 02:36:05.806746 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 20 02:36:05.806758 kernel: Memory: 2445292K/2571752K available (14336K kernel code, 2445K rwdata, 31636K rodata, 15532K init, 2508K bss, 120524K reserved, 0K cma-reserved)
Jan 20 02:36:05.817705 kernel: devtmpfs: initialized
Jan 20 02:36:05.817731 kernel: x86/mm: Memory block size: 128MB
Jan 20 02:36:05.817746 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 02:36:05.817760 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 20 02:36:05.817852 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 02:36:05.817869 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 02:36:05.817882 kernel: audit: initializing netlink subsys (disabled)
Jan 20 02:36:05.817904 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 02:36:05.817915 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 02:36:05.817926 kernel: audit: type=2000 audit(1768876495.587:1): state=initialized audit_enabled=0 res=1
Jan 20 02:36:05.817937 kernel: cpuidle: using governor menu
Jan 20 02:36:05.817947 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 02:36:05.817958 kernel: dca service started, version 1.12.1
Jan 20 02:36:05.817972 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 20 02:36:05.817989 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 20 02:36:05.818000 kernel: PCI: Using configuration type 1 for base access
Jan 20 02:36:05.818011 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 02:36:05.818022 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 02:36:05.818032 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 02:36:05.818043 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 02:36:05.818053 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 02:36:05.818069 kernel: ACPI: Added _OSI(Module Device)
Jan 20 02:36:05.818082 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 02:36:05.818096 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 02:36:05.818107 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 02:36:05.818117 kernel: ACPI: Interpreter enabled
Jan 20 02:36:05.818128 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 20 02:36:05.818139 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 02:36:05.818150 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 02:36:05.818166 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 02:36:05.818177 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 02:36:05.818190 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 02:36:05.840165 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 02:36:05.840675 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 20 02:36:05.841088 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 20 02:36:05.841114 kernel: PCI host bridge to bus 0000:00
Jan 20 02:36:05.841727 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 02:36:05.853677 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 02:36:05.854078 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 02:36:05.857115 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 20 02:36:05.857611 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 20 02:36:05.859964 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 20 02:36:05.862374 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 02:36:05.862766 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 20 02:36:05.863351 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 20 02:36:05.863621 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 20 02:36:05.866111 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 20 02:36:05.866554 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 20 02:36:05.867907 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 02:36:05.868156 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 17578 usecs
Jan 20 02:36:05.868577 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 02:36:05.870056 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 20 02:36:05.870485 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 20 02:36:05.870745 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 20 02:36:05.871111 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 02:36:05.871525 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 20 02:36:05.874976 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 20 02:36:05.875550 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 20 02:36:05.876000 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 02:36:05.876415 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 20 02:36:05.876662 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 20 02:36:05.880092 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 20 02:36:05.880705 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 20 02:36:05.884645 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 20 02:36:05.885066 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 02:36:05.885530 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 16601 usecs
Jan 20 02:36:05.885877 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 20 02:36:05.886115 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 20 02:36:05.886523 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 20 02:36:05.906402 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 20 02:36:05.906729 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 20 02:36:05.906759 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 02:36:05.906864 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 02:36:05.906880 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 02:36:05.906906 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 02:36:05.906917 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 02:36:05.906928 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 02:36:05.906939 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 02:36:05.906950 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 02:36:05.914659 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 02:36:05.914687 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 02:36:05.914713 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 02:36:05.914728 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 02:36:05.914740 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 02:36:05.914751 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 02:36:05.914762 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 02:36:05.914866 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 02:36:05.914878 kernel: iommu: Default domain type: Translated
Jan 20 02:36:05.914889 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 02:36:05.914906 kernel: PCI: Using ACPI for IRQ routing
Jan 20 02:36:05.914917 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 02:36:05.914928 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 20 02:36:05.914939 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 20 02:36:05.919608 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 02:36:05.928000 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 02:36:05.932166 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 02:36:05.932350 kernel: vgaarb: loaded
Jan 20 02:36:05.932367 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 20 02:36:05.932379 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 20 02:36:05.932390 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 02:36:05.932402 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 02:36:05.932417 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 02:36:05.932428 kernel: pnp: PnP ACPI init
Jan 20 02:36:05.933075 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 20 02:36:05.933098 kernel: pnp: PnP ACPI: found 6 devices
Jan 20 02:36:05.933111 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 02:36:05.933124 kernel: NET: Registered PF_INET protocol family
Jan 20 02:36:05.933136 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 02:36:05.933148 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 02:36:05.933167 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 02:36:05.933179 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 02:36:05.933191 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 02:36:05.933363 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 02:36:05.933375 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 02:36:05.933387 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 02:36:05.933399 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 02:36:05.933416 kernel: NET: Registered PF_XDP protocol family
Jan 20 02:36:05.933657 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 02:36:05.957423 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 02:36:05.957873 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 02:36:05.958124 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 20 02:36:05.958507 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 20 02:36:05.966744 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 20 02:36:05.966972 kernel: hrtimer: interrupt took 6098974 ns
Jan 20 02:36:05.966990 kernel: PCI: CLS 0 bytes, default 64
Jan 20 02:36:05.967002 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 02:36:05.967015 kernel: Initialise system trusted keyrings
Jan 20 02:36:05.967027 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 02:36:05.967040 kernel: Key type asymmetric registered
Jan 20 02:36:05.967051 kernel: Asymmetric key parser 'x509' registered
Jan 20 02:36:05.967071 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 20 02:36:05.967087 kernel: io scheduler mq-deadline registered
Jan 20 02:36:05.967098 kernel: io scheduler kyber registered
Jan 20 02:36:05.967109 kernel: io scheduler bfq registered
Jan 20 02:36:05.967120 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 02:36:05.967132 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 02:36:05.967143 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 02:36:05.967159 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 02:36:05.967172 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 02:36:05.967186 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 02:36:05.975175 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 02:36:05.977975 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 02:36:05.977995 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 02:36:05.985985 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 02:36:05.986038 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 20 02:36:05.986480 kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 02:36:05.986720 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T02:35:26 UTC (1768876526)
Jan 20 02:36:06.001360 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 20 02:36:06.001408 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 02:36:06.001421 kernel: NET: Registered PF_INET6 protocol family
Jan 20 02:36:06.001450 kernel: Segment Routing with IPv6
Jan 20 02:36:06.001461 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 02:36:06.001472 kernel: NET: Registered PF_PACKET protocol family
Jan 20 02:36:06.001483 kernel: Key type dns_resolver registered
Jan 20 02:36:06.001494 kernel: IPI shorthand broadcast: enabled
Jan 20 02:36:06.001505 kernel: sched_clock: Marking stable (24839057616, 2567044204)->(32699371362, -5293269542)
Jan 20 02:36:06.001516 kernel: registered taskstats version 1
Jan 20 02:36:06.001530 kernel: Loading compiled-in X.509 certificates
Jan 20 02:36:06.001548 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 39f154fc6e329874bced8cdae9473f98b7dd3f43'
Jan 20 02:36:06.001559 kernel: Demotion targets for Node 0: null
Jan 20 02:36:06.001570 kernel: Key type .fscrypt registered
Jan 20 02:36:06.001580 kernel: Key type fscrypt-provisioning registered
Jan 20 02:36:06.001591 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 02:36:06.001601 kernel: ima: Allocated hash algorithm: sha1
Jan 20 02:36:06.001617 kernel: ima: No architecture policies found
Jan 20 02:36:06.001631 kernel: clk: Disabling unused clocks
Jan 20 02:36:06.001642 kernel: Freeing unused kernel image (initmem) memory: 15532K
Jan 20 02:36:06.001652 kernel: Write protecting the kernel read-only data: 47104k
Jan 20 02:36:06.001663 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K
Jan 20 02:36:06.001674 kernel: Run /init as init process
Jan 20 02:36:06.001685 kernel: with arguments:
Jan 20 02:36:06.001695 kernel: /init
Jan 20 02:36:06.001712 kernel: with environment:
Jan 20 02:36:06.001724 kernel: HOME=/
Jan 20 02:36:06.001737 kernel: TERM=linux
Jan 20 02:36:06.001748 kernel: SCSI subsystem initialized
Jan 20 02:36:06.001758 kernel: libata version 3.00 loaded.
Jan 20 02:36:06.015079 kernel: ahci 0000:00:1f.2: version 3.0
Jan 20 02:36:06.015130 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 20 02:36:06.015612 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 20 02:36:06.025730 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 20 02:36:06.026154 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 20 02:36:06.035728 kernel: scsi host0: ahci
Jan 20 02:36:06.036454 kernel: scsi host1: ahci
Jan 20 02:36:06.042654 kernel: scsi host2: ahci
Jan 20 02:36:06.043406 kernel: scsi host3: ahci
Jan 20 02:36:06.057025 kernel: scsi host4: ahci
Jan 20 02:36:06.060349 kernel: scsi host5: ahci
Jan 20 02:36:06.060389 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Jan 20 02:36:06.060403 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Jan 20 02:36:06.060430 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Jan 20 02:36:06.060441 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Jan 20 02:36:06.060453 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Jan 20 02:36:06.060464 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Jan 20 02:36:06.060484 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 20 02:36:06.060496 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 20 02:36:06.060507 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 20 02:36:06.060522 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 20 02:36:06.060533 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 20 02:36:06.060544 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 20 02:36:06.060555 kernel: ata3.00: LPM support broken, forcing max_power
Jan 20 02:36:06.060566 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 20 02:36:06.060581 kernel: ata3.00: applying bridge limits
Jan 20 02:36:06.063028 kernel: ata3.00: LPM support broken, forcing max_power
Jan 20 02:36:06.063056 kernel: ata3.00: configured for UDMA/100
Jan 20 02:36:06.063690 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 20 02:36:06.071086 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 20 02:36:06.071572 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 20 02:36:06.071600 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 02:36:06.086468 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Jan 20 02:36:06.086537 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 20 02:36:06.088592 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 20 02:36:06.088626 kernel: GPT:16515071 != 27000831
Jan 20 02:36:06.088640 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 20 02:36:06.088651 kernel: GPT:16515071 != 27000831
Jan 20 02:36:06.088661 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 20 02:36:06.088682 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 02:36:06.088697 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 02:36:06.088712 kernel: device-mapper: uevent: version 1.0.3
Jan 20 02:36:06.088725 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 20 02:36:06.088736 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 20 02:36:06.088748 kernel: raid6: avx2x4 gen() 6804 MB/s
Jan 20 02:36:06.088759 kernel: raid6: avx2x2 gen() 5712 MB/s
Jan 20 02:36:06.088860 kernel: raid6: avx2x1 gen() 3376 MB/s
Jan 20 02:36:06.088880 kernel: raid6: using algorithm avx2x4 gen() 6804 MB/s
Jan 20 02:36:06.088892 kernel: raid6: .... xor() 761 MB/s, rmw enabled
Jan 20 02:36:06.088904 kernel: raid6: using avx2x2 recovery algorithm
Jan 20 02:36:06.088920 kernel: xor: automatically using best checksumming function avx
Jan 20 02:36:06.088938 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 20 02:36:06.088949 kernel: BTRFS: device fsid 95a8358a-4aa8-4215-9cd3-5b140c6c0a16 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (180)
Jan 20 02:36:06.088961 kernel: BTRFS info (device dm-0): first mount of filesystem 95a8358a-4aa8-4215-9cd3-5b140c6c0a16
Jan 20 02:36:06.088972 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 20 02:36:06.088983 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 20 02:36:06.088995 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 20 02:36:06.089009 kernel: loop: module loaded
Jan 20 02:36:06.089027 kernel: loop0: detected capacity change from 0 to 100552
Jan 20 02:36:06.089039 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 20 02:36:06.089051 systemd[1]: Successfully made /usr/ read-only.
Jan 20 02:36:06.089066 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 20 02:36:06.089079 systemd[1]: Detected virtualization kvm.
Jan 20 02:36:06.089091 systemd[1]: Detected architecture x86-64.
Jan 20 02:36:06.089107 systemd[1]: Running in initrd.
Jan 20 02:36:06.089121 systemd[1]: No hostname configured, using default hostname.
Jan 20 02:36:06.089136 systemd[1]: Hostname set to .
Jan 20 02:36:06.089148 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 20 02:36:06.089160 systemd[1]: Queued start job for default target initrd.target.
Jan 20 02:36:06.089172 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 02:36:06.089188 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 02:36:06.089364 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 02:36:06.089381 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 02:36:06.089394 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 02:36:06.089406 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 02:36:06.089419 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 02:36:06.089436 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 02:36:06.089448 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 02:36:06.089460 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 02:36:06.089474 systemd[1]: Reached target paths.target - Path Units. Jan 20 02:36:06.089489 systemd[1]: Reached target slices.target - Slice Units. Jan 20 02:36:06.089501 systemd[1]: Reached target swap.target - Swaps. Jan 20 02:36:06.089513 systemd[1]: Reached target timers.target - Timer Units. Jan 20 02:36:06.089529 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 02:36:06.089541 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 02:36:06.089553 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 02:36:06.089565 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 02:36:06.089577 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Jan 20 02:36:06.089590 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 02:36:06.089602 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 02:36:06.089622 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 02:36:06.089634 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 02:36:06.089646 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 02:36:06.089658 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 02:36:06.089670 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 02:36:06.089682 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 02:36:06.089699 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 02:36:06.089713 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 02:36:06.089727 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 02:36:06.089739 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 02:36:06.089752 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 02:36:06.089768 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 02:36:06.099149 systemd-journald[320]: Collecting audit messages is enabled. Jan 20 02:36:06.099362 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 02:36:06.099382 kernel: audit: type=1130 audit(1768876565.716:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:36:06.099397 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 02:36:06.099410 kernel: audit: type=1130 audit(1768876565.831:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:06.099423 systemd-journald[320]: Journal started Jan 20 02:36:06.099459 systemd-journald[320]: Runtime Journal (/run/log/journal/b1a04432d6834c4597750e169fd2bb58) is 6M, max 48.2M, 42.1M free. Jan 20 02:36:05.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:05.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:06.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:06.169736 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 02:36:06.169907 kernel: audit: type=1130 audit(1768876566.123:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:06.467648 kernel: audit: type=1130 audit(1768876566.318:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:06.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:36:06.498648 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 02:36:06.526585 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 02:36:07.116997 systemd-tmpfiles[332]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 20 02:36:07.207948 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 02:36:10.816908 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 02:36:10.816989 kernel: Bridge firewalling registered Jan 20 02:36:10.817008 kernel: audit: type=1130 audit(1768876570.334:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:10.817027 kernel: audit: type=1130 audit(1768876570.543:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:10.817044 kernel: audit: type=1130 audit(1768876570.801:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:10.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:10.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:36:10.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:07.888104 systemd-modules-load[321]: Inserted module 'br_netfilter' Jan 20 02:36:10.355962 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 02:36:11.430336 kernel: audit: type=1130 audit(1768876571.120:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:11.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:10.679565 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 02:36:11.011796 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 02:36:11.245379 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 02:36:11.924796 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 02:36:12.272782 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 02:36:13.041494 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 02:36:13.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:13.383025 kernel: audit: type=1130 audit(1768876573.041:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 20 02:36:13.339823 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 02:36:13.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:13.667156 kernel: audit: type=1130 audit(1768876573.535:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:13.779656 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 02:36:13.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:14.049520 kernel: audit: type=1130 audit(1768876573.853:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:14.198533 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 02:36:14.322000 audit: BPF prog-id=6 op=LOAD Jan 20 02:36:14.354558 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 20 02:36:14.377269 kernel: audit: type=1334 audit(1768876574.322:13): prog-id=6 op=LOAD Jan 20 02:36:15.490592 dracut-cmdline[357]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ffc050d3940163f278aec6799df208aabf8f27b8f3e958c63256c067960f0c44 Jan 20 02:36:15.978177 systemd-resolved[358]: Positive Trust Anchors: Jan 20 02:36:15.980044 systemd-resolved[358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 02:36:15.989070 systemd-resolved[358]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 02:36:15.989127 systemd-resolved[358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 02:36:16.340497 systemd-resolved[358]: Defaulting to hostname 'linux'. Jan 20 02:36:16.382740 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 02:36:16.583652 kernel: audit: type=1130 audit(1768876576.399:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:36:16.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:16.400604 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 02:36:17.181924 kernel: Loading iSCSI transport class v2.0-870. Jan 20 02:36:17.323036 kernel: iscsi: registered transport (tcp) Jan 20 02:36:17.622773 kernel: iscsi: registered transport (qla4xxx) Jan 20 02:36:17.622911 kernel: QLogic iSCSI HBA Driver Jan 20 02:36:18.050678 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 02:36:18.205836 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 02:36:18.313821 kernel: audit: type=1130 audit(1768876578.247:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:18.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:18.260928 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 02:36:18.620814 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 02:36:18.712645 kernel: audit: type=1130 audit(1768876578.651:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:18.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:36:18.672427 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 02:36:18.755528 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 02:36:19.109731 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 02:36:19.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:19.217691 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 02:36:19.391567 kernel: audit: type=1130 audit(1768876579.180:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:19.391612 kernel: audit: type=1334 audit(1768876579.198:18): prog-id=7 op=LOAD Jan 20 02:36:19.391629 kernel: audit: type=1334 audit(1768876579.198:19): prog-id=8 op=LOAD Jan 20 02:36:19.198000 audit: BPF prog-id=7 op=LOAD Jan 20 02:36:19.198000 audit: BPF prog-id=8 op=LOAD Jan 20 02:36:19.547540 systemd-udevd[603]: Using default interface naming scheme 'v257'. Jan 20 02:36:19.740643 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 02:36:19.831471 kernel: audit: type=1130 audit(1768876579.758:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:19.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:19.785554 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 20 02:36:19.960454 dracut-pre-trigger[667]: rd.md=0: removing MD RAID activation Jan 20 02:36:20.000676 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 02:36:20.084495 kernel: audit: type=1130 audit(1768876580.016:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:20.084543 kernel: audit: type=1334 audit(1768876580.034:22): prog-id=9 op=LOAD Jan 20 02:36:20.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:20.034000 audit: BPF prog-id=9 op=LOAD Jan 20 02:36:20.049448 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 02:36:20.198968 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 02:36:20.284718 kernel: audit: type=1130 audit(1768876580.208:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:20.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:20.260069 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 02:36:20.395026 systemd-networkd[716]: lo: Link UP Jan 20 02:36:20.396983 systemd-networkd[716]: lo: Gained carrier Jan 20 02:36:20.407548 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 02:36:20.477613 systemd[1]: Reached target network.target - Network. 
Jan 20 02:36:20.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:21.098805 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 02:36:21.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:21.182595 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 02:36:21.730048 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 20 02:36:22.121908 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 02:36:22.304788 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 02:36:22.542866 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 20 02:36:22.638147 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 02:36:22.689697 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 02:36:22.689917 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 02:36:22.752069 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 02:36:22.832569 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 20 02:36:22.832623 kernel: audit: type=1131 audit(1768876582.751:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:36:22.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:22.854502 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 02:36:23.024685 disk-uuid[774]: Primary Header is updated. Jan 20 02:36:23.024685 disk-uuid[774]: Secondary Entries is updated. Jan 20 02:36:23.024685 disk-uuid[774]: Secondary Header is updated. Jan 20 02:36:23.177170 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 02:36:23.489900 systemd-networkd[716]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 02:36:25.043254 kernel: AES CTR mode by8 optimization enabled Jan 20 02:36:25.043309 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 20 02:36:25.043419 disk-uuid[775]: Warning: The kernel is still using the old partition table. Jan 20 02:36:25.043419 disk-uuid[775]: The new table will be used at the next reboot or after you Jan 20 02:36:25.043419 disk-uuid[775]: run partprobe(8) or kpartx(8) Jan 20 02:36:25.043419 disk-uuid[775]: The operation has completed successfully. Jan 20 02:36:25.508465 kernel: audit: type=1130 audit(1768876585.081:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:25.508519 kernel: audit: type=1131 audit(1768876585.081:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:36:25.508540 kernel: audit: type=1130 audit(1768876585.239:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:25.508561 kernel: audit: type=1130 audit(1768876585.390:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:25.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:25.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:25.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:25.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:23.489915 systemd-networkd[716]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 02:36:23.517192 systemd-networkd[716]: eth0: Link UP Jan 20 02:36:23.543166 systemd-networkd[716]: eth0: Gained carrier Jan 20 02:36:23.543189 systemd-networkd[716]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 02:36:23.660090 systemd-networkd[716]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 02:36:24.688464 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jan 20 02:36:24.691663 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 02:36:25.087864 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 02:36:25.342936 systemd-networkd[716]: eth0: Gained IPv6LL Jan 20 02:36:25.344762 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 02:36:25.430761 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 02:36:25.546856 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 02:36:25.592136 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 02:36:25.643660 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 02:36:25.660561 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 02:36:26.044931 kernel: audit: type=1130 audit(1768876585.961:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:25.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:25.949680 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 20 02:36:26.188406 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (868) Jan 20 02:36:26.233495 kernel: BTRFS info (device vda6): first mount of filesystem ad08584f-77ce-45c9-9cd1-daa815089251 Jan 20 02:36:26.233594 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 02:36:26.303118 kernel: BTRFS info (device vda6): turning on async discard Jan 20 02:36:26.303316 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 02:36:26.387631 kernel: BTRFS info (device vda6): last unmount of filesystem ad08584f-77ce-45c9-9cd1-daa815089251 Jan 20 02:36:26.440714 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 02:36:26.562354 kernel: audit: type=1130 audit(1768876586.460:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:26.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:26.543729 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 20 02:36:27.557907 ignition[887]: Ignition 2.24.0
Jan 20 02:36:27.565721 ignition[887]: Stage: fetch-offline
Jan 20 02:36:27.565821 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Jan 20 02:36:27.565846 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:36:27.570791 ignition[887]: parsed url from cmdline: ""
Jan 20 02:36:27.570801 ignition[887]: no config URL provided
Jan 20 02:36:27.570815 ignition[887]: reading system config file "/usr/lib/ignition/user.ign"
Jan 20 02:36:27.570841 ignition[887]: no config at "/usr/lib/ignition/user.ign"
Jan 20 02:36:27.570919 ignition[887]: op(1): [started] loading QEMU firmware config module
Jan 20 02:36:27.570927 ignition[887]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 20 02:36:27.790510 ignition[887]: op(1): [finished] loading QEMU firmware config module
Jan 20 02:36:28.277513 ignition[887]: parsing config with SHA512: fbc55cae844781873e4b3426819a2d9fe715fcbcb4ddfb7e16e221f70acdcf9b323d9825b9c3567352cfe8cbcbab86c1ae4e5431857da34b27707a0099700dba
Jan 20 02:36:28.361616 unknown[887]: fetched base config from "system"
Jan 20 02:36:28.361628 unknown[887]: fetched user config from "qemu"
Jan 20 02:36:28.423903 ignition[887]: fetch-offline: fetch-offline passed
Jan 20 02:36:28.424092 ignition[887]: Ignition finished successfully
Jan 20 02:36:28.472301 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 02:36:28.580496 kernel: audit: type=1130 audit(1768876588.501:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:28.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:28.507753 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 20 02:36:28.518464 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 20 02:36:31.937348 ignition[898]: Ignition 2.24.0
Jan 20 02:36:31.937414 ignition[898]: Stage: kargs
Jan 20 02:36:31.937747 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Jan 20 02:36:31.937763 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:36:32.112901 ignition[898]: kargs: kargs passed
Jan 20 02:36:32.336948 ignition[898]: Ignition finished successfully
Jan 20 02:36:32.538558 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 20 02:36:32.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:32.716730 kernel: audit: type=1130 audit(1768876592.649:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:32.744822 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 20 02:36:33.484862 ignition[906]: Ignition 2.24.0
Jan 20 02:36:33.488312 ignition[906]: Stage: disks
Jan 20 02:36:33.488691 ignition[906]: no configs at "/usr/lib/ignition/base.d"
Jan 20 02:36:33.488714 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:36:33.571928 ignition[906]: disks: disks passed
Jan 20 02:36:33.578577 ignition[906]: Ignition finished successfully
Jan 20 02:36:33.663753 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 20 02:36:33.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:33.786557 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 20 02:36:33.987597 kernel: audit: type=1130 audit(1768876593.776:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:33.861159 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 20 02:36:33.861421 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 02:36:33.861472 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 02:36:33.861565 systemd[1]: Reached target basic.target - Basic System.
Jan 20 02:36:33.997846 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 20 02:36:35.194622 systemd-fsck[915]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Jan 20 02:36:35.356737 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 20 02:36:35.566521 kernel: audit: type=1130 audit(1768876595.416:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:35.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:35.433916 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 20 02:36:38.061830 kernel: EXT4-fs (vda9): mounted filesystem 452c2147-bc43-4f48-ad5f-dc139dd95c0b r/w with ordered data mode. Quota mode: none.
Jan 20 02:36:38.081825 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 20 02:36:38.198357 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 20 02:36:38.302446 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
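The fetch-offline stage above logs the SHA-512 of the parsed Ignition config. That digest is just the SHA-512 of the rendered JSON, so it can be reproduced with `sha512sum`; the JSON below is a placeholder, since the actual config that produced the digest in this log is not part of the log itself.

```shell
# Recompute an Ignition-style config digest. The config content here is
# a stand-in -- only the hashing step mirrors what Ignition logs.
config='{"ignition":{"version":"3.4.0"}}'
digest=$(printf '%s' "$config" | sha512sum | awk '{print $1}')
echo "parsing config with SHA512: $digest"
```

Comparing such a recomputed digest against the logged one is a quick way to confirm which config a boot actually consumed.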
Jan 20 02:36:38.795431 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 20 02:36:38.875789 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 20 02:36:38.875858 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 20 02:36:38.875909 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 02:36:39.539709 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 20 02:36:39.741686 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (924)
Jan 20 02:36:39.741747 kernel: BTRFS info (device vda6): first mount of filesystem ad08584f-77ce-45c9-9cd1-daa815089251
Jan 20 02:36:39.741768 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 02:36:39.759165 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 20 02:36:39.844463 kernel: BTRFS info (device vda6): turning on async discard
Jan 20 02:36:39.845511 kernel: BTRFS info (device vda6): enabling free space tree
Jan 20 02:36:39.874981 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 02:36:41.628699 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 20 02:36:41.780036 kernel: audit: type=1130 audit(1768876601.662:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:41.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:41.685920 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
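The "was skipped because of an unmet condition check" messages in this log come from systemd unit conditions such as `ConditionPathExists=!/run/ignition.json` (the leading `!` negates the test). The logic reduces to a plain path test, sketched here against a throwaway directory rather than the real `/run`:

```shell
# Rough shell equivalent of ConditionPathExists=!/run/ignition.json:
# the unit runs only while the path does NOT exist. A temp dir stands
# in for /run so this sketch is safe to execute anywhere.
run=$(mktemp -d)
if [ ! -e "$run/ignition.json" ]; then
    echo "condition met: would start ignition-fetch.service"
fi
touch "$run/ignition.json"
if [ -e "$run/ignition.json" ]; then
    echo "skipped because of an unmet condition check"
fi
rm -rf "$run"
```

systemd evaluates such conditions at job-start time, which is why the unit is reported as "skipped" rather than "failed".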
Jan 20 02:36:41.891934 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 20 02:36:42.034494 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 20 02:36:42.140297 kernel: BTRFS info (device vda6): last unmount of filesystem ad08584f-77ce-45c9-9cd1-daa815089251
Jan 20 02:36:42.437429 ignition[1020]: INFO : Ignition 2.24.0
Jan 20 02:36:42.437429 ignition[1020]: INFO : Stage: mount
Jan 20 02:36:42.506823 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 02:36:42.506823 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:36:42.506823 ignition[1020]: INFO : mount: mount passed
Jan 20 02:36:42.506823 ignition[1020]: INFO : Ignition finished successfully
Jan 20 02:36:42.836914 kernel: audit: type=1130 audit(1768876602.629:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:42.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:42.512722 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 20 02:36:42.734530 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 20 02:36:42.969696 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 20 02:36:43.017035 kernel: audit: type=1130 audit(1768876602.985:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:42.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:43.034641 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 02:36:43.168939 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1033)
Jan 20 02:36:43.201361 kernel: BTRFS info (device vda6): first mount of filesystem ad08584f-77ce-45c9-9cd1-daa815089251
Jan 20 02:36:43.201444 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 02:36:43.259814 kernel: BTRFS info (device vda6): turning on async discard
Jan 20 02:36:43.259911 kernel: BTRFS info (device vda6): enabling free space tree
Jan 20 02:36:43.296675 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 02:36:43.486356 ignition[1050]: INFO : Ignition 2.24.0
Jan 20 02:36:43.514321 ignition[1050]: INFO : Stage: files
Jan 20 02:36:43.514321 ignition[1050]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 02:36:43.514321 ignition[1050]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:36:43.514321 ignition[1050]: DEBUG : files: compiled without relabeling support, skipping
Jan 20 02:36:43.716566 ignition[1050]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 20 02:36:43.716566 ignition[1050]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 20 02:36:43.716566 ignition[1050]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 20 02:36:43.716566 ignition[1050]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 20 02:36:43.716566 ignition[1050]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 20 02:36:43.716566 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 20 02:36:43.716566 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 20 02:36:43.624525 unknown[1050]: wrote ssh authorized keys file for user: core
Jan 20 02:36:44.272318 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 20 02:36:44.853587 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 20 02:36:44.853587 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 20 02:36:44.853587 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 20 02:36:44.853587 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 02:36:44.853587 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 02:36:44.853587 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 02:36:44.853587 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 02:36:44.853587 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 02:36:44.853587 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 02:36:45.442774 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 02:36:45.442774 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 02:36:45.442774 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 20 02:36:45.442774 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 20 02:36:45.442774 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 20 02:36:45.442774 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 20 02:36:45.987397 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 20 02:36:48.370052 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 20 02:36:48.370052 ignition[1050]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 20 02:36:48.503661 ignition[1050]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 02:36:48.503661 ignition[1050]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 02:36:48.503661 ignition[1050]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 20 02:36:48.503661 ignition[1050]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 20 02:36:48.503661 ignition[1050]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 02:36:48.503661 ignition[1050]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 02:36:48.503661 ignition[1050]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 20 02:36:48.503661 ignition[1050]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 20 02:36:49.153774 ignition[1050]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 02:36:49.234694 ignition[1050]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 02:36:49.234694 ignition[1050]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 20 02:36:49.234694 ignition[1050]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 20 02:36:49.234694 ignition[1050]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 20 02:36:49.234694 ignition[1050]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 02:36:49.234694 ignition[1050]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 02:36:49.234694 ignition[1050]: INFO : files: files passed
Jan 20 02:36:49.234694 ignition[1050]: INFO : Ignition finished successfully
Jan 20 02:36:49.917478 kernel: audit: type=1130 audit(1768876609.774:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:49.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:49.472640 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 20 02:36:49.806515 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
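The "setting preset to enabled/disabled" and "removing enablement symlink(s)" operations in the files stage amount to creating or deleting symlinks under a target's `.wants` directory. A sketch against a throwaway directory (the `multi-user.target.wants` location is an assumption for illustration; the real install target depends on each unit's `[Install]` section):

```shell
# Sandbox stand-in for /sysroot -- no real system state is touched.
sysroot=$(mktemp -d)
wants="$sysroot/etc/systemd/system/multi-user.target.wants"
mkdir -p "$wants"
touch "$sysroot/etc/systemd/system/prepare-helm.service"

# "setting preset to enabled": add an enablement symlink
ln -s ../prepare-helm.service "$wants/prepare-helm.service"

# "setting preset to disabled": remove any enablement symlink(s)
rm -f "$wants/coreos-metadata.service"

ls "$wants"
rm -rf "$sysroot"
```

On a live system `systemctl preset <unit>` performs the same reconciliation, driven by the preset files under `/usr/lib/systemd/system-preset/`.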
Jan 20 02:36:50.053823 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 20 02:36:50.123729 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 20 02:36:50.123909 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 20 02:36:50.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:50.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:50.485421 kernel: audit: type=1130 audit(1768876610.298:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:50.485512 kernel: audit: type=1131 audit(1768876610.298:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:50.502700 initrd-setup-root-after-ignition[1081]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 20 02:36:50.571070 initrd-setup-root-after-ignition[1083]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 02:36:50.571070 initrd-setup-root-after-ignition[1083]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 02:36:50.621481 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 02:36:50.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:50.759708 kernel: audit: type=1130 audit(1768876610.654:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:50.630819 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 02:36:50.759413 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 20 02:36:50.932955 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 20 02:36:55.170424 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 20 02:36:55.363318 kernel: audit: type=1130 audit(1768876615.223:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:55.363443 kernel: audit: type=1131 audit(1768876615.223:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:55.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:55.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:55.170767 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 20 02:36:55.234824 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 20 02:36:55.396472 systemd[1]: Reached target initrd.target - Initrd Default Target.
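The `audit(1768876610.298:41)` fields scattered through these records are a Unix epoch timestamp (seconds, with milliseconds after the dot) plus a per-boot event serial. With GNU `date` (a Linux assumption) the epoch decodes back to the journal's wall-clock time:

```shell
# Decode the epoch portion of an audit record seen above.
epoch=1768876610   # from "audit(1768876610.298:41)"
date -u -d "@$epoch" '+%b %d %H:%M:%S'
# -> Jan 20 02:36:50, matching the journal timestamp on that record
```

This is handy when correlating `kernel: audit:` lines, which are emitted out of order relative to their journal timestamps, with the rest of the log.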
Jan 20 02:36:55.799032 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 20 02:36:55.938786 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 20 02:36:57.391416 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 02:36:57.736866 kernel: audit: type=1130 audit(1768876617.472:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:57.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:57.745086 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 20 02:36:58.089653 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 20 02:36:58.229649 kernel: audit: type=1131 audit(1768876618.124:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:58.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:58.090132 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 20 02:36:58.106451 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 02:36:58.106687 systemd[1]: Stopped target timers.target - Timer Units.
Jan 20 02:36:58.106867 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 20 02:36:58.107150 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 02:36:58.506788 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 20 02:36:58.581989 systemd[1]: Stopped target basic.target - Basic System.
Jan 20 02:36:58.627671 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 20 02:36:58.713908 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 02:36:58.797909 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 20 02:36:58.844836 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 20 02:36:58.922797 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 20 02:36:58.959079 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 02:36:58.978966 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 20 02:36:59.210711 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 20 02:36:59.259386 systemd[1]: Stopped target swap.target - Swaps.
Jan 20 02:36:59.361685 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 20 02:36:59.529531 kernel: audit: type=1131 audit(1768876619.384:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:59.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:59.362004 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 02:36:59.384982 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 20 02:36:59.577806 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 02:36:59.660059 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 20 02:36:59.666602 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 02:36:59.852011 kernel: audit: type=1131 audit(1768876619.698:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:59.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:36:59.676989 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 20 02:36:59.677444 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 20 02:36:59.854917 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 20 02:36:59.862104 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 02:36:59.956936 systemd[1]: Stopped target paths.target - Path Units.
Jan 20 02:36:59.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:00.235428 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 20 02:37:00.374144 kernel: audit: type=1131 audit(1768876619.956:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:00.244486 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 02:37:00.524562 systemd[1]: Stopped target slices.target - Slice Units.
Jan 20 02:37:00.568653 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 20 02:37:00.616850 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 20 02:37:00.617125 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 02:37:00.704561 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 20 02:37:00.713833 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 02:37:00.879014 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Jan 20 02:37:00.879139 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Jan 20 02:37:01.257742 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 20 02:37:01.260442 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 02:37:01.436606 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 20 02:37:01.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:01.436845 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 20 02:37:01.864475 kernel: audit: type=1131 audit(1768876621.436:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:01.864524 kernel: audit: type=1131 audit(1768876621.476:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:01.864543 kernel: audit: type=1131 audit(1768876621.569:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:01.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:01.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:01.497879 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 20 02:37:01.561600 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 20 02:37:01.562029 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 02:37:02.006774 ignition[1107]: INFO : Ignition 2.24.0
Jan 20 02:37:02.006774 ignition[1107]: INFO : Stage: umount
Jan 20 02:37:02.006774 ignition[1107]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 02:37:02.006774 ignition[1107]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:37:02.006774 ignition[1107]: INFO : umount: umount passed
Jan 20 02:37:02.006774 ignition[1107]: INFO : Ignition finished successfully
Jan 20 02:37:02.412628 kernel: audit: type=1131 audit(1768876622.047:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:02.412690 kernel: audit: type=1131 audit(1768876622.047:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:02.412712 kernel: audit: type=1131 audit(1768876622.047:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:02.412732 kernel: audit: type=1131 audit(1768876622.293:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:02.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:02.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:02.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:02.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:01.911112 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 20 02:37:02.051329 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 20 02:37:02.051762 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 02:37:02.864093 kernel: audit: type=1130 audit(1768876622.561:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:02.864159 kernel: audit: type=1131 audit(1768876622.561:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:02.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:02.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:02.052028 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 20 02:37:02.052350 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 02:37:02.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:02.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:02.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:02.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:02.052568 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 20 02:37:02.052734 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 02:37:02.066129 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 20 02:37:02.245088 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 20 02:37:02.392904 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 20 02:37:02.393081 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 20 02:37:02.675540 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 20 02:37:02.705639 systemd[1]: Stopped target network.target - Network.
Jan 20 02:37:02.875028 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 20 02:37:02.875161 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 20 02:37:02.890780 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 20 02:37:02.890888 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 20 02:37:02.891138 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 20 02:37:02.891388 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 20 02:37:02.891507 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 20 02:37:02.891572 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 20 02:37:02.891952 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 20 02:37:02.892067 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 20 02:37:02.907683 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 20 02:37:02.907843 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 20 02:37:03.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:03.342931 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 20 02:37:03.343646 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 20 02:37:03.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:03.489088 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 20 02:37:03.489761 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 20 02:37:03.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:03.630416 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 20 02:37:03.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:03.685000 audit: BPF prog-id=6 op=UNLOAD
Jan 20 02:37:03.715000 audit: BPF prog-id=9 op=UNLOAD
Jan 20 02:37:03.630800 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 20 02:37:03.696603 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 20 02:37:03.760064 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 20 02:37:03.760150 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 02:37:03.852120 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 20 02:37:03.874544 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 20 02:37:03.874694 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 02:37:03.936499 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 02:37:03.936634 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 20 02:37:04.127511 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 02:37:03.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:04.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:04.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:04.127758 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 20 02:37:04.181339 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 02:37:04.374596 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 20 02:37:04.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:04.383691 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 02:37:04.540119 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 20 02:37:04.543525 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 20 02:37:04.679106 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 20 02:37:04.683659 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 02:37:04.865984 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 20 02:37:04.866189 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 02:37:04.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:04.947074 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 20 02:37:04.955487 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 20 02:37:05.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:05.068876 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 20 02:37:05.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:05.069084 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 02:37:05.263936 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 20 02:37:05.637750 kernel: kauditd_printk_skb: 17 callbacks suppressed
Jan 20 02:37:05.637804 kernel: audit: type=1131 audit(1768876625.397:77): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:05.637827 kernel: audit: type=1131 audit(1768876625.519:78): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:05.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:05.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:05.335105 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 20 02:37:05.902668 kernel: audit: type=1131 audit(1768876625.519:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:05.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:05.336878 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 02:37:05.400821 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 20 02:37:05.401026 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 02:37:05.519756 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 02:37:05.519884 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 02:37:06.250944 kernel: audit: type=1131 audit(1768876626.159:80): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:06.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:05.533146 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 20 02:37:05.649805 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 20 02:37:06.461715 kernel: audit: type=1130 audit(1768876626.326:81): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:06.461761 kernel: audit: type=1131 audit(1768876626.326:82): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:06.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:06.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:06.163681 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 20 02:37:06.163854 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 20 02:37:06.338810 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 20 02:37:06.634885 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 20 02:37:06.774440 systemd[1]: Switching root.
Jan 20 02:37:07.019845 systemd-journald[320]: Journal stopped
Jan 20 02:37:20.617156 systemd-journald[320]: Received SIGTERM from PID 1 (systemd).
Jan 20 02:37:20.617454 kernel: audit: type=1335 audit(1768876627.057:83): pid=320 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=kernel comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" nl-mcgrp=1 op=disconnect res=1
Jan 20 02:37:20.617497 kernel: SELinux: policy capability network_peer_controls=1
Jan 20 02:37:20.617515 kernel: SELinux: policy capability open_perms=1
Jan 20 02:37:20.617532 kernel: SELinux: policy capability extended_socket_class=1
Jan 20 02:37:20.617558 kernel: SELinux: policy capability always_check_network=0
Jan 20 02:37:20.617575 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 20 02:37:20.617590 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 20 02:37:20.617610 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 20 02:37:20.617627 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 20 02:37:20.617644 kernel: SELinux: policy capability userspace_initial_context=0
Jan 20 02:37:20.617661 kernel: audit: type=1403 audit(1768876628.395:84): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 20 02:37:20.617678 systemd[1]: Successfully loaded SELinux policy in 596.395ms.
Jan 20 02:37:20.617714 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 28.286ms.
Jan 20 02:37:20.617733 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 20 02:37:20.617754 systemd[1]: Detected virtualization kvm.
Jan 20 02:37:20.617772 systemd[1]: Detected architecture x86-64.
Jan 20 02:37:20.617788 systemd[1]: Detected first boot.
Jan 20 02:37:20.617808 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 20 02:37:20.617824 kernel: audit: type=1334 audit(1768876628.975:85): prog-id=10 op=LOAD
Jan 20 02:37:20.617840 kernel: audit: type=1334 audit(1768876628.976:86): prog-id=10 op=UNLOAD
Jan 20 02:37:20.617856 zram_generator::config[1152]: No configuration found.
Jan 20 02:37:20.617878 kernel: Guest personality initialized and is inactive
Jan 20 02:37:20.617894 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 20 02:37:20.617910 kernel: Initialized host personality
Jan 20 02:37:20.617926 kernel: NET: Registered PF_VSOCK protocol family
Jan 20 02:37:20.617944 systemd[1]: Populated /etc with preset unit settings.
Jan 20 02:37:20.617963 kernel: kauditd_printk_skb: 2 callbacks suppressed
Jan 20 02:37:20.617980 kernel: audit: type=1334 audit(1768876634.282:89): prog-id=12 op=LOAD
Jan 20 02:37:20.618001 kernel: audit: type=1334 audit(1768876634.286:90): prog-id=3 op=UNLOAD
Jan 20 02:37:20.618020 kernel: audit: type=1334 audit(1768876634.286:91): prog-id=13 op=LOAD
Jan 20 02:37:20.618038 kernel: audit: type=1334 audit(1768876634.286:92): prog-id=14 op=LOAD
Jan 20 02:37:20.618053 kernel: audit: type=1334 audit(1768876634.286:93): prog-id=4 op=UNLOAD
Jan 20 02:37:20.618069 kernel: audit: type=1334 audit(1768876634.286:94): prog-id=5 op=UNLOAD
Jan 20 02:37:20.618086 kernel: audit: type=1131 audit(1768876634.299:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:20.618103 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 20 02:37:20.618124 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 20 02:37:20.618141 kernel: audit: type=1130 audit(1768876634.565:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:20.618158 kernel: audit: type=1131 audit(1768876634.565:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:20.618176 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 20 02:37:20.618380 kernel: audit: type=1334 audit(1768876634.773:98): prog-id=12 op=UNLOAD
Jan 20 02:37:20.618416 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 20 02:37:20.618438 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 20 02:37:20.618456 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 20 02:37:20.618473 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 20 02:37:20.618491 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 20 02:37:20.618510 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 20 02:37:20.618527 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 20 02:37:20.618551 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 20 02:37:20.618569 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 02:37:20.618587 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 02:37:20.618605 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 20 02:37:20.618625 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 20 02:37:20.618643 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 20 02:37:20.618659 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 02:37:20.618681 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 20 02:37:20.618698 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 02:37:20.618715 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 02:37:20.618732 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 20 02:37:20.618749 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 20 02:37:20.618768 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 20 02:37:20.618787 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 20 02:37:20.618811 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 02:37:20.618833 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 02:37:20.618852 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Jan 20 02:37:20.618869 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 02:37:20.618886 systemd[1]: Reached target swap.target - Swaps.
Jan 20 02:37:20.618903 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 20 02:37:20.618921 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 20 02:37:20.618943 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 20 02:37:20.618960 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 20 02:37:20.618978 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Jan 20 02:37:20.618999 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 02:37:20.619016 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Jan 20 02:37:20.619033 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Jan 20 02:37:20.619049 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 02:37:20.619070 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 02:37:20.619087 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 20 02:37:20.619104 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 20 02:37:20.619121 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 20 02:37:20.619138 systemd[1]: Mounting media.mount - External Media Directory...
Jan 20 02:37:20.619157 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:37:20.619180 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 20 02:37:20.619395 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 20 02:37:20.619416 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 20 02:37:20.619437 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 20 02:37:20.619458 systemd[1]: Reached target machines.target - Containers.
Jan 20 02:37:20.619479 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 20 02:37:20.619496 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 02:37:20.619513 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 02:37:20.619547 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 20 02:37:20.619566 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 02:37:20.619583 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 02:37:20.619601 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 02:37:20.619622 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 20 02:37:20.619640 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 02:37:20.619658 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 20 02:37:20.619675 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 20 02:37:20.619693 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 20 02:37:20.619713 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 20 02:37:20.619733 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 20 02:37:20.619751 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 02:37:20.619772 kernel: kauditd_printk_skb: 3 callbacks suppressed
Jan 20 02:37:20.619791 kernel: audit: type=1334 audit(1768876639.288:102): prog-id=14 op=UNLOAD
Jan 20 02:37:20.619807 kernel: audit: type=1334 audit(1768876639.297:103): prog-id=13 op=UNLOAD
Jan 20 02:37:20.619823 kernel: audit: type=1334 audit(1768876639.337:104): prog-id=15 op=LOAD
Jan 20 02:37:20.619839 kernel: audit: type=1334 audit(1768876639.463:105): prog-id=16 op=LOAD
Jan 20 02:37:20.619859 kernel: audit: type=1334 audit(1768876639.519:106): prog-id=17 op=LOAD
Jan 20 02:37:20.619876 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 02:37:20.619893 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 02:37:20.619910 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 20 02:37:20.619933 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 20 02:37:20.619953 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 20 02:37:20.619970 kernel: fuse: init (API version 7.41)
Jan 20 02:37:20.619987 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 02:37:20.620047 systemd-journald[1239]: Collecting audit messages is enabled.
Jan 20 02:37:20.620084 kernel: audit: type=1305 audit(1768876640.611:107): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 20 02:37:20.620110 kernel: audit: type=1300 audit(1768876640.611:107): arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffdbaea2010 a2=4000 a3=0 items=0 ppid=1 pid=1239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:37:20.620128 kernel: audit: type=1327 audit(1768876640.611:107): proctitle="/usr/lib/systemd/systemd-journald"
Jan 20 02:37:20.620144 systemd-journald[1239]: Journal started
Jan 20 02:37:20.620178 systemd-journald[1239]: Runtime Journal (/run/log/journal/b1a04432d6834c4597750e169fd2bb58) is 6M, max 48.2M, 42.1M free.
Jan 20 02:37:16.342000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jan 20 02:37:18.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:19.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:19.288000 audit: BPF prog-id=14 op=UNLOAD
Jan 20 02:37:19.297000 audit: BPF prog-id=13 op=UNLOAD
Jan 20 02:37:19.337000 audit: BPF prog-id=15 op=LOAD
Jan 20 02:37:19.463000 audit: BPF prog-id=16 op=LOAD
Jan 20 02:37:19.519000 audit: BPF prog-id=17 op=LOAD
Jan 20 02:37:20.611000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 20 02:37:20.611000 audit[1239]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffdbaea2010 a2=4000 a3=0 items=0 ppid=1 pid=1239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:37:20.611000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jan 20 02:37:14.208125 systemd[1]: Queued start job for default target multi-user.target.
Jan 20 02:37:14.292188 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 20 02:37:14.305115 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 20 02:37:14.306003 systemd[1]: systemd-journald.service: Consumed 4.533s CPU time.
Jan 20 02:37:20.848549 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:37:20.942713 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 02:37:20.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:21.032568 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 20 02:37:21.171731 kernel: audit: type=1130 audit(1768876640.998:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:21.237597 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 20 02:37:21.310113 systemd[1]: Mounted media.mount - External Media Directory.
Jan 20 02:37:21.367010 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 20 02:37:21.424941 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 20 02:37:21.472997 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 20 02:37:21.547741 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 20 02:37:21.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:21.695875 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 02:37:21.772545 kernel: audit: type=1130 audit(1768876641.661:109): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:21.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:21.802994 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 02:37:21.807973 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 20 02:37:21.838142 kernel: ACPI: bus type drm_connector registered
Jan 20 02:37:21.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:21.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:21.909988 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 02:37:21.910700 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 02:37:21.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:21.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:21.952702 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 02:37:21.953113 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 02:37:21.975097 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 02:37:21.975717 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 02:37:21.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:21.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:22.016012 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 20 02:37:22.016661 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 20 02:37:22.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:22.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:22.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:22.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:22.070189 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 02:37:22.070918 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 02:37:22.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:22.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:22.113424 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 02:37:22.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:22.157157 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 02:37:22.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:22.185750 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 20 02:37:22.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:22.208041 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 20 02:37:22.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:22.235658 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 02:37:22.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:22.332539 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 02:37:22.376980 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 20 02:37:22.427789 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 20 02:37:22.494781 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 20 02:37:22.546688 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 20 02:37:22.546762 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 02:37:22.636041 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 20 02:37:22.732105 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 02:37:22.739712 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 20 02:37:22.775811 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 20 02:37:22.868527 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 20 02:37:22.932873 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 02:37:22.951975 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 20 02:37:23.019427 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 02:37:23.069955 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 02:37:23.164639 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 20 02:37:23.204804 systemd-journald[1239]: Time spent on flushing to /var/log/journal/b1a04432d6834c4597750e169fd2bb58 is 465.093ms for 1166 entries.
Jan 20 02:37:23.204804 systemd-journald[1239]: System Journal (/var/log/journal/b1a04432d6834c4597750e169fd2bb58) is 8M, max 163.5M, 155.5M free.
Jan 20 02:37:23.779619 systemd-journald[1239]: Received client request to flush runtime journal.
Jan 20 02:37:23.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:23.295913 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 20 02:37:23.447865 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 20 02:37:23.537173 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 20 02:37:23.677833 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 20 02:37:23.809838 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 20 02:37:23.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:23.904091 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 20 02:37:23.984036 kernel: loop1: detected capacity change from 0 to 111560
Jan 20 02:37:23.984087 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 20 02:37:24.064807 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 02:37:24.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:24.452567 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 20 02:37:24.552993 kernel: kauditd_printk_skb: 21 callbacks suppressed
Jan 20 02:37:24.553150 kernel: audit: type=1130 audit(1768876644.490:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:24.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:24.573733 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 20 02:37:24.531000 audit: BPF prog-id=18 op=LOAD
Jan 20 02:37:24.543000 audit: BPF prog-id=19 op=LOAD
Jan 20 02:37:24.543000 audit: BPF prog-id=20 op=LOAD
Jan 20 02:37:24.681552 kernel: audit: type=1334 audit(1768876644.531:132): prog-id=18 op=LOAD
Jan 20 02:37:24.681656 kernel: audit: type=1334 audit(1768876644.543:133): prog-id=19 op=LOAD
Jan 20 02:37:24.681693 kernel: audit: type=1334 audit(1768876644.543:134): prog-id=20 op=LOAD
Jan 20 02:37:24.881000 audit: BPF prog-id=21 op=LOAD
Jan 20 02:37:24.901090 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 02:37:24.941455 kernel: audit: type=1334 audit(1768876644.881:135): prog-id=21 op=LOAD
Jan 20 02:37:24.941587 kernel: loop2: detected capacity change from 0 to 50784
Jan 20 02:37:24.999736 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 02:37:25.041938 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 20 02:37:25.054585 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 20 02:37:25.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:25.146023 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 20 02:37:25.123000 audit: BPF prog-id=22 op=LOAD
Jan 20 02:37:25.200502 kernel: audit: type=1130 audit(1768876645.093:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:25.200557 kernel: audit: type=1334 audit(1768876645.123:137): prog-id=22 op=LOAD
Jan 20 02:37:25.123000 audit: BPF prog-id=23 op=LOAD
Jan 20 02:37:25.123000 audit: BPF prog-id=24 op=LOAD
Jan 20 02:37:25.204524 kernel: audit: type=1334 audit(1768876645.123:138): prog-id=23 op=LOAD
Jan 20 02:37:25.204670 kernel: audit: type=1334 audit(1768876645.123:139): prog-id=24 op=LOAD
Jan 20 02:37:25.356590 kernel: audit: type=1334 audit(1768876645.314:140): prog-id=25 op=LOAD
Jan 20 02:37:25.314000 audit: BPF prog-id=25 op=LOAD
Jan 20 02:37:25.321906 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 20 02:37:25.316000 audit: BPF prog-id=26 op=LOAD
Jan 20 02:37:25.316000 audit: BPF prog-id=27 op=LOAD
Jan 20 02:37:25.471112 kernel: loop3: detected capacity change from 0 to 229808
Jan 20 02:37:25.637087 systemd-tmpfiles[1291]: ACLs are not supported, ignoring.
Jan 20 02:37:25.643749 systemd-tmpfiles[1291]: ACLs are not supported, ignoring.
Jan 20 02:37:25.756565 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 02:37:25.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:26.038518 systemd-nsresourced[1293]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 20 02:37:26.066958 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 20 02:37:26.207963 kernel: loop4: detected capacity change from 0 to 111560
Jan 20 02:37:26.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:26.406527 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 20 02:37:26.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:26.625457 kernel: loop5: detected capacity change from 0 to 50784
Jan 20 02:37:26.840582 kernel: loop6: detected capacity change from 0 to 229808
Jan 20 02:37:27.188063 (sd-merge)[1302]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Jan 20 02:37:27.260538 (sd-merge)[1302]: Merged extensions into '/usr'.
Jan 20 02:37:27.372758 systemd[1]: Reload requested from client PID 1274 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 20 02:37:27.372850 systemd[1]: Reloading...
Jan 20 02:37:27.547556 systemd-oomd[1289]: No swap; memory pressure usage will be degraded
Jan 20 02:37:27.583992 systemd-resolved[1290]: Positive Trust Anchors:
Jan 20 02:37:27.584086 systemd-resolved[1290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 02:37:27.584094 systemd-resolved[1290]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 20 02:37:27.584141 systemd-resolved[1290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 02:37:27.631841 systemd-resolved[1290]: Defaulting to hostname 'linux'.
Jan 20 02:37:27.850615 zram_generator::config[1341]: No configuration found.
Jan 20 02:37:29.265865 systemd[1]: Reloading finished in 1883 ms.
Jan 20 02:37:29.360069 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 20 02:37:29.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:29.403114 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 02:37:29.447755 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 20 02:37:29.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:29.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:29.502517 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 20 02:37:29.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:29.584104 kernel: kauditd_printk_skb: 8 callbacks suppressed
Jan 20 02:37:29.584562 kernel: audit: type=1130 audit(1768876649.552:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:29.610677 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 02:37:29.725030 systemd[1]: Starting ensure-sysext.service...
Jan 20 02:37:29.767613 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 02:37:29.836000 audit: BPF prog-id=8 op=UNLOAD
Jan 20 02:37:29.836000 audit: BPF prog-id=7 op=UNLOAD
Jan 20 02:37:29.887687 kernel: audit: type=1334 audit(1768876649.836:150): prog-id=8 op=UNLOAD
Jan 20 02:37:29.887792 kernel: audit: type=1334 audit(1768876649.836:151): prog-id=7 op=UNLOAD
Jan 20 02:37:29.907858 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 02:37:29.934962 kernel: audit: type=1334 audit(1768876649.893:152): prog-id=28 op=LOAD
Jan 20 02:37:29.893000 audit: BPF prog-id=28 op=LOAD
Jan 20 02:37:29.970565 kernel: audit: type=1334 audit(1768876649.893:153): prog-id=29 op=LOAD
Jan 20 02:37:29.893000 audit: BPF prog-id=29 op=LOAD
Jan 20 02:37:30.020000 audit: BPF prog-id=30 op=LOAD
Jan 20 02:37:30.020000 audit: BPF prog-id=22 op=UNLOAD
Jan 20 02:37:30.020000 audit: BPF prog-id=31 op=LOAD
Jan 20 02:37:30.020000 audit: BPF prog-id=32 op=LOAD
Jan 20 02:37:30.103690 kernel: audit: type=1334 audit(1768876650.020:154): prog-id=30 op=LOAD
Jan 20 02:37:30.103782 kernel: audit: type=1334 audit(1768876650.020:155): prog-id=22 op=UNLOAD
Jan 20 02:37:30.103835 kernel: audit: type=1334 audit(1768876650.020:156): prog-id=31 op=LOAD
Jan 20 02:37:30.103863 kernel: audit: type=1334 audit(1768876650.020:157): prog-id=32 op=LOAD
Jan 20 02:37:30.103884 kernel: audit: type=1334 audit(1768876650.020:158): prog-id=23 op=UNLOAD
Jan 20 02:37:30.020000 audit: BPF prog-id=23 op=UNLOAD
Jan 20 02:37:30.020000 audit: BPF prog-id=24 op=UNLOAD
Jan 20 02:37:30.040000 audit: BPF prog-id=33 op=LOAD
Jan 20 02:37:30.040000 audit: BPF prog-id=18 op=UNLOAD
Jan 20 02:37:30.040000 audit: BPF prog-id=34 op=LOAD
Jan 20 02:37:30.040000 audit: BPF prog-id=35 op=LOAD
Jan 20 02:37:30.040000 audit: BPF prog-id=19 op=UNLOAD
Jan 20 02:37:30.040000 audit: BPF prog-id=20 op=UNLOAD
Jan 20 02:37:30.040000 audit: BPF prog-id=36 op=LOAD
Jan 20 02:37:30.040000 audit: BPF prog-id=25 op=UNLOAD
Jan 20 02:37:30.040000 audit: BPF prog-id=37 op=LOAD
Jan 20 02:37:30.045000 audit: BPF prog-id=38 op=LOAD
Jan 20 02:37:30.045000 audit: BPF prog-id=26 op=UNLOAD
Jan 20 02:37:30.045000 audit: BPF prog-id=27 op=UNLOAD
Jan 20 02:37:30.112000 audit: BPF prog-id=39 op=LOAD
Jan 20 02:37:30.112000 audit: BPF prog-id=15 op=UNLOAD
Jan 20 02:37:30.112000 audit: BPF prog-id=40 op=LOAD
Jan 20 02:37:30.112000 audit: BPF prog-id=41 op=LOAD
Jan 20 02:37:30.112000 audit: BPF prog-id=16 op=UNLOAD
Jan 20 02:37:30.112000 audit: BPF prog-id=17 op=UNLOAD
Jan 20 02:37:30.135585 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 20 02:37:30.135726 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 20 02:37:30.136000 audit: BPF prog-id=42 op=LOAD
Jan 20 02:37:30.151000 audit: BPF prog-id=21 op=UNLOAD
Jan 20 02:37:30.149104 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 20 02:37:30.166156 systemd-tmpfiles[1382]: ACLs are not supported, ignoring.
Jan 20 02:37:30.171916 systemd-tmpfiles[1382]: ACLs are not supported, ignoring.
Jan 20 02:37:30.320573 systemd[1]: Reload requested from client PID 1381 ('systemctl') (unit ensure-sysext.service)...
Jan 20 02:37:30.320615 systemd[1]: Reloading...
Jan 20 02:37:30.354806 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 02:37:30.354818 systemd-tmpfiles[1382]: Skipping /boot
Jan 20 02:37:30.649723 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 02:37:30.649919 systemd-tmpfiles[1382]: Skipping /boot
Jan 20 02:37:30.913723 systemd-udevd[1383]: Using default interface naming scheme 'v257'.
Jan 20 02:37:31.319059 zram_generator::config[1414]: No configuration found.
Jan 20 02:37:32.741709 kernel: mousedev: PS/2 mouse device common for all mice
Jan 20 02:37:33.280536 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 20 02:37:33.311399 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 20 02:37:33.476663 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 02:37:33.535070 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 20 02:37:33.619151 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 20 02:37:33.622777 systemd[1]: Reloading finished in 3301 ms.
Jan 20 02:37:33.697947 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 02:37:33.794738 kernel: ACPI: button: Power Button [PWRF]
Jan 20 02:37:33.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:33.871000 audit: BPF prog-id=43 op=LOAD
Jan 20 02:37:33.871000 audit: BPF prog-id=36 op=UNLOAD
Jan 20 02:37:33.871000 audit: BPF prog-id=44 op=LOAD
Jan 20 02:37:33.871000 audit: BPF prog-id=45 op=LOAD
Jan 20 02:37:33.871000 audit: BPF prog-id=37 op=UNLOAD
Jan 20 02:37:33.871000 audit: BPF prog-id=38 op=UNLOAD
Jan 20 02:37:33.916000 audit: BPF prog-id=46 op=LOAD
Jan 20 02:37:33.916000 audit: BPF prog-id=39 op=UNLOAD
Jan 20 02:37:33.916000 audit: BPF prog-id=47 op=LOAD
Jan 20 02:37:33.916000 audit: BPF prog-id=48 op=LOAD
Jan 20 02:37:33.916000 audit: BPF prog-id=40 op=UNLOAD
Jan 20 02:37:33.916000 audit: BPF prog-id=41 op=UNLOAD
Jan 20 02:37:33.979000 audit: BPF prog-id=49 op=LOAD
Jan 20 02:37:33.979000 audit: BPF prog-id=33 op=UNLOAD
Jan 20 02:37:33.979000 audit: BPF prog-id=50 op=LOAD
Jan 20 02:37:33.979000 audit: BPF prog-id=51 op=LOAD
Jan 20 02:37:33.979000 audit: BPF prog-id=34 op=UNLOAD
Jan 20 02:37:33.979000 audit: BPF prog-id=35 op=UNLOAD
Jan 20 02:37:34.039000 audit: BPF prog-id=52 op=LOAD
Jan 20 02:37:34.039000 audit: BPF prog-id=53 op=LOAD
Jan 20 02:37:34.039000 audit: BPF prog-id=28 op=UNLOAD
Jan 20 02:37:34.039000 audit: BPF prog-id=29 op=UNLOAD
Jan 20 02:37:34.048000 audit: BPF prog-id=54 op=LOAD
Jan 20 02:37:34.048000 audit: BPF prog-id=30 op=UNLOAD
Jan 20 02:37:34.048000 audit: BPF prog-id=55 op=LOAD
Jan 20 02:37:34.048000 audit: BPF prog-id=56 op=LOAD
Jan 20 02:37:34.048000 audit: BPF prog-id=31 op=UNLOAD
Jan 20 02:37:34.048000 audit: BPF prog-id=32 op=UNLOAD
Jan 20 02:37:34.167000 audit: BPF prog-id=57 op=LOAD
Jan 20 02:37:34.167000 audit: BPF prog-id=42 op=UNLOAD
Jan 20 02:37:34.318897 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 02:37:34.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:35.037547 systemd[1]: Finished ensure-sysext.service.
Jan 20 02:37:35.181373 kernel: kauditd_printk_skb: 53 callbacks suppressed
Jan 20 02:37:35.181542 kernel: audit: type=1130 audit(1768876655.083:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:35.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:35.417751 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:37:35.437840 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 20 02:37:35.489042 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 20 02:37:35.518102 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 02:37:35.542964 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 02:37:35.625183 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 02:37:35.702169 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 02:37:35.753911 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 02:37:35.771172 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 02:37:35.775409 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 20 02:37:35.853765 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 20 02:37:35.922014 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 20 02:37:35.964169 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 02:37:35.999907 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 20 02:37:36.038000 audit: BPF prog-id=58 op=LOAD
Jan 20 02:37:36.047003 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 02:37:36.070416 kernel: audit: type=1334 audit(1768876656.038:213): prog-id=58 op=LOAD
Jan 20 02:37:36.117000 audit: BPF prog-id=59 op=LOAD
Jan 20 02:37:36.130936 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 20 02:37:36.196114 kernel: audit: type=1334 audit(1768876656.117:214): prog-id=59 op=LOAD
Jan 20 02:37:36.229902 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 20 02:37:36.248032 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 02:37:36.264145 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:37:36.267696 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 02:37:36.300085 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 02:37:36.318579 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 02:37:36.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:36.319096 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 02:37:36.355719 kernel: audit: type=1130 audit(1768876656.317:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:36.355809 kernel: audit: type=1131 audit(1768876656.317:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:36.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:36.449163 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 02:37:36.452163 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 02:37:36.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:36.526425 kernel: audit: type=1130 audit(1768876656.428:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:36.530068 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 02:37:36.531568 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 02:37:36.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:36.570730 kernel: audit: type=1131 audit(1768876656.428:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:36.570854 kernel: audit: type=1127 audit(1768876656.444:219): pid=1523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:36.444000 audit[1523]: SYSTEM_BOOT pid=1523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:36.633702 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 20 02:37:36.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:36.673119 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 20 02:37:36.684635 augenrules[1529]: No rules
Jan 20 02:37:36.761646 kernel: audit: type=1130 audit(1768876656.525:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:36.761751 kernel: audit: type=1131 audit(1768876656.525:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:36.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:36.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:36.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:36.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:36.684000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jan 20 02:37:36.684000 audit[1529]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcd2781670 a2=420 a3=0 items=0 ppid=1495 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:37:36.684000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 20 02:37:36.783753 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 02:37:36.784950 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 20 02:37:36.792934 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 20 02:37:36.875828 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 20 02:37:36.904153 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 02:37:36.904922 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 02:37:36.904968 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 20 02:37:37.387544 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 20 02:37:37.556778 systemd-networkd[1517]: lo: Link UP
Jan 20 02:37:37.556858 systemd-networkd[1517]: lo: Gained carrier
Jan 20 02:37:37.562893 systemd-networkd[1517]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 20 02:37:37.562900 systemd-networkd[1517]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 02:37:37.566307 systemd-networkd[1517]: eth0: Link UP
Jan 20 02:37:37.568070 systemd-networkd[1517]: eth0: Gained carrier
Jan 20 02:37:37.568093 systemd-networkd[1517]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 20 02:37:37.719060 systemd-networkd[1517]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 02:37:37.731601 systemd-timesyncd[1519]: Network configuration changed, trying to establish connection.
Jan 20 02:37:37.748050 systemd-timesyncd[1519]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 20 02:37:37.748163 systemd-timesyncd[1519]: Initial clock synchronization to Tue 2026-01-20 02:37:37.887222 UTC.
Jan 20 02:37:39.087740 systemd-networkd[1517]: eth0: Gained IPv6LL
Jan 20 02:37:41.744765 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 02:37:41.749523 systemd[1]: Reached target network.target - Network.
Jan 20 02:37:41.817927 systemd[1]: Reached target time-set.target - System Time Set.
Jan 20 02:37:41.891758 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 20 02:37:41.923670 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 20 02:37:42.033671 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 02:37:42.226772 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 20 02:37:42.342598 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 20 02:37:42.501766 systemd[1]: Reached target network-online.target - Network is Online.
Jan 20 02:37:43.012645 ldconfig[1509]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 20 02:37:43.142967 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 20 02:37:43.535656 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 20 02:37:43.818668 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 20 02:37:43.881031 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 02:37:43.986063 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 20 02:37:44.096751 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 20 02:37:44.182979 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 20 02:37:44.260083 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 20 02:37:44.331428 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 20 02:37:44.381595 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Jan 20 02:37:44.426178 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Jan 20 02:37:44.487757 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 20 02:37:44.546065 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 20 02:37:44.546473 systemd[1]: Reached target paths.target - Path Units.
Jan 20 02:37:44.614472 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 02:37:44.686113 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 20 02:37:44.761556 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 20 02:37:44.826049 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 20 02:37:44.888863 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 20 02:37:44.956787 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 20 02:37:45.054983 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 20 02:37:45.137840 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 20 02:37:45.243562 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 20 02:37:45.319467 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 02:37:45.390428 systemd[1]: Reached target basic.target - Basic System.
Jan 20 02:37:45.446778 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 20 02:37:45.446921 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 20 02:37:45.472709 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 20 02:37:45.575824 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 20 02:37:45.757926 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 20 02:37:45.847687 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 20 02:37:45.946467 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 20 02:37:46.045826 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 20 02:37:46.118544 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 02:37:46.162591 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 20 02:37:46.209005 jq[1567]: false Jan 20 02:37:46.257808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:37:46.341753 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 02:37:46.379589 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing passwd entry cache Jan 20 02:37:46.380089 oslogin_cache_refresh[1569]: Refreshing passwd entry cache Jan 20 02:37:46.427449 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 02:37:46.495502 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting users, quitting Jan 20 02:37:46.495502 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 02:37:46.491831 oslogin_cache_refresh[1569]: Failure getting users, quitting Jan 20 02:37:46.491865 oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 02:37:46.520678 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing group entry cache Jan 20 02:37:46.515590 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 02:37:46.508493 oslogin_cache_refresh[1569]: Refreshing group entry cache Jan 20 02:37:46.535046 extend-filesystems[1568]: Found /dev/vda6 Jan 20 02:37:46.603029 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 20 02:37:46.615660 oslogin_cache_refresh[1569]: Failure getting groups, quitting Jan 20 02:37:46.641620 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting groups, quitting Jan 20 02:37:46.641620 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 02:37:46.646150 extend-filesystems[1568]: Found /dev/vda9 Jan 20 02:37:46.615685 oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 02:37:46.760955 extend-filesystems[1568]: Checking size of /dev/vda9 Jan 20 02:37:46.820127 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 02:37:46.863359 extend-filesystems[1568]: Resized partition /dev/vda9 Jan 20 02:37:47.020424 extend-filesystems[1585]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 02:37:47.309111 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 20 02:37:47.042890 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 02:37:47.190850 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 02:37:47.194949 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 02:37:47.238611 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 02:37:47.412161 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 02:37:47.784056 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 02:37:47.874178 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 02:37:47.879492 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jan 20 02:37:47.885463 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 20 02:37:47.889496 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 20 02:37:47.976900 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 02:37:47.981827 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 02:37:48.084071 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 02:37:48.084737 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 02:37:48.179457 jq[1596]: true Jan 20 02:37:48.189949 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 20 02:37:48.191730 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 02:37:48.311476 extend-filesystems[1585]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 02:37:48.311476 extend-filesystems[1585]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 02:37:48.311476 extend-filesystems[1585]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 20 02:37:48.366905 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 02:37:48.637747 update_engine[1593]: I20260120 02:37:48.379012 1593 main.cc:92] Flatcar Update Engine starting Jan 20 02:37:48.638185 extend-filesystems[1568]: Resized filesystem in /dev/vda9 Jan 20 02:37:48.376960 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 02:37:48.715723 jq[1612]: true Jan 20 02:37:48.808504 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 02:37:48.829751 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 02:37:48.980117 tar[1610]: linux-amd64/LICENSE Jan 20 02:37:48.980117 tar[1610]: linux-amd64/helm Jan 20 02:37:49.301078 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 20 02:37:49.727886 bash[1650]: Updated "/home/core/.ssh/authorized_keys" Jan 20 02:37:49.747499 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 02:37:49.793533 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 02:37:50.112789 dbus-daemon[1565]: [system] SELinux support is enabled Jan 20 02:37:50.147665 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 02:37:50.302815 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 02:37:50.326714 update_engine[1593]: I20260120 02:37:50.324182 1593 update_check_scheduler.cc:74] Next update check in 3m25s Jan 20 02:37:50.382439 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 02:37:50.382496 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 02:37:50.444800 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 02:37:50.460472 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 02:37:50.475813 sshd_keygen[1591]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 02:37:50.614997 systemd-logind[1588]: Watching system buttons on /dev/input/event2 (Power Button) Jan 20 02:37:50.627437 systemd[1]: Started update-engine.service - Update Engine. Jan 20 02:37:50.633178 systemd-logind[1588]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 02:37:50.633679 systemd-logind[1588]: New seat seat0. Jan 20 02:37:50.691451 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 20 02:37:50.900170 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 02:37:51.009952 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 02:37:51.162908 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 02:37:51.254138 systemd[1]: Started sshd@0-10.0.0.117:22-10.0.0.1:57490.service - OpenSSH per-connection server daemon (10.0.0.1:57490). Jan 20 02:37:51.591362 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 02:37:51.593691 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 02:37:51.774869 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 02:37:52.195162 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 02:37:52.255974 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 02:37:52.337660 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 02:37:52.358806 locksmithd[1664]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 02:37:52.376713 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 20 02:37:52.990709 containerd[1621]: time="2026-01-20T02:37:52Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 02:37:53.029551 containerd[1621]: time="2026-01-20T02:37:53.023446289Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 20 02:37:53.273079 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 57490 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:37:53.295585 containerd[1621]: time="2026-01-20T02:37:53.290407015Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.62µs" Jan 20 02:37:53.295808 containerd[1621]: time="2026-01-20T02:37:53.295767159Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 02:37:53.295961 containerd[1621]: time="2026-01-20T02:37:53.295935537Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 02:37:53.296167 containerd[1621]: time="2026-01-20T02:37:53.296134554Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 02:37:53.296796 containerd[1621]: time="2026-01-20T02:37:53.296762452Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 02:37:53.296887 containerd[1621]: time="2026-01-20T02:37:53.296869444Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 02:37:53.297060 containerd[1621]: time="2026-01-20T02:37:53.297035832Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 02:37:53.297133 containerd[1621]: 
time="2026-01-20T02:37:53.297117332Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 02:37:53.297797 containerd[1621]: time="2026-01-20T02:37:53.297770069Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 02:37:53.297878 containerd[1621]: time="2026-01-20T02:37:53.297858897Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 02:37:53.297969 containerd[1621]: time="2026-01-20T02:37:53.297940639Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 02:37:53.298056 containerd[1621]: time="2026-01-20T02:37:53.298031758Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 02:37:53.377718 containerd[1621]: time="2026-01-20T02:37:53.372694745Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 02:37:53.377718 containerd[1621]: time="2026-01-20T02:37:53.372850458Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 02:37:53.377718 containerd[1621]: time="2026-01-20T02:37:53.373066703Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 02:37:53.420387 containerd[1621]: time="2026-01-20T02:37:53.419691477Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 02:37:53.420387 containerd[1621]: time="2026-01-20T02:37:53.420089902Z" level=info msg="skip 
loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 02:37:53.420387 containerd[1621]: time="2026-01-20T02:37:53.420120670Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 02:37:53.428786 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:37:53.435421 containerd[1621]: time="2026-01-20T02:37:53.434434875Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 02:37:53.435421 containerd[1621]: time="2026-01-20T02:37:53.434728579Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 02:37:53.435421 containerd[1621]: time="2026-01-20T02:37:53.434865094Z" level=info msg="metadata content store policy set" policy=shared Jan 20 02:37:53.579552 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 20 02:37:53.736554 containerd[1621]: time="2026-01-20T02:37:53.638760566Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 02:37:53.736554 containerd[1621]: time="2026-01-20T02:37:53.638945540Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 02:37:53.736554 containerd[1621]: time="2026-01-20T02:37:53.639076596Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 02:37:53.736554 containerd[1621]: time="2026-01-20T02:37:53.639100871Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 02:37:53.736554 containerd[1621]: time="2026-01-20T02:37:53.639121115Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 02:37:53.736554 containerd[1621]: time="2026-01-20T02:37:53.639138133Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 02:37:53.736554 containerd[1621]: time="2026-01-20T02:37:53.639154146Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 02:37:53.736554 containerd[1621]: time="2026-01-20T02:37:53.639168078Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 02:37:53.736554 containerd[1621]: time="2026-01-20T02:37:53.639185588Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 02:37:53.736554 containerd[1621]: time="2026-01-20T02:37:53.639435456Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 02:37:53.736554 containerd[1621]: time="2026-01-20T02:37:53.639458043Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 02:37:53.736554 containerd[1621]: time="2026-01-20T02:37:53.639474076Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 02:37:53.736554 containerd[1621]: time="2026-01-20T02:37:53.639491003Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 02:37:53.736554 containerd[1621]: time="2026-01-20T02:37:53.639507840Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 02:37:53.651103 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 02:37:53.741855 containerd[1621]: time="2026-01-20T02:37:53.639835912Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 02:37:53.741855 containerd[1621]: time="2026-01-20T02:37:53.639865053Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 02:37:53.741855 containerd[1621]: time="2026-01-20T02:37:53.639884774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 02:37:53.741855 containerd[1621]: time="2026-01-20T02:37:53.639898826Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 02:37:53.741855 containerd[1621]: time="2026-01-20T02:37:53.639912145Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 02:37:53.741855 containerd[1621]: time="2026-01-20T02:37:53.639924248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 02:37:53.741855 containerd[1621]: time="2026-01-20T02:37:53.639939838Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 02:37:53.741855 containerd[1621]: 
time="2026-01-20T02:37:53.639956302Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 02:37:53.741855 containerd[1621]: time="2026-01-20T02:37:53.639970375Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 02:37:53.741855 containerd[1621]: time="2026-01-20T02:37:53.639985724Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 02:37:53.741855 containerd[1621]: time="2026-01-20T02:37:53.640005467Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 02:37:53.741855 containerd[1621]: time="2026-01-20T02:37:53.640039100Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 02:37:53.741855 containerd[1621]: time="2026-01-20T02:37:53.640116740Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 02:37:53.741855 containerd[1621]: time="2026-01-20T02:37:53.640146796Z" level=info msg="Start snapshots syncer" Jan 20 02:37:53.741855 containerd[1621]: time="2026-01-20T02:37:53.640172880Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 02:37:53.750440 containerd[1621]: time="2026-01-20T02:37:53.750132948Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 02:37:53.790793 containerd[1621]: time="2026-01-20T02:37:53.750795747Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 02:37:53.790793 containerd[1621]: 
time="2026-01-20T02:37:53.750874644Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 02:37:53.790793 containerd[1621]: time="2026-01-20T02:37:53.768191474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 02:37:53.790793 containerd[1621]: time="2026-01-20T02:37:53.768453626Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 02:37:53.790793 containerd[1621]: time="2026-01-20T02:37:53.768476363Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 02:37:53.790793 containerd[1621]: time="2026-01-20T02:37:53.768494970Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 02:37:53.790793 containerd[1621]: time="2026-01-20T02:37:53.768515183Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 02:37:53.790793 containerd[1621]: time="2026-01-20T02:37:53.768530422Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 02:37:53.790793 containerd[1621]: time="2026-01-20T02:37:53.768545851Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 02:37:53.790793 containerd[1621]: time="2026-01-20T02:37:53.768561642Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 02:37:53.790793 containerd[1621]: time="2026-01-20T02:37:53.768581344Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 02:37:53.790793 containerd[1621]: time="2026-01-20T02:37:53.768631071Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 02:37:53.790793 containerd[1621]: 
time="2026-01-20T02:37:53.768654150Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 02:37:53.790793 containerd[1621]: time="2026-01-20T02:37:53.768667760Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 02:37:53.791653 containerd[1621]: time="2026-01-20T02:37:53.768681190Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 02:37:53.791653 containerd[1621]: time="2026-01-20T02:37:53.768695836Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 02:37:53.791653 containerd[1621]: time="2026-01-20T02:37:53.768710772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 02:37:53.791653 containerd[1621]: time="2026-01-20T02:37:53.768726865Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 02:37:53.791653 containerd[1621]: time="2026-01-20T02:37:53.768761202Z" level=info msg="runtime interface created" Jan 20 02:37:53.791653 containerd[1621]: time="2026-01-20T02:37:53.768772812Z" level=info msg="created NRI interface" Jan 20 02:37:53.791653 containerd[1621]: time="2026-01-20T02:37:53.768788814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 02:37:53.791653 containerd[1621]: time="2026-01-20T02:37:53.768812688Z" level=info msg="Connect containerd service" Jan 20 02:37:53.791653 containerd[1621]: time="2026-01-20T02:37:53.768849778Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 02:37:53.791653 containerd[1621]: time="2026-01-20T02:37:53.774679224Z" level=error msg="failed to load cni during init, please check CRI plugin status 
before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 02:37:54.122580 systemd-logind[1588]: New session 1 of user core. Jan 20 02:37:54.635608 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 02:37:54.834829 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 02:37:55.430145 (systemd)[1697]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:37:55.594051 systemd-logind[1588]: New session 2 of user core. Jan 20 02:37:55.771994 tar[1610]: linux-amd64/README.md Jan 20 02:37:56.998666 containerd[1621]: time="2026-01-20T02:37:56.996451385Z" level=info msg="Start subscribing containerd event" Jan 20 02:37:57.042177 containerd[1621]: time="2026-01-20T02:37:57.041827531Z" level=info msg="Start recovering state" Jan 20 02:37:57.154420 containerd[1621]: time="2026-01-20T02:37:57.091592638Z" level=info msg="Start event monitor" Jan 20 02:37:57.154420 containerd[1621]: time="2026-01-20T02:37:57.091636100Z" level=info msg="Start cni network conf syncer for default" Jan 20 02:37:57.154420 containerd[1621]: time="2026-01-20T02:37:57.091657001Z" level=info msg="Start streaming server" Jan 20 02:37:57.154420 containerd[1621]: time="2026-01-20T02:37:57.091667870Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 02:37:57.154420 containerd[1621]: time="2026-01-20T02:37:57.091678426Z" level=info msg="runtime interface starting up..." Jan 20 02:37:57.154420 containerd[1621]: time="2026-01-20T02:37:57.091686763Z" level=info msg="starting plugins..." Jan 20 02:37:57.154420 containerd[1621]: time="2026-01-20T02:37:57.091707454Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 02:37:57.163493 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 20 02:37:57.342174 containerd[1621]: time="2026-01-20T02:37:57.200859035Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 02:37:57.342174 containerd[1621]: time="2026-01-20T02:37:57.201436009Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 02:37:57.342174 containerd[1621]: time="2026-01-20T02:37:57.201602494Z" level=info msg="containerd successfully booted in 4.217942s" Jan 20 02:37:57.395939 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 02:38:01.281671 systemd[1697]: Queued start job for default target default.target. Jan 20 02:38:01.527848 systemd[1697]: Created slice app.slice - User Application Slice. Jan 20 02:38:01.530352 systemd[1697]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 20 02:38:01.530386 systemd[1697]: Reached target paths.target - Paths. Jan 20 02:38:01.530494 systemd[1697]: Reached target timers.target - Timers. Jan 20 02:38:01.613926 systemd[1697]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 02:38:01.798135 systemd[1697]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 20 02:38:02.587587 systemd[1697]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 20 02:38:02.776759 systemd[1697]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 02:38:02.777085 systemd[1697]: Reached target sockets.target - Sockets. Jan 20 02:38:02.777162 systemd[1697]: Reached target basic.target - Basic System. Jan 20 02:38:02.789760 systemd[1697]: Reached target default.target - Main User Target. Jan 20 02:38:02.789835 systemd[1697]: Startup finished in 6.847s. Jan 20 02:38:02.794709 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 02:38:03.279695 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jan 20 02:38:04.843685 systemd[1]: Started sshd@1-10.0.0.117:22-10.0.0.1:39530.service - OpenSSH per-connection server daemon (10.0.0.1:39530). Jan 20 02:38:14.683644 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 8938811998 wd_nsec: 8938808297 Jan 20 02:38:18.656804 kernel: kvm_amd: TSC scaling supported Jan 20 02:38:18.675530 kernel: kvm_amd: Nested Virtualization enabled Jan 20 02:38:18.728722 kernel: kvm_amd: Nested Paging enabled Jan 20 02:38:18.759424 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 20 02:38:18.759809 kernel: kvm_amd: PMU virtualization is disabled Jan 20 02:38:18.993758 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 39530 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:38:19.062497 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:38:19.178380 systemd-logind[1588]: New session 3 of user core. Jan 20 02:38:19.370834 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 02:38:20.581490 sshd[1728]: Connection closed by 10.0.0.1 port 39530 Jan 20 02:38:20.616424 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Jan 20 02:38:20.867075 systemd[1]: Started sshd@2-10.0.0.117:22-10.0.0.1:54012.service - OpenSSH per-connection server daemon (10.0.0.1:54012). Jan 20 02:38:20.907778 systemd[1]: sshd@1-10.0.0.117:22-10.0.0.1:39530.service: Deactivated successfully. Jan 20 02:38:20.930759 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 02:38:20.947177 systemd-logind[1588]: Session 3 logged out. Waiting for processes to exit. Jan 20 02:38:21.023090 systemd-logind[1588]: Removed session 3. 
Jan 20 02:38:23.546916 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 54012 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:38:23.587330 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:38:23.660103 systemd-logind[1588]: New session 4 of user core. Jan 20 02:38:23.758921 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 02:38:24.659500 sshd[1738]: Connection closed by 10.0.0.1 port 54012 Jan 20 02:38:24.729397 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Jan 20 02:38:25.583505 systemd[1]: sshd@2-10.0.0.117:22-10.0.0.1:54012.service: Deactivated successfully. Jan 20 02:38:25.616520 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 02:38:25.665119 systemd-logind[1588]: Session 4 logged out. Waiting for processes to exit. Jan 20 02:38:25.686774 systemd-logind[1588]: Removed session 4. Jan 20 02:38:29.791564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:38:29.795144 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 02:38:29.803755 systemd[1]: Startup finished in 49.384s (kernel) + 1min 16.229s (initrd) + 1min 21.980s (userspace) = 3min 27.595s. Jan 20 02:38:29.888026 (kubelet)[1748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:38:31.736100 kernel: EDAC MC: Ver: 3.0.0 Jan 20 02:38:35.309811 update_engine[1593]: I20260120 02:38:35.222837 1593 update_attempter.cc:509] Updating boot flags... Jan 20 02:38:35.926783 systemd[1]: Started sshd@3-10.0.0.117:22-10.0.0.1:35292.service - OpenSSH per-connection server daemon (10.0.0.1:35292). 
Jan 20 02:38:37.082943 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 35292 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:38:37.102766 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:38:37.204403 systemd-logind[1588]: New session 5 of user core. Jan 20 02:38:37.301525 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 02:38:38.784338 sshd[1777]: Connection closed by 10.0.0.1 port 35292 Jan 20 02:38:38.783960 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Jan 20 02:38:39.107797 systemd[1]: Started sshd@4-10.0.0.117:22-10.0.0.1:35302.service - OpenSSH per-connection server daemon (10.0.0.1:35302). Jan 20 02:38:39.141501 systemd[1]: sshd@3-10.0.0.117:22-10.0.0.1:35292.service: Deactivated successfully. Jan 20 02:38:39.183802 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 02:38:39.472633 systemd-logind[1588]: Session 5 logged out. Waiting for processes to exit. Jan 20 02:38:39.770750 systemd-logind[1588]: Removed session 5. Jan 20 02:38:39.950547 sshd[1783]: Accepted publickey for core from 10.0.0.1 port 35302 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:38:39.961964 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:38:40.301135 systemd-logind[1588]: New session 6 of user core. Jan 20 02:38:40.350564 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 02:38:40.415524 sshd[1790]: Connection closed by 10.0.0.1 port 35302 Jan 20 02:38:40.420619 sshd-session[1783]: pam_unix(sshd:session): session closed for user core Jan 20 02:38:40.465793 systemd[1]: sshd@4-10.0.0.117:22-10.0.0.1:35302.service: Deactivated successfully. Jan 20 02:38:40.486656 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 02:38:40.491760 systemd-logind[1588]: Session 6 logged out. Waiting for processes to exit. 
Jan 20 02:38:40.566721 systemd-logind[1588]: Removed session 6. Jan 20 02:38:40.588170 systemd[1]: Started sshd@5-10.0.0.117:22-10.0.0.1:35312.service - OpenSSH per-connection server daemon (10.0.0.1:35312). Jan 20 02:38:40.689780 kubelet[1748]: E0120 02:38:40.683467 1748 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:38:40.727053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:38:40.728948 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:38:40.739987 systemd[1]: kubelet.service: Consumed 14.248s CPU time, 271.8M memory peak. Jan 20 02:38:41.097970 sshd[1796]: Accepted publickey for core from 10.0.0.1 port 35312 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:38:41.107586 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:38:41.172412 systemd-logind[1588]: New session 7 of user core. Jan 20 02:38:41.237727 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 02:38:41.632919 sshd[1801]: Connection closed by 10.0.0.1 port 35312 Jan 20 02:38:41.610728 sshd-session[1796]: pam_unix(sshd:session): session closed for user core Jan 20 02:38:41.733732 systemd[1]: sshd@5-10.0.0.117:22-10.0.0.1:35312.service: Deactivated successfully. Jan 20 02:38:41.772163 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 02:38:41.791997 systemd-logind[1588]: Session 7 logged out. Waiting for processes to exit. Jan 20 02:38:42.013973 systemd[1]: Started sshd@6-10.0.0.117:22-10.0.0.1:35328.service - OpenSSH per-connection server daemon (10.0.0.1:35328). Jan 20 02:38:42.063896 systemd-logind[1588]: Removed session 7. 
Jan 20 02:38:43.967097 sshd[1807]: Accepted publickey for core from 10.0.0.1 port 35328 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:38:44.010388 sshd-session[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:38:44.190028 systemd-logind[1588]: New session 8 of user core. Jan 20 02:38:44.473961 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 02:38:45.214576 sudo[1812]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 02:38:45.215390 sudo[1812]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 02:38:50.845570 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 02:38:50.912733 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:39:05.340328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:39:05.499454 (kubelet)[1841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:39:06.934611 kubelet[1841]: E0120 02:39:06.932461 1841 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:39:06.973137 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:39:06.975683 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:39:06.981723 systemd[1]: kubelet.service: Consumed 2.913s CPU time, 108.3M memory peak. Jan 20 02:39:07.837971 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 20 02:39:07.944105 (dockerd)[1849]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 02:39:17.212126 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 02:39:17.327569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:39:26.477765 dockerd[1849]: time="2026-01-20T02:39:26.468182816Z" level=info msg="Starting up" Jan 20 02:39:26.489603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:39:26.497980 dockerd[1849]: time="2026-01-20T02:39:26.497928946Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 02:39:26.597567 (kubelet)[1869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:39:26.762426 dockerd[1849]: time="2026-01-20T02:39:26.761713560Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 02:39:27.323438 kubelet[1869]: E0120 02:39:27.319864 1869 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:39:27.393069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:39:27.394968 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:39:27.405589 systemd[1]: kubelet.service: Consumed 1.423s CPU time, 110.8M memory peak. Jan 20 02:39:28.030421 dockerd[1849]: time="2026-01-20T02:39:28.027413270Z" level=info msg="Loading containers: start." 
Jan 20 02:39:28.270756 kernel: Initializing XFRM netlink socket Jan 20 02:39:38.313761 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 02:39:39.627084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:39:49.886501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:39:50.036869 (kubelet)[2029]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:39:51.290810 systemd-networkd[1517]: docker0: Link UP Jan 20 02:39:51.437609 dockerd[1849]: time="2026-01-20T02:39:51.422601074Z" level=info msg="Loading containers: done." Jan 20 02:39:51.528777 kubelet[2029]: E0120 02:39:51.528605 2029 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:39:51.604514 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:39:51.605341 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:39:51.618782 systemd[1]: kubelet.service: Consumed 1.994s CPU time, 110.7M memory peak. Jan 20 02:39:51.920846 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4131422277-merged.mount: Deactivated successfully. 
Jan 20 02:39:51.996581 dockerd[1849]: time="2026-01-20T02:39:51.987580198Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 02:39:52.029179 dockerd[1849]: time="2026-01-20T02:39:52.027002387Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 02:39:52.037653 dockerd[1849]: time="2026-01-20T02:39:52.032445849Z" level=info msg="Initializing buildkit" Jan 20 02:39:53.190697 dockerd[1849]: time="2026-01-20T02:39:53.187600211Z" level=info msg="Completed buildkit initialization" Jan 20 02:39:53.304986 dockerd[1849]: time="2026-01-20T02:39:53.304544809Z" level=info msg="Daemon has completed initialization" Jan 20 02:39:53.313834 dockerd[1849]: time="2026-01-20T02:39:53.307846395Z" level=info msg="API listen on /run/docker.sock" Jan 20 02:39:53.327461 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 02:40:00.694471 containerd[1621]: time="2026-01-20T02:40:00.692064522Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 20 02:40:01.837798 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 20 02:40:01.875769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:40:03.700169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 02:40:03.816025 (kubelet)[2115]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:40:04.500040 kubelet[2115]: E0120 02:40:04.499595 2115 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:40:04.538596 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:40:04.538888 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:40:04.543146 systemd[1]: kubelet.service: Consumed 617ms CPU time, 110.3M memory peak. Jan 20 02:40:04.819065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2808842151.mount: Deactivated successfully. Jan 20 02:40:14.579878 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 20 02:40:14.600007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:40:15.637008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 02:40:15.657014 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:40:15.976160 kubelet[2188]: E0120 02:40:15.975579 2188 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:40:15.987700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:40:15.987968 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:40:15.989869 systemd[1]: kubelet.service: Consumed 405ms CPU time, 110.9M memory peak. Jan 20 02:40:17.566989 containerd[1621]: time="2026-01-20T02:40:17.566797567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:40:17.574949 containerd[1621]: time="2026-01-20T02:40:17.574479170Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30106591" Jan 20 02:40:17.581191 containerd[1621]: time="2026-01-20T02:40:17.580988551Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:40:17.603415 containerd[1621]: time="2026-01-20T02:40:17.602171600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:40:17.613094 containerd[1621]: time="2026-01-20T02:40:17.610335058Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 16.917605355s" Jan 20 02:40:17.613094 containerd[1621]: time="2026-01-20T02:40:17.610414937Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 20 02:40:17.614080 containerd[1621]: time="2026-01-20T02:40:17.613660561Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 20 02:40:26.388779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 20 02:40:26.990568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:40:35.000770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:40:35.130431 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:40:38.011322 kubelet[2209]: E0120 02:40:37.998954 2209 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:40:38.093548 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:40:38.106408 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:40:38.112803 systemd[1]: kubelet.service: Consumed 5.638s CPU time, 110.3M memory peak.
Jan 20 02:40:38.363506 containerd[1621]: time="2026-01-20T02:40:38.362525358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:40:38.379417 containerd[1621]: time="2026-01-20T02:40:38.378658068Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 20 02:40:38.390509 containerd[1621]: time="2026-01-20T02:40:38.390418881Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:40:38.415010 containerd[1621]: time="2026-01-20T02:40:38.414916915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:40:38.612507 containerd[1621]: time="2026-01-20T02:40:38.611911052Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 20.980080952s" Jan 20 02:40:38.827487 containerd[1621]: time="2026-01-20T02:40:38.631602896Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 20 02:40:38.844024 containerd[1621]: time="2026-01-20T02:40:38.829593139Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 20 02:40:48.115663 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. 
Jan 20 02:40:48.174605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:40:52.565082 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:40:52.651658 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:40:53.124552 kubelet[2232]: E0120 02:40:53.120392 2232 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:40:53.155911 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:40:53.162564 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:40:53.163711 systemd[1]: kubelet.service: Consumed 1.260s CPU time, 110.4M memory peak. 
Jan 20 02:40:56.282852 containerd[1621]: time="2026-01-20T02:40:56.280501830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:40:56.292906 containerd[1621]: time="2026-01-20T02:40:56.292849262Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Jan 20 02:40:56.303640 containerd[1621]: time="2026-01-20T02:40:56.300484455Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:40:56.322822 containerd[1621]: time="2026-01-20T02:40:56.322699546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:40:56.338304 containerd[1621]: time="2026-01-20T02:40:56.333711883Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 17.504055397s" Jan 20 02:40:56.338304 containerd[1621]: time="2026-01-20T02:40:56.337607248Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 20 02:40:56.356275 containerd[1621]: time="2026-01-20T02:40:56.352657618Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 20 02:41:03.916591 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 20 02:41:05.112596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 02:41:09.703049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:41:09.773999 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:41:10.226050 kubelet[2252]: E0120 02:41:10.225975 2252 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:41:10.246083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:41:10.246602 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:41:10.249610 systemd[1]: kubelet.service: Consumed 1.150s CPU time, 108.2M memory peak. Jan 20 02:41:10.789697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount121416789.mount: Deactivated successfully. Jan 20 02:41:15.287915 update_engine[1593]: I20260120 02:41:15.278070 1593 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 20 02:41:15.407112 update_engine[1593]: I20260120 02:41:15.292559 1593 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 20 02:41:15.407112 update_engine[1593]: I20260120 02:41:15.405027 1593 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 20 02:41:15.435854 update_engine[1593]: I20260120 02:41:15.432502 1593 omaha_request_params.cc:62] Current group set to alpha Jan 20 02:41:15.467883 update_engine[1593]: I20260120 02:41:15.447180 1593 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 20 02:41:15.490046 update_engine[1593]: I20260120 02:41:15.480477 1593 update_attempter.cc:643] Scheduling an action processor start. 
Jan 20 02:41:15.569577 locksmithd[1664]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 20 02:41:15.579402 update_engine[1593]: I20260120 02:41:15.532928 1593 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 02:41:15.579402 update_engine[1593]: I20260120 02:41:15.560021 1593 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 20 02:41:15.579402 update_engine[1593]: I20260120 02:41:15.577157 1593 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 02:41:15.579402 update_engine[1593]: I20260120 02:41:15.577185 1593 omaha_request_action.cc:272] Request: Jan 20 02:41:15.579402 update_engine[1593]: Jan 20 02:41:15.579402 update_engine[1593]: Jan 20 02:41:15.579402 update_engine[1593]: Jan 20 02:41:15.579402 update_engine[1593]: Jan 20 02:41:15.579402 update_engine[1593]: Jan 20 02:41:15.579402 update_engine[1593]: Jan 20 02:41:15.579402 update_engine[1593]: Jan 20 02:41:15.579402 update_engine[1593]: Jan 20 02:41:15.579402 update_engine[1593]: I20260120 02:41:15.577353 1593 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:41:15.624416 update_engine[1593]: I20260120 02:41:15.623636 1593 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:41:15.677366 update_engine[1593]: I20260120 02:41:15.646404 1593 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 02:41:15.677366 update_engine[1593]: E20260120 02:41:15.676587 1593 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 02:41:15.677366 update_engine[1593]: I20260120 02:41:15.677183 1593 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 20 02:41:20.509108 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 20 02:41:20.580809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 02:41:24.738155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:41:24.801936 (kubelet)[2273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:41:26.338716 update_engine[1593]: I20260120 02:41:26.233162 1593 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:41:26.338716 update_engine[1593]: I20260120 02:41:26.233984 1593 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:41:26.338716 update_engine[1593]: I20260120 02:41:26.330036 1593 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 02:41:26.401158 update_engine[1593]: E20260120 02:41:26.396952 1593 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 02:41:26.401158 update_engine[1593]: I20260120 02:41:26.400127 1593 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 20 02:41:28.512112 kubelet[2273]: E0120 02:41:28.505449 2273 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:41:28.542320 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:41:28.547045 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:41:28.548152 systemd[1]: kubelet.service: Consumed 1.830s CPU time, 110.2M memory peak. 
Jan 20 02:41:32.120703 containerd[1621]: time="2026-01-20T02:41:32.094030108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:41:32.176357 containerd[1621]: time="2026-01-20T02:41:32.143903318Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374" Jan 20 02:41:32.200495 containerd[1621]: time="2026-01-20T02:41:32.189508799Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:41:32.271607 containerd[1621]: time="2026-01-20T02:41:32.270847317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:41:32.279786 containerd[1621]: time="2026-01-20T02:41:32.275828431Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 35.923123917s" Jan 20 02:41:32.279786 containerd[1621]: time="2026-01-20T02:41:32.275933296Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 20 02:41:32.303728 containerd[1621]: time="2026-01-20T02:41:32.299031326Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 20 02:41:36.221621 update_engine[1593]: I20260120 02:41:36.220918 1593 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:41:36.221621 update_engine[1593]: I20260120 02:41:36.398272 1593 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:41:36.782007 update_engine[1593]: I20260120 02:41:36.689028 1593 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 02:41:36.782007 update_engine[1593]: E20260120 02:41:36.746119 1593 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 02:41:36.782007 update_engine[1593]: I20260120 02:41:36.746268 1593 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 20 02:41:37.140820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1812906966.mount: Deactivated successfully. Jan 20 02:41:38.606909 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 20 02:41:38.659847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:41:46.010418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:41:46.293117 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:41:47.221983 update_engine[1593]: I20260120 02:41:47.220741 1593 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:41:47.221983 update_engine[1593]: I20260120 02:41:47.220875 1593 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:41:47.221983 update_engine[1593]: I20260120 02:41:47.221832 1593 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 20 02:41:47.280958 update_engine[1593]: E20260120 02:41:47.279423 1593 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 02:41:47.280958 update_engine[1593]: I20260120 02:41:47.279662 1593 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 02:41:47.280958 update_engine[1593]: I20260120 02:41:47.279685 1593 omaha_request_action.cc:617] Omaha request response: Jan 20 02:41:47.280958 update_engine[1593]: E20260120 02:41:47.279874 1593 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 20 02:41:47.280958 update_engine[1593]: I20260120 02:41:47.279910 1593 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 20 02:41:47.280958 update_engine[1593]: I20260120 02:41:47.279920 1593 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:41:47.280958 update_engine[1593]: I20260120 02:41:47.279929 1593 update_attempter.cc:306] Processing Done. Jan 20 02:41:47.280958 update_engine[1593]: E20260120 02:41:47.280014 1593 update_attempter.cc:619] Update failed. Jan 20 02:41:47.280958 update_engine[1593]: I20260120 02:41:47.280028 1593 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 20 02:41:47.280958 update_engine[1593]: I20260120 02:41:47.280037 1593 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 20 02:41:47.280958 update_engine[1593]: I20260120 02:41:47.280046 1593 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 20 02:41:47.280958 update_engine[1593]: I20260120 02:41:47.280143 1593 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 02:41:47.280958 update_engine[1593]: I20260120 02:41:47.280184 1593 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 02:41:47.280958 update_engine[1593]: I20260120 02:41:47.280328 1593 omaha_request_action.cc:272] Request: Jan 20 02:41:47.280958 update_engine[1593]: Jan 20 02:41:47.280958 update_engine[1593]: Jan 20 02:41:47.282680 update_engine[1593]: Jan 20 02:41:47.282680 update_engine[1593]: Jan 20 02:41:47.282680 update_engine[1593]: Jan 20 02:41:47.282680 update_engine[1593]: Jan 20 02:41:47.282680 update_engine[1593]: I20260120 02:41:47.280344 1593 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:41:47.282680 update_engine[1593]: I20260120 02:41:47.280377 1593 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:41:47.282680 update_engine[1593]: I20260120 02:41:47.280908 1593 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 02:41:47.308380 locksmithd[1664]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 20 02:41:47.318059 update_engine[1593]: E20260120 02:41:47.317986 1593 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 02:41:47.326893 update_engine[1593]: I20260120 02:41:47.326366 1593 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 02:41:47.326893 update_engine[1593]: I20260120 02:41:47.326435 1593 omaha_request_action.cc:617] Omaha request response: Jan 20 02:41:47.326893 update_engine[1593]: I20260120 02:41:47.326530 1593 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:41:47.326893 update_engine[1593]: I20260120 02:41:47.326545 1593 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:41:47.326893 update_engine[1593]: I20260120 02:41:47.326556 1593 update_attempter.cc:306] Processing Done. Jan 20 02:41:47.326893 update_engine[1593]: I20260120 02:41:47.326570 1593 update_attempter.cc:310] Error event sent. 
Jan 20 02:41:47.326893 update_engine[1593]: I20260120 02:41:47.326588 1593 update_check_scheduler.cc:74] Next update check in 44m20s Jan 20 02:41:47.328754 locksmithd[1664]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 20 02:41:47.660541 kubelet[2316]: E0120 02:41:47.619934 2316 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:41:47.732681 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:41:47.756561 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:41:47.806178 systemd[1]: kubelet.service: Consumed 1.795s CPU time, 109M memory peak. Jan 20 02:41:58.206181 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 20 02:41:58.425916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:42:03.477930 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:42:03.570154 (kubelet)[2361]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:42:04.389641 kubelet[2361]: E0120 02:42:04.389568 2361 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:42:04.398998 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:42:04.402666 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 20 02:42:04.403525 systemd[1]: kubelet.service: Consumed 1.384s CPU time, 112.3M memory peak. Jan 20 02:42:05.900316 containerd[1621]: time="2026-01-20T02:42:05.896906762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:42:05.917794 containerd[1621]: time="2026-01-20T02:42:05.917729510Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20931441" Jan 20 02:42:05.929416 containerd[1621]: time="2026-01-20T02:42:05.929357913Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:42:05.981916 containerd[1621]: time="2026-01-20T02:42:05.981827828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:42:06.002796 containerd[1621]: time="2026-01-20T02:42:06.002134426Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 33.699365149s" Jan 20 02:42:06.002796 containerd[1621]: time="2026-01-20T02:42:06.002331483Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 20 02:42:06.019334 containerd[1621]: time="2026-01-20T02:42:06.018970210Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 02:42:07.277572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2829363563.mount: 
Deactivated successfully. Jan 20 02:42:07.389397 containerd[1621]: time="2026-01-20T02:42:07.388175131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 02:42:07.397752 containerd[1621]: time="2026-01-20T02:42:07.396784471Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 02:42:07.404314 containerd[1621]: time="2026-01-20T02:42:07.402610621Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 02:42:07.425371 containerd[1621]: time="2026-01-20T02:42:07.425171340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 02:42:07.426755 containerd[1621]: time="2026-01-20T02:42:07.426725391Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.407706931s" Jan 20 02:42:07.426893 containerd[1621]: time="2026-01-20T02:42:07.426868878Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 02:42:07.439099 containerd[1621]: time="2026-01-20T02:42:07.438756537Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 20 02:42:15.508938 systemd[1]: kubelet.service: Scheduled restart job, restart counter 
is at 12. Jan 20 02:42:15.718976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:42:16.166533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1422777696.mount: Deactivated successfully. Jan 20 02:42:23.669984 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:42:23.741796 (kubelet)[2406]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:42:25.671435 kubelet[2406]: E0120 02:42:25.671134 2406 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:42:25.728999 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:42:25.761525 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:42:25.840550 systemd[1]: kubelet.service: Consumed 2.651s CPU time, 108.8M memory peak. Jan 20 02:42:35.842898 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 20 02:42:35.870777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:42:39.248173 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 02:42:39.323108 (kubelet)[2459]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:42:39.969916 kubelet[2459]: E0120 02:42:39.968709 2459 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:42:39.985087 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:42:39.985472 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:42:39.986152 systemd[1]: kubelet.service: Consumed 1.026s CPU time, 108.4M memory peak. Jan 20 02:42:49.967026 containerd[1621]: time="2026-01-20T02:42:49.945536598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:42:50.001625 containerd[1621]: time="2026-01-20T02:42:50.000440255Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58916088" Jan 20 02:42:50.094154 containerd[1621]: time="2026-01-20T02:42:50.093722028Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:42:50.165885 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Jan 20 02:42:50.199917 containerd[1621]: time="2026-01-20T02:42:50.199690969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:42:50.205139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 02:42:50.263662 containerd[1621]: time="2026-01-20T02:42:50.247693369Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 42.808883213s" Jan 20 02:42:50.263662 containerd[1621]: time="2026-01-20T02:42:50.247814776Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 20 02:42:54.978906 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:42:55.137346 (kubelet)[2499]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:42:58.565565 kubelet[2499]: E0120 02:42:58.564917 2499 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:42:58.586929 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:42:58.587397 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:42:58.594471 systemd[1]: kubelet.service: Consumed 1.835s CPU time, 110.7M memory peak. Jan 20 02:42:59.536040 systemd[1697]: Created slice background.slice - User Background Tasks Slice. Jan 20 02:42:59.563166 systemd[1697]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Jan 20 02:42:59.671419 systemd[1697]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. 
Jan 20 02:43:08.830529 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15. Jan 20 02:43:08.856954 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:43:10.359392 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:43:10.432025 (kubelet)[2521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:43:10.893352 kubelet[2521]: E0120 02:43:10.889184 2521 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:43:10.913546 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:43:10.914429 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:43:10.915076 systemd[1]: kubelet.service: Consumed 552ms CPU time, 110.5M memory peak. Jan 20 02:43:14.525939 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:43:14.529178 systemd[1]: kubelet.service: Consumed 552ms CPU time, 110.5M memory peak. Jan 20 02:43:14.545480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:43:14.810069 systemd[1]: Reload requested from client PID 2537 ('systemctl') (unit session-8.scope)... Jan 20 02:43:14.813445 systemd[1]: Reloading... Jan 20 02:43:15.642382 zram_generator::config[2584]: No configuration found. Jan 20 02:43:17.357058 systemd[1]: Reloading finished in 2542 ms. Jan 20 02:43:17.901406 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:43:17.926026 systemd[1]: kubelet.service: Deactivated successfully. 
Jan 20 02:43:17.934541 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:43:17.935630 systemd[1]: kubelet.service: Consumed 281ms CPU time, 98.7M memory peak. Jan 20 02:43:17.959653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:43:19.235820 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:43:19.330851 (kubelet)[2632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 02:43:19.950078 kubelet[2632]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 02:43:19.950078 kubelet[2632]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 02:43:19.950078 kubelet[2632]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 02:43:19.955577 kubelet[2632]: I0120 02:43:19.955003 2632 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 02:43:24.470406 kubelet[2632]: I0120 02:43:24.463023 2632 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 02:43:24.477343 kubelet[2632]: I0120 02:43:24.476568 2632 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 02:43:24.477620 kubelet[2632]: I0120 02:43:24.477599 2632 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 02:43:24.775832 kubelet[2632]: E0120 02:43:24.775098 2632 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 02:43:24.791824 kubelet[2632]: I0120 02:43:24.786848 2632 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 02:43:24.889551 kubelet[2632]: I0120 02:43:24.886665 2632 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 02:43:24.972721 kubelet[2632]: I0120 02:43:24.971052 2632 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 02:43:24.972721 kubelet[2632]: I0120 02:43:24.971649 2632 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 02:43:24.979013 kubelet[2632]: I0120 02:43:24.971685 2632 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 02:43:24.979013 kubelet[2632]: I0120 02:43:24.976991 2632 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 02:43:24.979013 
kubelet[2632]: I0120 02:43:24.977013 2632 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 02:43:24.987996 kubelet[2632]: I0120 02:43:24.986905 2632 state_mem.go:36] "Initialized new in-memory state store" Jan 20 02:43:25.018935 kubelet[2632]: I0120 02:43:25.015318 2632 kubelet.go:480] "Attempting to sync node with API server" Jan 20 02:43:25.018935 kubelet[2632]: I0120 02:43:25.015642 2632 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 02:43:25.018935 kubelet[2632]: I0120 02:43:25.015912 2632 kubelet.go:386] "Adding apiserver pod source" Jan 20 02:43:25.018935 kubelet[2632]: I0120 02:43:25.016014 2632 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 02:43:25.060961 kubelet[2632]: E0120 02:43:25.055662 2632 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 02:43:25.081379 kubelet[2632]: E0120 02:43:25.075343 2632 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 02:43:25.105500 kubelet[2632]: I0120 02:43:25.103364 2632 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 02:43:25.129133 kubelet[2632]: I0120 02:43:25.128442 2632 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 02:43:25.153275 kubelet[2632]: W0120 
02:43:25.150602 2632 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 02:43:25.254060 kubelet[2632]: I0120 02:43:25.250699 2632 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 02:43:25.254060 kubelet[2632]: I0120 02:43:25.253316 2632 server.go:1289] "Started kubelet" Jan 20 02:43:25.269038 kubelet[2632]: I0120 02:43:25.265164 2632 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 02:43:25.284279 kubelet[2632]: I0120 02:43:25.281355 2632 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 02:43:25.302349 kubelet[2632]: I0120 02:43:25.301518 2632 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 02:43:25.328419 kubelet[2632]: I0120 02:43:25.316479 2632 server.go:317] "Adding debug handlers to kubelet server" Jan 20 02:43:25.328419 kubelet[2632]: I0120 02:43:25.323053 2632 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 02:43:25.352739 kubelet[2632]: I0120 02:43:25.352707 2632 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 02:43:25.364456 kubelet[2632]: E0120 02:43:25.357920 2632 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:43:25.364456 kubelet[2632]: I0120 02:43:25.358430 2632 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 02:43:25.394025 kubelet[2632]: I0120 02:43:25.390827 2632 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 02:43:25.394025 kubelet[2632]: I0120 02:43:25.391014 2632 reconciler.go:26] "Reconciler: start to sync state" Jan 20 02:43:25.394025 kubelet[2632]: E0120 02:43:25.392531 2632 reflector.go:200] "Failed to watch" err="failed to list 
*v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 02:43:25.394025 kubelet[2632]: E0120 02:43:25.392654 2632 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="200ms" Jan 20 02:43:25.405378 kubelet[2632]: E0120 02:43:25.404835 2632 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 02:43:25.415078 kubelet[2632]: I0120 02:43:25.408952 2632 factory.go:223] Registration of the systemd container factory successfully Jan 20 02:43:25.415078 kubelet[2632]: I0120 02:43:25.409090 2632 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 02:43:25.431465 kubelet[2632]: E0120 02:43:25.415506 2632 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.117:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.117:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c5043ad77b2c8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:43:25.253096136 +0000 UTC m=+5.884650506,LastTimestamp:2026-01-20 02:43:25.253096136 +0000 UTC m=+5.884650506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:43:25.446833 kubelet[2632]: I0120 02:43:25.433312 2632 factory.go:223] Registration of the containerd container factory successfully Jan 20 02:43:25.481155 kubelet[2632]: E0120 02:43:25.477008 2632 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:43:25.590412 kubelet[2632]: E0120 02:43:25.582371 2632 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:43:25.613361 kubelet[2632]: E0120 02:43:25.612464 2632 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="400ms" Jan 20 02:43:25.684907 kubelet[2632]: E0120 02:43:25.684179 2632 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:43:25.796848 kubelet[2632]: E0120 02:43:25.795314 2632 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:43:25.853485 kubelet[2632]: I0120 02:43:25.848502 2632 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 02:43:25.853485 kubelet[2632]: I0120 02:43:25.848529 2632 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 02:43:25.853485 kubelet[2632]: I0120 02:43:25.848554 2632 state_mem.go:36] "Initialized new in-memory state store" Jan 20 02:43:25.895924 kubelet[2632]: E0120 02:43:25.895860 2632 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:43:25.898881 kubelet[2632]: I0120 02:43:25.898852 2632 policy_none.go:49] "None policy: Start" Jan 20 02:43:25.899140 kubelet[2632]: I0120 02:43:25.899120 2632 memory_manager.go:186] "Starting 
memorymanager" policy="None" Jan 20 02:43:25.899479 kubelet[2632]: I0120 02:43:25.899462 2632 state_mem.go:35] "Initializing new in-memory state store" Jan 20 02:43:26.000641 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 02:43:26.017915 kubelet[2632]: E0120 02:43:26.017878 2632 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:43:26.020575 kubelet[2632]: E0120 02:43:26.019177 2632 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="800ms" Jan 20 02:43:26.037447 kubelet[2632]: I0120 02:43:26.037391 2632 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 20 02:43:26.077673 kubelet[2632]: I0120 02:43:26.077625 2632 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 20 02:43:26.084947 kubelet[2632]: I0120 02:43:26.078006 2632 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 02:43:26.101123 kubelet[2632]: I0120 02:43:26.086127 2632 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 02:43:26.101123 kubelet[2632]: I0120 02:43:26.086446 2632 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 02:43:26.101123 kubelet[2632]: E0120 02:43:26.086527 2632 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 02:43:26.101123 kubelet[2632]: E0120 02:43:26.088746 2632 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 02:43:26.112911 kubelet[2632]: E0120 02:43:26.107376 2632 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 02:43:26.110954 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 02:43:26.139159 kubelet[2632]: E0120 02:43:26.139058 2632 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:43:26.168441 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 20 02:43:26.187351 kubelet[2632]: E0120 02:43:26.187317 2632 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 02:43:26.193409 kubelet[2632]: E0120 02:43:26.191597 2632 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 02:43:26.224522 kubelet[2632]: E0120 02:43:26.216105 2632 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 02:43:26.225455 kubelet[2632]: I0120 02:43:26.225428 2632 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 02:43:26.225655 kubelet[2632]: I0120 02:43:26.225603 2632 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 02:43:26.226447 kubelet[2632]: I0120 02:43:26.226428 2632 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 02:43:26.262359 kubelet[2632]: E0120 02:43:26.262321 2632 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 02:43:26.262641 kubelet[2632]: E0120 02:43:26.262622 2632 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 02:43:26.272070 kubelet[2632]: E0120 02:43:26.267144 2632 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 02:43:26.349636 kubelet[2632]: I0120 02:43:26.349595 2632 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:43:26.350637 kubelet[2632]: E0120 02:43:26.350609 2632 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Jan 20 02:43:26.552667 systemd[1]: Created slice kubepods-burstable-pode197405f513e0a52a9b18b708e4ceb0d.slice - libcontainer container kubepods-burstable-pode197405f513e0a52a9b18b708e4ceb0d.slice. 
Jan 20 02:43:26.583668 kubelet[2632]: I0120 02:43:26.571905 2632 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:43:26.591162 kubelet[2632]: E0120 02:43:26.588869 2632 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Jan 20 02:43:26.595741 kubelet[2632]: I0120 02:43:26.595303 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e197405f513e0a52a9b18b708e4ceb0d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e197405f513e0a52a9b18b708e4ceb0d\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:43:26.595741 kubelet[2632]: I0120 02:43:26.595351 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e197405f513e0a52a9b18b708e4ceb0d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e197405f513e0a52a9b18b708e4ceb0d\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:43:26.595741 kubelet[2632]: I0120 02:43:26.595383 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:43:26.595741 kubelet[2632]: I0120 02:43:26.595419 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:43:26.595741 
kubelet[2632]: I0120 02:43:26.595441 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:43:26.610013 kubelet[2632]: I0120 02:43:26.595467 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:43:26.614117 kubelet[2632]: I0120 02:43:26.613513 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e197405f513e0a52a9b18b708e4ceb0d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e197405f513e0a52a9b18b708e4ceb0d\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:43:26.614117 kubelet[2632]: I0120 02:43:26.613570 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:43:26.614117 kubelet[2632]: I0120 02:43:26.613607 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " 
pod="kube-system/kube-scheduler-localhost" Jan 20 02:43:26.634410 kubelet[2632]: E0120 02:43:26.628494 2632 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:43:26.709685 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 20 02:43:26.762041 kubelet[2632]: E0120 02:43:26.757363 2632 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:43:26.762041 kubelet[2632]: E0120 02:43:26.758041 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:26.774090 containerd[1621]: time="2026-01-20T02:43:26.771359923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 20 02:43:26.800382 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
Jan 20 02:43:26.827151 kubelet[2632]: E0120 02:43:26.822558 2632 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:43:26.831958 kubelet[2632]: E0120 02:43:26.831526 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:26.843920 kubelet[2632]: E0120 02:43:26.838669 2632 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="1.6s" Jan 20 02:43:26.844101 containerd[1621]: time="2026-01-20T02:43:26.837756860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 20 02:43:26.877584 kubelet[2632]: E0120 02:43:26.877098 2632 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 02:43:26.943112 kubelet[2632]: E0120 02:43:26.942649 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:26.943714 containerd[1621]: time="2026-01-20T02:43:26.943667917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e197405f513e0a52a9b18b708e4ceb0d,Namespace:kube-system,Attempt:0,}" Jan 20 02:43:26.955994 kubelet[2632]: E0120 02:43:26.954727 2632 reflector.go:200] "Failed to 
watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 02:43:27.008620 kubelet[2632]: I0120 02:43:27.005555 2632 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:43:27.032122 kubelet[2632]: E0120 02:43:27.028447 2632 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Jan 20 02:43:27.157754 containerd[1621]: time="2026-01-20T02:43:27.157120291Z" level=info msg="connecting to shim 9b69fcdc1cb8a863a40c52a5c1a0c30ca8e2ba20ff79707f0916207d8b50f019" address="unix:///run/containerd/s/983c85fb2dd2c9692d591cf1dc5543f63911aaea5d76651232fbf65f4d7c099c" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:43:27.398604 containerd[1621]: time="2026-01-20T02:43:27.395456304Z" level=info msg="connecting to shim 3649210a3034f55cf0f1e772a6aa0332454c5d933f5bf1747016c19aa026fa51" address="unix:///run/containerd/s/ed88eedd227c819190f8a7dea9b3674714ce14bde7a2606540435a01a1ee7cb6" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:43:27.431091 containerd[1621]: time="2026-01-20T02:43:27.424173927Z" level=info msg="connecting to shim b2a933e558952275d9b2591f4c2ac26373170f77c13e2503dd7d8c58b1b5e35b" address="unix:///run/containerd/s/0d5157d638caf4fcad411d35d2621155fe0dd674621bfc4b48a276fe5bc5507c" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:43:27.534950 systemd[1]: Started cri-containerd-9b69fcdc1cb8a863a40c52a5c1a0c30ca8e2ba20ff79707f0916207d8b50f019.scope - libcontainer container 9b69fcdc1cb8a863a40c52a5c1a0c30ca8e2ba20ff79707f0916207d8b50f019. 
Jan 20 02:43:27.845422 kubelet[2632]: I0120 02:43:27.832935 2632 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:43:27.845422 kubelet[2632]: E0120 02:43:27.841374 2632 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Jan 20 02:43:27.880680 systemd[1]: Started cri-containerd-3649210a3034f55cf0f1e772a6aa0332454c5d933f5bf1747016c19aa026fa51.scope - libcontainer container 3649210a3034f55cf0f1e772a6aa0332454c5d933f5bf1747016c19aa026fa51. Jan 20 02:43:28.062468 systemd[1]: Started cri-containerd-b2a933e558952275d9b2591f4c2ac26373170f77c13e2503dd7d8c58b1b5e35b.scope - libcontainer container b2a933e558952275d9b2591f4c2ac26373170f77c13e2503dd7d8c58b1b5e35b. Jan 20 02:43:28.446425 kubelet[2632]: E0120 02:43:28.446162 2632 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="3.2s" Jan 20 02:43:28.520490 containerd[1621]: time="2026-01-20T02:43:28.515530742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b69fcdc1cb8a863a40c52a5c1a0c30ca8e2ba20ff79707f0916207d8b50f019\"" Jan 20 02:43:28.578030 kubelet[2632]: E0120 02:43:28.577601 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:28.839136 containerd[1621]: time="2026-01-20T02:43:28.821753658Z" level=info msg="CreateContainer within sandbox \"9b69fcdc1cb8a863a40c52a5c1a0c30ca8e2ba20ff79707f0916207d8b50f019\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" 
Jan 20 02:43:28.949135 kubelet[2632]: E0120 02:43:28.946498 2632 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 02:43:29.037463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount562071453.mount: Deactivated successfully. Jan 20 02:43:29.094567 containerd[1621]: time="2026-01-20T02:43:29.094144419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e197405f513e0a52a9b18b708e4ceb0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3649210a3034f55cf0f1e772a6aa0332454c5d933f5bf1747016c19aa026fa51\"" Jan 20 02:43:29.125537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2345687484.mount: Deactivated successfully. Jan 20 02:43:29.140281 kubelet[2632]: E0120 02:43:29.138578 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:29.151408 containerd[1621]: time="2026-01-20T02:43:29.146438784Z" level=info msg="Container 7a79acb5a1217aa31986eab88430ad767f68067658efd4a0901146ca8b7c8c0f: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:43:29.151558 kubelet[2632]: E0120 02:43:29.147509 2632 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 02:43:29.151558 kubelet[2632]: E0120 02:43:29.151496 2632 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 02:43:29.181480 containerd[1621]: time="2026-01-20T02:43:29.177169174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2a933e558952275d9b2591f4c2ac26373170f77c13e2503dd7d8c58b1b5e35b\"" Jan 20 02:43:29.192739 kubelet[2632]: E0120 02:43:29.192702 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:29.202394 containerd[1621]: time="2026-01-20T02:43:29.197704073Z" level=info msg="CreateContainer within sandbox \"3649210a3034f55cf0f1e772a6aa0332454c5d933f5bf1747016c19aa026fa51\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 02:43:29.245302 kubelet[2632]: E0120 02:43:29.243538 2632 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 02:43:29.273313 containerd[1621]: time="2026-01-20T02:43:29.271057908Z" level=info msg="CreateContainer within sandbox \"b2a933e558952275d9b2591f4c2ac26373170f77c13e2503dd7d8c58b1b5e35b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 02:43:29.317408 containerd[1621]: time="2026-01-20T02:43:29.307488656Z" level=info msg="CreateContainer within sandbox \"9b69fcdc1cb8a863a40c52a5c1a0c30ca8e2ba20ff79707f0916207d8b50f019\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns 
container id \"7a79acb5a1217aa31986eab88430ad767f68067658efd4a0901146ca8b7c8c0f\"" Jan 20 02:43:29.317408 containerd[1621]: time="2026-01-20T02:43:29.314891089Z" level=info msg="StartContainer for \"7a79acb5a1217aa31986eab88430ad767f68067658efd4a0901146ca8b7c8c0f\"" Jan 20 02:43:29.366747 containerd[1621]: time="2026-01-20T02:43:29.362119129Z" level=info msg="connecting to shim 7a79acb5a1217aa31986eab88430ad767f68067658efd4a0901146ca8b7c8c0f" address="unix:///run/containerd/s/983c85fb2dd2c9692d591cf1dc5543f63911aaea5d76651232fbf65f4d7c099c" protocol=ttrpc version=3 Jan 20 02:43:29.491749 kubelet[2632]: I0120 02:43:29.482723 2632 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:43:29.510131 kubelet[2632]: E0120 02:43:29.510028 2632 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Jan 20 02:43:29.600690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount516094068.mount: Deactivated successfully. Jan 20 02:43:29.670144 containerd[1621]: time="2026-01-20T02:43:29.664086819Z" level=info msg="Container 7e4cd812afd37042ebc05ea8caf70bee8b22cd169e9421e10b00721a5a575ae4: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:43:29.670144 containerd[1621]: time="2026-01-20T02:43:29.667947999Z" level=info msg="Container b4f960e68b266e186318a554795c996b617bb3e7120f58876d3e73c592e3296c: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:43:29.767171 systemd[1]: Started cri-containerd-7a79acb5a1217aa31986eab88430ad767f68067658efd4a0901146ca8b7c8c0f.scope - libcontainer container 7a79acb5a1217aa31986eab88430ad767f68067658efd4a0901146ca8b7c8c0f. 
Jan 20 02:43:30.139943 containerd[1621]: time="2026-01-20T02:43:30.139534469Z" level=info msg="CreateContainer within sandbox \"3649210a3034f55cf0f1e772a6aa0332454c5d933f5bf1747016c19aa026fa51\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b4f960e68b266e186318a554795c996b617bb3e7120f58876d3e73c592e3296c\"" Jan 20 02:43:30.184108 containerd[1621]: time="2026-01-20T02:43:30.173372959Z" level=info msg="StartContainer for \"b4f960e68b266e186318a554795c996b617bb3e7120f58876d3e73c592e3296c\"" Jan 20 02:43:30.193175 containerd[1621]: time="2026-01-20T02:43:30.187056230Z" level=info msg="connecting to shim b4f960e68b266e186318a554795c996b617bb3e7120f58876d3e73c592e3296c" address="unix:///run/containerd/s/ed88eedd227c819190f8a7dea9b3674714ce14bde7a2606540435a01a1ee7cb6" protocol=ttrpc version=3 Jan 20 02:43:30.205473 containerd[1621]: time="2026-01-20T02:43:30.203706863Z" level=info msg="CreateContainer within sandbox \"b2a933e558952275d9b2591f4c2ac26373170f77c13e2503dd7d8c58b1b5e35b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7e4cd812afd37042ebc05ea8caf70bee8b22cd169e9421e10b00721a5a575ae4\"" Jan 20 02:43:30.210468 containerd[1621]: time="2026-01-20T02:43:30.208931009Z" level=info msg="StartContainer for \"7e4cd812afd37042ebc05ea8caf70bee8b22cd169e9421e10b00721a5a575ae4\"" Jan 20 02:43:30.220591 containerd[1621]: time="2026-01-20T02:43:30.220099525Z" level=info msg="connecting to shim 7e4cd812afd37042ebc05ea8caf70bee8b22cd169e9421e10b00721a5a575ae4" address="unix:///run/containerd/s/0d5157d638caf4fcad411d35d2621155fe0dd674621bfc4b48a276fe5bc5507c" protocol=ttrpc version=3 Jan 20 02:43:31.629671 systemd[1]: Started cri-containerd-7e4cd812afd37042ebc05ea8caf70bee8b22cd169e9421e10b00721a5a575ae4.scope - libcontainer container 7e4cd812afd37042ebc05ea8caf70bee8b22cd169e9421e10b00721a5a575ae4. 
Jan 20 02:43:32.033470 containerd[1621]: time="2026-01-20T02:43:32.001702655Z" level=error msg="get state for 7a79acb5a1217aa31986eab88430ad767f68067658efd4a0901146ca8b7c8c0f" error="context deadline exceeded" Jan 20 02:43:32.033470 containerd[1621]: time="2026-01-20T02:43:32.014788101Z" level=warning msg="unknown status" status=0 Jan 20 02:43:32.124586 systemd[1]: Started cri-containerd-b4f960e68b266e186318a554795c996b617bb3e7120f58876d3e73c592e3296c.scope - libcontainer container b4f960e68b266e186318a554795c996b617bb3e7120f58876d3e73c592e3296c. Jan 20 02:43:32.141101 kubelet[2632]: E0120 02:43:32.128076 2632 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.117:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.117:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c5043ad77b2c8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:43:25.253096136 +0000 UTC m=+5.884650506,LastTimestamp:2026-01-20 02:43:25.253096136 +0000 UTC m=+5.884650506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:43:32.479710 kubelet[2632]: E0120 02:43:32.178921 2632 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 02:43:32.479710 kubelet[2632]: E0120 02:43:32.179129 2632 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="6.4s" Jan 20 02:43:32.707914 containerd[1621]: time="2026-01-20T02:43:32.707467211Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 02:43:33.614165 kubelet[2632]: E0120 02:43:33.613731 2632 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 02:43:33.869371 kubelet[2632]: E0120 02:43:33.868639 2632 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 02:43:34.101167 kubelet[2632]: I0120 02:43:34.095630 2632 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:43:34.517110 kubelet[2632]: E0120 02:43:34.516618 2632 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Jan 20 02:43:34.975497 containerd[1621]: time="2026-01-20T02:43:34.948457954Z" level=info msg="StartContainer for \"7a79acb5a1217aa31986eab88430ad767f68067658efd4a0901146ca8b7c8c0f\" returns successfully" Jan 20 02:43:35.081672 kubelet[2632]: E0120 02:43:35.081537 2632 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 02:43:35.448699 kubelet[2632]: E0120 02:43:35.296757 2632 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 02:43:36.106335 containerd[1621]: time="2026-01-20T02:43:36.106146816Z" level=info msg="StartContainer for \"b4f960e68b266e186318a554795c996b617bb3e7120f58876d3e73c592e3296c\" returns successfully" Jan 20 02:43:36.206043 containerd[1621]: time="2026-01-20T02:43:36.205964291Z" level=info msg="StartContainer for \"7e4cd812afd37042ebc05ea8caf70bee8b22cd169e9421e10b00721a5a575ae4\" returns successfully" Jan 20 02:43:36.267038 kubelet[2632]: E0120 02:43:36.266486 2632 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 02:43:36.935351 kubelet[2632]: E0120 02:43:36.922634 2632 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:43:36.935351 kubelet[2632]: E0120 02:43:36.928173 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:37.341061 kubelet[2632]: E0120 02:43:37.315739 2632 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:43:37.353801 kubelet[2632]: E0120 02:43:37.353548 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:37.370632 kubelet[2632]: E0120 02:43:37.369993 2632 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:43:37.376177 kubelet[2632]: E0120 02:43:37.376142 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:38.783788 kubelet[2632]: E0120 02:43:38.783469 2632 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:43:38.820700 kubelet[2632]: E0120 02:43:38.788432 2632 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:43:38.826325 kubelet[2632]: E0120 02:43:38.826007 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:38.833656 kubelet[2632]: E0120 02:43:38.789370 2632 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:43:38.868378 kubelet[2632]: E0120 02:43:38.852496 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:38.934445 kubelet[2632]: E0120 02:43:38.886849 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:40.308811 kubelet[2632]: E0120 02:43:40.308148 2632 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"localhost\" not found" node="localhost" Jan 20 02:43:40.308811 kubelet[2632]: E0120 02:43:40.308687 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:40.448128 kubelet[2632]: E0120 02:43:40.446792 2632 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:43:40.470740 kubelet[2632]: E0120 02:43:40.468398 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:42.123745 kubelet[2632]: I0120 02:43:42.122840 2632 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:43:42.986411 kubelet[2632]: E0120 02:43:42.979666 2632 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:43:42.996378 kubelet[2632]: E0120 02:43:42.988085 2632 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:43:42.997754 kubelet[2632]: E0120 02:43:42.997732 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:43.001453 kubelet[2632]: E0120 02:43:42.994117 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:46.271610 kubelet[2632]: E0120 02:43:46.267453 2632 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 
02:43:47.705351 kubelet[2632]: E0120 02:43:47.702167 2632 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:43:47.705351 kubelet[2632]: E0120 02:43:47.702648 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:48.614475 kubelet[2632]: E0120 02:43:48.609622 2632 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Jan 20 02:43:50.363482 kubelet[2632]: I0120 02:43:50.363436 2632 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 02:43:50.391350 kubelet[2632]: I0120 02:43:50.390861 2632 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 02:43:50.460633 kubelet[2632]: E0120 02:43:50.391789 2632 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c5043ad77b2c8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:43:25.253096136 +0000 UTC m=+5.884650506,LastTimestamp:2026-01-20 02:43:25.253096136 +0000 UTC m=+5.884650506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:43:50.649495 kubelet[2632]: E0120 02:43:50.647112 2632 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 20 02:43:50.649495 kubelet[2632]: I0120 02:43:50.647447 2632 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 02:43:50.678918 kubelet[2632]: E0120 02:43:50.678874 2632 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 20 02:43:50.693861 kubelet[2632]: I0120 02:43:50.691362 2632 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 02:43:50.757189 kubelet[2632]: E0120 02:43:50.755707 2632 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 20 02:43:50.821147 kubelet[2632]: I0120 02:43:50.821073 2632 apiserver.go:52] "Watching apiserver" Jan 20 02:43:50.894145 kubelet[2632]: I0120 02:43:50.894095 2632 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 02:43:51.107415 kubelet[2632]: I0120 02:43:51.106804 2632 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 02:43:51.249730 kubelet[2632]: E0120 02:43:51.245722 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:43:51.320156 kubelet[2632]: E0120 02:43:51.319747 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:01.400674 kubelet[2632]: E0120 02:44:01.399567 2632 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:01.718341 kubelet[2632]: I0120 02:44:01.705929 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=10.70588689 podStartE2EDuration="10.70588689s" podCreationTimestamp="2026-01-20 02:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:43:56.399836984 +0000 UTC m=+37.031391324" watchObservedRunningTime="2026-01-20 02:44:01.70588689 +0000 UTC m=+42.337441230" Jan 20 02:44:02.243892 kubelet[2632]: I0120 02:44:02.236720 2632 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 02:44:02.328502 systemd[1]: Reload requested from client PID 2929 ('systemctl') (unit session-8.scope)... Jan 20 02:44:02.328738 systemd[1]: Reloading... Jan 20 02:44:02.347349 kubelet[2632]: E0120 02:44:02.336732 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:03.005449 kubelet[2632]: E0120 02:44:03.005412 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:03.500350 zram_generator::config[2975]: No configuration found. Jan 20 02:44:04.055939 kubelet[2632]: E0120 02:44:04.055902 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:05.744713 systemd[1]: Reloading finished in 3408 ms. Jan 20 02:44:06.036748 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 02:44:06.134675 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 02:44:06.135987 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:44:06.136424 systemd[1]: kubelet.service: Consumed 7.750s CPU time, 135.1M memory peak. Jan 20 02:44:06.182884 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:44:16.187652 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:44:16.981629 (kubelet)[3020]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 02:44:19.488432 kubelet[3020]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 02:44:19.488432 kubelet[3020]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 02:44:19.488432 kubelet[3020]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 02:44:19.492048 kubelet[3020]: I0120 02:44:19.488015 3020 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 02:44:19.589606 kubelet[3020]: I0120 02:44:19.587632 3020 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 02:44:19.589606 kubelet[3020]: I0120 02:44:19.587758 3020 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 02:44:19.599102 kubelet[3020]: I0120 02:44:19.598952 3020 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 02:44:19.674433 kubelet[3020]: I0120 02:44:19.671165 3020 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 20 02:44:19.819375 kubelet[3020]: I0120 02:44:19.811780 3020 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 02:44:19.935611 kubelet[3020]: I0120 02:44:19.935555 3020 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 02:44:20.140440 kubelet[3020]: I0120 02:44:20.136959 3020 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 02:44:20.140440 kubelet[3020]: I0120 02:44:20.139572 3020 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 02:44:20.140440 kubelet[3020]: I0120 02:44:20.139632 3020 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 02:44:20.140440 kubelet[3020]: I0120 02:44:20.140159 3020 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 02:44:20.141718 
kubelet[3020]: I0120 02:44:20.140173 3020 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 02:44:20.141718 kubelet[3020]: I0120 02:44:20.140528 3020 state_mem.go:36] "Initialized new in-memory state store" Jan 20 02:44:20.147465 kubelet[3020]: I0120 02:44:20.141950 3020 kubelet.go:480] "Attempting to sync node with API server" Jan 20 02:44:20.147465 kubelet[3020]: I0120 02:44:20.142169 3020 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 02:44:20.147465 kubelet[3020]: I0120 02:44:20.142354 3020 kubelet.go:386] "Adding apiserver pod source" Jan 20 02:44:20.147465 kubelet[3020]: I0120 02:44:20.142377 3020 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 02:44:20.241036 kubelet[3020]: I0120 02:44:20.239554 3020 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 02:44:20.267574 kubelet[3020]: I0120 02:44:20.245355 3020 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 02:44:20.505916 kubelet[3020]: I0120 02:44:20.496841 3020 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 02:44:20.505916 kubelet[3020]: I0120 02:44:20.497019 3020 server.go:1289] "Started kubelet" Jan 20 02:44:20.520912 kubelet[3020]: I0120 02:44:20.513431 3020 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 02:44:20.520912 kubelet[3020]: I0120 02:44:20.515750 3020 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 02:44:20.520912 kubelet[3020]: I0120 02:44:20.515839 3020 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 02:44:20.527360 kubelet[3020]: I0120 02:44:20.526745 3020 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 02:44:20.938590 
kubelet[3020]: I0120 02:44:20.912374 3020 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 02:44:21.274595 kubelet[3020]: I0120 02:44:21.233178 3020 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 02:44:21.283526 kubelet[3020]: I0120 02:44:21.277098 3020 apiserver.go:52] "Watching apiserver" Jan 20 02:44:21.339105 kubelet[3020]: I0120 02:44:21.338814 3020 reconciler.go:26] "Reconciler: start to sync state" Jan 20 02:44:21.396571 kubelet[3020]: I0120 02:44:21.396509 3020 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 02:44:22.048506 kubelet[3020]: I0120 02:44:22.045910 3020 server.go:317] "Adding debug handlers to kubelet server" Jan 20 02:44:22.243976 kubelet[3020]: I0120 02:44:22.214995 3020 factory.go:223] Registration of the containerd container factory successfully Jan 20 02:44:22.243976 kubelet[3020]: I0120 02:44:22.215108 3020 factory.go:223] Registration of the systemd container factory successfully Jan 20 02:44:22.243976 kubelet[3020]: I0120 02:44:22.215864 3020 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 02:44:22.509189 kubelet[3020]: E0120 02:44:22.509047 3020 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 02:44:24.086556 kubelet[3020]: I0120 02:44:24.086368 3020 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 20 02:44:26.191871 kubelet[3020]: I0120 02:44:26.132986 3020 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 20 02:44:26.224799 kubelet[3020]: I0120 02:44:26.194699 3020 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 02:44:26.224799 kubelet[3020]: I0120 02:44:26.195108 3020 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 02:44:26.224799 kubelet[3020]: I0120 02:44:26.218788 3020 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 02:44:26.224799 kubelet[3020]: E0120 02:44:26.219751 3020 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 02:44:26.909908 kubelet[3020]: E0120 02:44:26.881562 3020 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 02:44:27.137605 kubelet[3020]: E0120 02:44:27.096817 3020 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 02:44:27.797636 kubelet[3020]: E0120 02:44:27.795403 3020 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 02:44:29.245122 kubelet[3020]: E0120 02:44:29.222935 3020 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 02:44:30.885586 kubelet[3020]: E0120 02:44:30.885514 3020 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 02:44:33.544192 kubelet[3020]: I0120 02:44:33.542139 3020 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 02:44:33.579712 kubelet[3020]: I0120 02:44:33.578699 3020 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 02:44:33.579712 kubelet[3020]: I0120 02:44:33.578901 3020 
state_mem.go:36] "Initialized new in-memory state store" Jan 20 02:44:33.593171 kubelet[3020]: I0120 02:44:33.582948 3020 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 02:44:33.593171 kubelet[3020]: I0120 02:44:33.582975 3020 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 02:44:33.593171 kubelet[3020]: I0120 02:44:33.584112 3020 policy_none.go:49] "None policy: Start" Jan 20 02:44:33.593171 kubelet[3020]: I0120 02:44:33.584297 3020 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 02:44:33.593171 kubelet[3020]: I0120 02:44:33.584322 3020 state_mem.go:35] "Initializing new in-memory state store" Jan 20 02:44:33.596697 kubelet[3020]: I0120 02:44:33.596092 3020 state_mem.go:75] "Updated machine memory state" Jan 20 02:44:33.684777 kubelet[3020]: E0120 02:44:33.680910 3020 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 02:44:33.695621 kubelet[3020]: I0120 02:44:33.694484 3020 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 02:44:33.695621 kubelet[3020]: I0120 02:44:33.694722 3020 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 02:44:33.704317 kubelet[3020]: I0120 02:44:33.698585 3020 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 02:44:33.719397 kubelet[3020]: E0120 02:44:33.715451 3020 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 02:44:33.719397 kubelet[3020]: I0120 02:44:33.718139 3020 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 02:44:33.726645 containerd[1621]: time="2026-01-20T02:44:33.726419551Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 20 02:44:33.733508 kubelet[3020]: I0120 02:44:33.731482 3020 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 02:44:33.918858 kubelet[3020]: I0120 02:44:33.917937 3020 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:44:34.001579 kubelet[3020]: I0120 02:44:33.999539 3020 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 02:44:34.001579 kubelet[3020]: I0120 02:44:33.999653 3020 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 02:44:34.166623 kubelet[3020]: I0120 02:44:34.160533 3020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbeb6911-a068-434e-b56e-5ca4b7de858a-xtables-lock\") pod \"kube-proxy-b5s6w\" (UID: \"fbeb6911-a068-434e-b56e-5ca4b7de858a\") " pod="kube-system/kube-proxy-b5s6w" Jan 20 02:44:34.166623 kubelet[3020]: I0120 02:44:34.160607 3020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbeb6911-a068-434e-b56e-5ca4b7de858a-lib-modules\") pod \"kube-proxy-b5s6w\" (UID: \"fbeb6911-a068-434e-b56e-5ca4b7de858a\") " pod="kube-system/kube-proxy-b5s6w" Jan 20 02:44:34.166623 kubelet[3020]: I0120 02:44:34.160720 3020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxnqv\" (UniqueName: \"kubernetes.io/projected/fbeb6911-a068-434e-b56e-5ca4b7de858a-kube-api-access-hxnqv\") pod \"kube-proxy-b5s6w\" (UID: \"fbeb6911-a068-434e-b56e-5ca4b7de858a\") " pod="kube-system/kube-proxy-b5s6w" Jan 20 02:44:34.166623 kubelet[3020]: I0120 02:44:34.160745 3020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fbeb6911-a068-434e-b56e-5ca4b7de858a-kube-proxy\") pod 
\"kube-proxy-b5s6w\" (UID: \"fbeb6911-a068-434e-b56e-5ca4b7de858a\") " pod="kube-system/kube-proxy-b5s6w" Jan 20 02:44:34.178095 kubelet[3020]: I0120 02:44:34.177171 3020 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 02:44:34.226468 kubelet[3020]: I0120 02:44:34.225666 3020 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 02:44:34.261809 kubelet[3020]: I0120 02:44:34.261775 3020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e197405f513e0a52a9b18b708e4ceb0d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e197405f513e0a52a9b18b708e4ceb0d\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:44:34.273042 kubelet[3020]: I0120 02:44:34.272946 3020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:44:34.274585 systemd[1]: Created slice kubepods-besteffort-podfbeb6911_a068_434e_b56e_5ca4b7de858a.slice - libcontainer container kubepods-besteffort-podfbeb6911_a068_434e_b56e_5ca4b7de858a.slice. 
Jan 20 02:44:34.322901 kubelet[3020]: I0120 02:44:34.281189 3020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:44:34.322901 kubelet[3020]: I0120 02:44:34.319144 3020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e197405f513e0a52a9b18b708e4ceb0d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e197405f513e0a52a9b18b708e4ceb0d\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:44:34.322901 kubelet[3020]: I0120 02:44:34.319183 3020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:44:34.322901 kubelet[3020]: I0120 02:44:34.319392 3020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:44:34.322901 kubelet[3020]: I0120 02:44:34.319436 3020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 20 02:44:34.324077 kubelet[3020]: I0120 02:44:34.319465 3020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 20 02:44:34.324077 kubelet[3020]: I0120 02:44:34.319514 3020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e197405f513e0a52a9b18b708e4ceb0d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e197405f513e0a52a9b18b708e4ceb0d\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:44:34.515100 kubelet[3020]: E0120 02:44:34.496137 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:34.515100 kubelet[3020]: E0120 02:44:34.497571 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:34.586494 kubelet[3020]: I0120 02:44:34.586423 3020 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.585188918 podStartE2EDuration="585.188918ms" podCreationTimestamp="2026-01-20 02:44:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:44:34.58516478 +0000 UTC m=+16.795173673" watchObservedRunningTime="2026-01-20 02:44:34.585188918 +0000 UTC m=+16.795197821" Jan 20 02:44:34.658328 kubelet[3020]: E0120 02:44:34.638684 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:34.658328 kubelet[3020]: E0120 02:44:34.640947 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:34.668970 containerd[1621]: time="2026-01-20T02:44:34.668820167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b5s6w,Uid:fbeb6911-a068-434e-b56e-5ca4b7de858a,Namespace:kube-system,Attempt:0,}" Jan 20 02:44:34.678310 kubelet[3020]: E0120 02:44:34.676760 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:34.678310 kubelet[3020]: E0120 02:44:34.677881 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:35.211424 containerd[1621]: time="2026-01-20T02:44:35.211145736Z" level=info msg="connecting to shim 4e8dc7fa62e266e3c499c451584c90a0e9b7a86fa5b0aa252fac057fdc54cb55" address="unix:///run/containerd/s/d2477d7d1420d12cd392d9af8441cb03752fc885aa18be9169f73d192b2cc5d0" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:44:35.742780 kubelet[3020]: E0120 02:44:35.742740 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:36.183933 systemd[1]: Started cri-containerd-4e8dc7fa62e266e3c499c451584c90a0e9b7a86fa5b0aa252fac057fdc54cb55.scope - libcontainer container 4e8dc7fa62e266e3c499c451584c90a0e9b7a86fa5b0aa252fac057fdc54cb55. 
Jan 20 02:44:36.624681 containerd[1621]: time="2026-01-20T02:44:36.624485273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b5s6w,Uid:fbeb6911-a068-434e-b56e-5ca4b7de858a,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e8dc7fa62e266e3c499c451584c90a0e9b7a86fa5b0aa252fac057fdc54cb55\"" Jan 20 02:44:36.631981 kubelet[3020]: E0120 02:44:36.631945 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:36.681042 containerd[1621]: time="2026-01-20T02:44:36.678841783Z" level=info msg="CreateContainer within sandbox \"4e8dc7fa62e266e3c499c451584c90a0e9b7a86fa5b0aa252fac057fdc54cb55\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 02:44:36.766625 kubelet[3020]: E0120 02:44:36.758074 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:36.846743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4060765814.mount: Deactivated successfully. 
Jan 20 02:44:36.884096 containerd[1621]: time="2026-01-20T02:44:36.881817878Z" level=info msg="Container 1a3c1f95ff7516a35af634f56c356f800059d6f3ca1968ad415c28b31289e5bc: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:44:36.937711 containerd[1621]: time="2026-01-20T02:44:36.932335174Z" level=info msg="CreateContainer within sandbox \"4e8dc7fa62e266e3c499c451584c90a0e9b7a86fa5b0aa252fac057fdc54cb55\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1a3c1f95ff7516a35af634f56c356f800059d6f3ca1968ad415c28b31289e5bc\"" Jan 20 02:44:36.948800 containerd[1621]: time="2026-01-20T02:44:36.948740385Z" level=info msg="StartContainer for \"1a3c1f95ff7516a35af634f56c356f800059d6f3ca1968ad415c28b31289e5bc\"" Jan 20 02:44:36.966064 containerd[1621]: time="2026-01-20T02:44:36.965765243Z" level=info msg="connecting to shim 1a3c1f95ff7516a35af634f56c356f800059d6f3ca1968ad415c28b31289e5bc" address="unix:///run/containerd/s/d2477d7d1420d12cd392d9af8441cb03752fc885aa18be9169f73d192b2cc5d0" protocol=ttrpc version=3 Jan 20 02:44:37.201655 systemd[1]: Started cri-containerd-1a3c1f95ff7516a35af634f56c356f800059d6f3ca1968ad415c28b31289e5bc.scope - libcontainer container 1a3c1f95ff7516a35af634f56c356f800059d6f3ca1968ad415c28b31289e5bc. 
Jan 20 02:44:38.239041 kubelet[3020]: I0120 02:44:38.238941 3020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/9b56b98f-9e1f-4f03-8afb-9c522029c146-cni-plugin\") pod \"kube-flannel-ds-8jbn6\" (UID: \"9b56b98f-9e1f-4f03-8afb-9c522029c146\") " pod="kube-flannel/kube-flannel-ds-8jbn6" Jan 20 02:44:38.240139 kubelet[3020]: I0120 02:44:38.239049 3020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/9b56b98f-9e1f-4f03-8afb-9c522029c146-flannel-cfg\") pod \"kube-flannel-ds-8jbn6\" (UID: \"9b56b98f-9e1f-4f03-8afb-9c522029c146\") " pod="kube-flannel/kube-flannel-ds-8jbn6" Jan 20 02:44:38.240139 kubelet[3020]: I0120 02:44:38.239079 3020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9b56b98f-9e1f-4f03-8afb-9c522029c146-run\") pod \"kube-flannel-ds-8jbn6\" (UID: \"9b56b98f-9e1f-4f03-8afb-9c522029c146\") " pod="kube-flannel/kube-flannel-ds-8jbn6" Jan 20 02:44:38.240139 kubelet[3020]: I0120 02:44:38.239100 3020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b56b98f-9e1f-4f03-8afb-9c522029c146-xtables-lock\") pod \"kube-flannel-ds-8jbn6\" (UID: \"9b56b98f-9e1f-4f03-8afb-9c522029c146\") " pod="kube-flannel/kube-flannel-ds-8jbn6" Jan 20 02:44:38.240139 kubelet[3020]: I0120 02:44:38.239133 3020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/9b56b98f-9e1f-4f03-8afb-9c522029c146-cni\") pod \"kube-flannel-ds-8jbn6\" (UID: \"9b56b98f-9e1f-4f03-8afb-9c522029c146\") " pod="kube-flannel/kube-flannel-ds-8jbn6" Jan 20 02:44:38.240139 kubelet[3020]: I0120 02:44:38.239155 3020 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f5cb\" (UniqueName: \"kubernetes.io/projected/9b56b98f-9e1f-4f03-8afb-9c522029c146-kube-api-access-5f5cb\") pod \"kube-flannel-ds-8jbn6\" (UID: \"9b56b98f-9e1f-4f03-8afb-9c522029c146\") " pod="kube-flannel/kube-flannel-ds-8jbn6" Jan 20 02:44:38.279079 systemd[1]: Created slice kubepods-burstable-pod9b56b98f_9e1f_4f03_8afb_9c522029c146.slice - libcontainer container kubepods-burstable-pod9b56b98f_9e1f_4f03_8afb_9c522029c146.slice. Jan 20 02:44:38.420343 containerd[1621]: time="2026-01-20T02:44:38.412312486Z" level=info msg="StartContainer for \"1a3c1f95ff7516a35af634f56c356f800059d6f3ca1968ad415c28b31289e5bc\" returns successfully" Jan 20 02:44:38.718738 kubelet[3020]: E0120 02:44:38.705088 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:38.756162 containerd[1621]: time="2026-01-20T02:44:38.742956719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-8jbn6,Uid:9b56b98f-9e1f-4f03-8afb-9c522029c146,Namespace:kube-flannel,Attempt:0,}" Jan 20 02:44:38.909999 kubelet[3020]: E0120 02:44:38.908996 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:39.069081 kubelet[3020]: I0120 02:44:39.054144 3020 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b5s6w" podStartSLOduration=20.052927183 podStartE2EDuration="20.052927183s" podCreationTimestamp="2026-01-20 02:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:44:39.045707429 +0000 UTC m=+21.255716332" watchObservedRunningTime="2026-01-20 02:44:39.052927183 +0000 UTC m=+21.262936066" Jan 20 
02:44:39.117588 containerd[1621]: time="2026-01-20T02:44:39.116983123Z" level=info msg="connecting to shim 3ae252aebbba2af91938f13e0e0697f3bdaf9bf60df2810fba322b22fc9ae9f1" address="unix:///run/containerd/s/13165b7a9541d2e828e633d0fbfbd926829a4fc431cc0dda7f21949fd37cca8a" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:44:39.397286 systemd[1]: Started cri-containerd-3ae252aebbba2af91938f13e0e0697f3bdaf9bf60df2810fba322b22fc9ae9f1.scope - libcontainer container 3ae252aebbba2af91938f13e0e0697f3bdaf9bf60df2810fba322b22fc9ae9f1. Jan 20 02:44:39.585593 sudo[1812]: pam_unix(sudo:session): session closed for user root Jan 20 02:44:39.661123 sshd[1811]: Connection closed by 10.0.0.1 port 35328 Jan 20 02:44:39.659063 sshd-session[1807]: pam_unix(sshd:session): session closed for user core Jan 20 02:44:39.694623 systemd[1]: sshd@6-10.0.0.117:22-10.0.0.1:35328.service: Deactivated successfully. Jan 20 02:44:39.737093 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 02:44:39.758535 systemd[1]: session-8.scope: Consumed 23.883s CPU time, 217.7M memory peak. Jan 20 02:44:39.781294 systemd-logind[1588]: Session 8 logged out. Waiting for processes to exit. Jan 20 02:44:39.803002 systemd-logind[1588]: Removed session 8. 
Jan 20 02:44:40.022637 containerd[1621]: time="2026-01-20T02:44:40.011318717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-8jbn6,Uid:9b56b98f-9e1f-4f03-8afb-9c522029c146,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"3ae252aebbba2af91938f13e0e0697f3bdaf9bf60df2810fba322b22fc9ae9f1\"" Jan 20 02:44:40.023666 kubelet[3020]: E0120 02:44:40.016099 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:40.041638 kubelet[3020]: E0120 02:44:40.038163 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:40.050013 containerd[1621]: time="2026-01-20T02:44:40.049969605Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Jan 20 02:44:43.691356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3570212192.mount: Deactivated successfully. 
Jan 20 02:44:44.614779 kubelet[3020]: E0120 02:44:44.603648 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:44.669173 kubelet[3020]: E0120 02:44:44.634051 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:44.751108 containerd[1621]: time="2026-01-20T02:44:44.742042930Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:44:44.820492 containerd[1621]: time="2026-01-20T02:44:44.764839069Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=3646925" Jan 20 02:44:44.820492 containerd[1621]: time="2026-01-20T02:44:44.771990916Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:44:44.829861 kubelet[3020]: E0120 02:44:44.829156 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:44.834137 containerd[1621]: time="2026-01-20T02:44:44.832818136Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:44:44.834733 containerd[1621]: time="2026-01-20T02:44:44.834632457Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag 
\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 4.781618302s" Jan 20 02:44:44.834733 containerd[1621]: time="2026-01-20T02:44:44.834721023Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Jan 20 02:44:44.918875 containerd[1621]: time="2026-01-20T02:44:44.918736465Z" level=info msg="CreateContainer within sandbox \"3ae252aebbba2af91938f13e0e0697f3bdaf9bf60df2810fba322b22fc9ae9f1\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 20 02:44:45.081148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3345808270.mount: Deactivated successfully. Jan 20 02:44:45.124069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1674252925.mount: Deactivated successfully. 
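The pull completion entry above reports both bytes transferred and elapsed time, so a rough network-side throughput can be derived. Note that containerd's "bytes read" counts compressed registry traffic, not the unpacked image size (reported separately as 4856838):

```python
# Sketch: rough pull throughput for the flannel-cni-plugin image, using the
# "bytes read" and elapsed time reported in the log above. "bytes read" is
# compressed registry traffic, so this is a network-side estimate only.

bytes_read = 3_646_925   # from "active requests=0, bytes read=3646925"
elapsed_s = 4.781618302  # from "in 4.781618302s"
rate_kib_s = bytes_read / elapsed_s / 1024
print(f"{rate_kib_s:.0f} KiB/s")
```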
Jan 20 02:44:45.146397 containerd[1621]: time="2026-01-20T02:44:45.145413779Z" level=info msg="Container de05d29e16cd7078142aaa897b5d4a3b3f3c12d5edd030dbfd8438fa15b4c0d3: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:44:45.232622 containerd[1621]: time="2026-01-20T02:44:45.232045244Z" level=info msg="CreateContainer within sandbox \"3ae252aebbba2af91938f13e0e0697f3bdaf9bf60df2810fba322b22fc9ae9f1\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"de05d29e16cd7078142aaa897b5d4a3b3f3c12d5edd030dbfd8438fa15b4c0d3\"" Jan 20 02:44:45.254329 containerd[1621]: time="2026-01-20T02:44:45.253716631Z" level=info msg="StartContainer for \"de05d29e16cd7078142aaa897b5d4a3b3f3c12d5edd030dbfd8438fa15b4c0d3\"" Jan 20 02:44:45.291398 containerd[1621]: time="2026-01-20T02:44:45.291159534Z" level=info msg="connecting to shim de05d29e16cd7078142aaa897b5d4a3b3f3c12d5edd030dbfd8438fa15b4c0d3" address="unix:///run/containerd/s/13165b7a9541d2e828e633d0fbfbd926829a4fc431cc0dda7f21949fd37cca8a" protocol=ttrpc version=3 Jan 20 02:44:45.362363 kubelet[3020]: E0120 02:44:45.361737 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:45.372994 kubelet[3020]: E0120 02:44:45.372888 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:45.644648 systemd[1]: Started cri-containerd-de05d29e16cd7078142aaa897b5d4a3b3f3c12d5edd030dbfd8438fa15b4c0d3.scope - libcontainer container de05d29e16cd7078142aaa897b5d4a3b3f3c12d5edd030dbfd8438fa15b4c0d3. Jan 20 02:44:46.136942 systemd[1]: cri-containerd-de05d29e16cd7078142aaa897b5d4a3b3f3c12d5edd030dbfd8438fa15b4c0d3.scope: Deactivated successfully. 
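The container exit events in this log carry their timestamp as raw epoch fields (e.g. `exited_at:{seconds:1768877086 nanos:151823476}` just above). A quick conversion shows it lines up with the journald prefix on the same entry:

```python
# Sketch: convert containerd's exited_at epoch fields (from the event above)
# back to a UTC wall-clock time matching the journald line prefix.
from datetime import datetime, timezone

exited_at = {"seconds": 1768877086, "nanos": 151823476}
ts = datetime.fromtimestamp(exited_at["seconds"], tz=timezone.utc)
ts = ts.replace(microsecond=exited_at["nanos"] // 1000)
print(ts.isoformat())  # 2026-01-20T02:44:46.151823+00:00
```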
Jan 20 02:44:46.148307 containerd[1621]: time="2026-01-20T02:44:46.148146669Z" level=info msg="StartContainer for \"de05d29e16cd7078142aaa897b5d4a3b3f3c12d5edd030dbfd8438fa15b4c0d3\" returns successfully" Jan 20 02:44:46.164408 containerd[1621]: time="2026-01-20T02:44:46.157657476Z" level=info msg="received container exit event container_id:\"de05d29e16cd7078142aaa897b5d4a3b3f3c12d5edd030dbfd8438fa15b4c0d3\" id:\"de05d29e16cd7078142aaa897b5d4a3b3f3c12d5edd030dbfd8438fa15b4c0d3\" pid:3401 exited_at:{seconds:1768877086 nanos:151823476}" Jan 20 02:44:46.433342 kubelet[3020]: E0120 02:44:46.428082 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:46.817991 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de05d29e16cd7078142aaa897b5d4a3b3f3c12d5edd030dbfd8438fa15b4c0d3-rootfs.mount: Deactivated successfully. Jan 20 02:44:47.477987 kubelet[3020]: E0120 02:44:47.477934 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:44:47.490983 containerd[1621]: time="2026-01-20T02:44:47.489961315Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Jan 20 02:45:02.686359 containerd[1621]: time="2026-01-20T02:45:02.680374047Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:45:02.715026 containerd[1621]: time="2026-01-20T02:45:02.701333427Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29350893" Jan 20 02:45:02.715026 containerd[1621]: time="2026-01-20T02:45:02.708072043Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 20 02:45:02.744893 containerd[1621]: time="2026-01-20T02:45:02.744735173Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:45:02.754182 containerd[1621]: time="2026-01-20T02:45:02.748780245Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 15.258766621s" Jan 20 02:45:02.754182 containerd[1621]: time="2026-01-20T02:45:02.748818946Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Jan 20 02:45:02.788380 containerd[1621]: time="2026-01-20T02:45:02.784083304Z" level=info msg="CreateContainer within sandbox \"3ae252aebbba2af91938f13e0e0697f3bdaf9bf60df2810fba322b22fc9ae9f1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 02:45:02.886357 containerd[1621]: time="2026-01-20T02:45:02.885507794Z" level=info msg="Container 75e85aeb4c4d3f1657705072147157295241ad0f665cd12b4fe81be88f60f652: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:45:02.934569 containerd[1621]: time="2026-01-20T02:45:02.933840920Z" level=info msg="CreateContainer within sandbox \"3ae252aebbba2af91938f13e0e0697f3bdaf9bf60df2810fba322b22fc9ae9f1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"75e85aeb4c4d3f1657705072147157295241ad0f665cd12b4fe81be88f60f652\"" Jan 20 02:45:02.941114 containerd[1621]: time="2026-01-20T02:45:02.940880690Z" level=info msg="StartContainer for \"75e85aeb4c4d3f1657705072147157295241ad0f665cd12b4fe81be88f60f652\"" Jan 20 
02:45:02.988265 containerd[1621]: time="2026-01-20T02:45:02.986797130Z" level=info msg="connecting to shim 75e85aeb4c4d3f1657705072147157295241ad0f665cd12b4fe81be88f60f652" address="unix:///run/containerd/s/13165b7a9541d2e828e633d0fbfbd926829a4fc431cc0dda7f21949fd37cca8a" protocol=ttrpc version=3 Jan 20 02:45:03.311948 systemd[1]: Started cri-containerd-75e85aeb4c4d3f1657705072147157295241ad0f665cd12b4fe81be88f60f652.scope - libcontainer container 75e85aeb4c4d3f1657705072147157295241ad0f665cd12b4fe81be88f60f652. Jan 20 02:45:03.811148 systemd[1]: cri-containerd-75e85aeb4c4d3f1657705072147157295241ad0f665cd12b4fe81be88f60f652.scope: Deactivated successfully. Jan 20 02:45:03.906072 containerd[1621]: time="2026-01-20T02:45:03.900731130Z" level=info msg="received container exit event container_id:\"75e85aeb4c4d3f1657705072147157295241ad0f665cd12b4fe81be88f60f652\" id:\"75e85aeb4c4d3f1657705072147157295241ad0f665cd12b4fe81be88f60f652\" pid:3512 exited_at:{seconds:1768877103 nanos:833736810}" Jan 20 02:45:03.922847 containerd[1621]: time="2026-01-20T02:45:03.919180981Z" level=info msg="StartContainer for \"75e85aeb4c4d3f1657705072147157295241ad0f665cd12b4fe81be88f60f652\" returns successfully" Jan 20 02:45:03.946152 kubelet[3020]: I0120 02:45:03.945838 3020 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 02:45:04.689877 kubelet[3020]: I0120 02:45:04.688101 3020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4d72161-4dd0-42ac-a192-0412faafbd17-config-volume\") pod \"coredns-674b8bbfcf-lk4k7\" (UID: \"b4d72161-4dd0-42ac-a192-0412faafbd17\") " pod="kube-system/coredns-674b8bbfcf-lk4k7" Jan 20 02:45:04.689877 kubelet[3020]: I0120 02:45:04.688328 3020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9kf8\" (UniqueName: 
\"kubernetes.io/projected/b4d72161-4dd0-42ac-a192-0412faafbd17-kube-api-access-b9kf8\") pod \"coredns-674b8bbfcf-lk4k7\" (UID: \"b4d72161-4dd0-42ac-a192-0412faafbd17\") " pod="kube-system/coredns-674b8bbfcf-lk4k7" Jan 20 02:45:04.729307 kubelet[3020]: E0120 02:45:04.727130 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:45:04.909534 kubelet[3020]: I0120 02:45:04.890300 3020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4328a70-2c28-43d9-a1fb-05ef64cfaba8-config-volume\") pod \"coredns-674b8bbfcf-tgvsj\" (UID: \"c4328a70-2c28-43d9-a1fb-05ef64cfaba8\") " pod="kube-system/coredns-674b8bbfcf-tgvsj" Jan 20 02:45:04.909534 kubelet[3020]: I0120 02:45:04.890518 3020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rmbs\" (UniqueName: \"kubernetes.io/projected/c4328a70-2c28-43d9-a1fb-05ef64cfaba8-kube-api-access-8rmbs\") pod \"coredns-674b8bbfcf-tgvsj\" (UID: \"c4328a70-2c28-43d9-a1fb-05ef64cfaba8\") " pod="kube-system/coredns-674b8bbfcf-tgvsj" Jan 20 02:45:04.896476 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75e85aeb4c4d3f1657705072147157295241ad0f665cd12b4fe81be88f60f652-rootfs.mount: Deactivated successfully. Jan 20 02:45:04.929901 systemd[1]: Created slice kubepods-burstable-podc4328a70_2c28_43d9_a1fb_05ef64cfaba8.slice - libcontainer container kubepods-burstable-podc4328a70_2c28_43d9_a1fb_05ef64cfaba8.slice. Jan 20 02:45:05.099759 systemd[1]: Created slice kubepods-burstable-podb4d72161_4dd0_42ac_a192_0412faafbd17.slice - libcontainer container kubepods-burstable-podb4d72161_4dd0_42ac_a192_0412faafbd17.slice. 
Jan 20 02:45:05.267846 kubelet[3020]: E0120 02:45:05.247791 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:45:05.302483 containerd[1621]: time="2026-01-20T02:45:05.302437585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lk4k7,Uid:b4d72161-4dd0-42ac-a192-0412faafbd17,Namespace:kube-system,Attempt:0,}" Jan 20 02:45:05.385030 kubelet[3020]: E0120 02:45:05.381388 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:45:05.396124 containerd[1621]: time="2026-01-20T02:45:05.395715729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tgvsj,Uid:c4328a70-2c28-43d9-a1fb-05ef64cfaba8,Namespace:kube-system,Attempt:0,}" Jan 20 02:45:05.815852 kubelet[3020]: E0120 02:45:05.812549 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:45:05.891584 containerd[1621]: time="2026-01-20T02:45:05.891528181Z" level=info msg="CreateContainer within sandbox \"3ae252aebbba2af91938f13e0e0697f3bdaf9bf60df2810fba322b22fc9ae9f1\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 20 02:45:06.058074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount921236107.mount: Deactivated successfully. Jan 20 02:45:06.140850 containerd[1621]: time="2026-01-20T02:45:06.137462148Z" level=info msg="Container d83d5216b383ba48aca3064c71b6fc1975411c59ffe3155d3ce54840ef91f61a: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:45:06.212772 systemd[1]: run-netns-cni\x2d7420353e\x2d4264\x2daa29\x2def6a\x2da08a5bafce4c.mount: Deactivated successfully. 
Jan 20 02:45:06.238686 systemd[1]: run-netns-cni\x2de8b8ea85\x2d8414\x2dc458\x2d0c18\x2d715fa018afa1.mount: Deactivated successfully. Jan 20 02:45:06.287982 containerd[1621]: time="2026-01-20T02:45:06.277585085Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tgvsj,Uid:c4328a70-2c28-43d9-a1fb-05ef64cfaba8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6ecb6b771db4bdd4ed62435566d285ab20f6461ee0fdf36c55ce10020c9ff7b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 02:45:06.303104 containerd[1621]: time="2026-01-20T02:45:06.303045218Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lk4k7,Uid:b4d72161-4dd0-42ac-a192-0412faafbd17,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b43d6d6a2ba237543dee83d0af81aea67afcb1a4a7d7f9c83ff8820d0789171f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 02:45:06.314878 kubelet[3020]: E0120 02:45:06.311883 3020 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6ecb6b771db4bdd4ed62435566d285ab20f6461ee0fdf36c55ce10020c9ff7b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 02:45:06.314878 kubelet[3020]: E0120 02:45:06.312419 3020 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6ecb6b771db4bdd4ed62435566d285ab20f6461ee0fdf36c55ce10020c9ff7b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" 
pod="kube-system/coredns-674b8bbfcf-tgvsj" Jan 20 02:45:06.314878 kubelet[3020]: E0120 02:45:06.312546 3020 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6ecb6b771db4bdd4ed62435566d285ab20f6461ee0fdf36c55ce10020c9ff7b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-tgvsj" Jan 20 02:45:06.316127 containerd[1621]: time="2026-01-20T02:45:06.306452026Z" level=info msg="CreateContainer within sandbox \"3ae252aebbba2af91938f13e0e0697f3bdaf9bf60df2810fba322b22fc9ae9f1\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"d83d5216b383ba48aca3064c71b6fc1975411c59ffe3155d3ce54840ef91f61a\"" Jan 20 02:45:06.316796 kubelet[3020]: E0120 02:45:06.316572 3020 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b43d6d6a2ba237543dee83d0af81aea67afcb1a4a7d7f9c83ff8820d0789171f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 02:45:06.316796 kubelet[3020]: E0120 02:45:06.316787 3020 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b43d6d6a2ba237543dee83d0af81aea67afcb1a4a7d7f9c83ff8820d0789171f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-lk4k7" Jan 20 02:45:06.316900 kubelet[3020]: E0120 02:45:06.316817 3020 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b43d6d6a2ba237543dee83d0af81aea67afcb1a4a7d7f9c83ff8820d0789171f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open 
/run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-lk4k7" Jan 20 02:45:06.317086 kubelet[3020]: E0120 02:45:06.316936 3020 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-lk4k7_kube-system(b4d72161-4dd0-42ac-a192-0412faafbd17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-lk4k7_kube-system(b4d72161-4dd0-42ac-a192-0412faafbd17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b43d6d6a2ba237543dee83d0af81aea67afcb1a4a7d7f9c83ff8820d0789171f\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-lk4k7" podUID="b4d72161-4dd0-42ac-a192-0412faafbd17" Jan 20 02:45:06.321869 kubelet[3020]: E0120 02:45:06.320924 3020 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-tgvsj_kube-system(c4328a70-2c28-43d9-a1fb-05ef64cfaba8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-tgvsj_kube-system(c4328a70-2c28-43d9-a1fb-05ef64cfaba8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6ecb6b771db4bdd4ed62435566d285ab20f6461ee0fdf36c55ce10020c9ff7b\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-tgvsj" podUID="c4328a70-2c28-43d9-a1fb-05ef64cfaba8" Jan 20 02:45:06.329508 containerd[1621]: time="2026-01-20T02:45:06.326930450Z" level=info msg="StartContainer for \"d83d5216b383ba48aca3064c71b6fc1975411c59ffe3155d3ce54840ef91f61a\"" Jan 20 02:45:06.348877 containerd[1621]: time="2026-01-20T02:45:06.348316136Z" level=info msg="connecting to shim d83d5216b383ba48aca3064c71b6fc1975411c59ffe3155d3ce54840ef91f61a" 
address="unix:///run/containerd/s/13165b7a9541d2e828e633d0fbfbd926829a4fc431cc0dda7f21949fd37cca8a" protocol=ttrpc version=3 Jan 20 02:45:06.722731 systemd[1]: Started cri-containerd-d83d5216b383ba48aca3064c71b6fc1975411c59ffe3155d3ce54840ef91f61a.scope - libcontainer container d83d5216b383ba48aca3064c71b6fc1975411c59ffe3155d3ce54840ef91f61a. Jan 20 02:45:07.139850 containerd[1621]: time="2026-01-20T02:45:07.139515581Z" level=info msg="StartContainer for \"d83d5216b383ba48aca3064c71b6fc1975411c59ffe3155d3ce54840ef91f61a\" returns successfully" Jan 20 02:45:07.920387 kubelet[3020]: E0120 02:45:07.920342 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:45:08.655569 systemd-networkd[1517]: flannel.1: Link UP Jan 20 02:45:08.655585 systemd-networkd[1517]: flannel.1: Gained carrier Jan 20 02:45:08.944142 kubelet[3020]: E0120 02:45:08.943565 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:45:09.825832 systemd-networkd[1517]: flannel.1: Gained IPv6LL Jan 20 02:45:20.227110 kubelet[3020]: E0120 02:45:20.227063 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:45:20.245660 containerd[1621]: time="2026-01-20T02:45:20.245615211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tgvsj,Uid:c4328a70-2c28-43d9-a1fb-05ef64cfaba8,Namespace:kube-system,Attempt:0,}" Jan 20 02:45:20.518128 systemd-networkd[1517]: cni0: Link UP Jan 20 02:45:20.518142 systemd-networkd[1517]: cni0: Gained carrier Jan 20 02:45:20.558642 systemd-networkd[1517]: cni0: Lost carrier Jan 20 02:45:20.905032 systemd-networkd[1517]: veth1aec07c9: Link UP Jan 20 02:45:20.943018 
kernel: cni0: port 1(veth1aec07c9) entered blocking state Jan 20 02:45:20.943341 kernel: cni0: port 1(veth1aec07c9) entered disabled state Jan 20 02:45:20.943385 kernel: veth1aec07c9: entered allmulticast mode Jan 20 02:45:20.972584 kernel: veth1aec07c9: entered promiscuous mode Jan 20 02:45:21.134028 kernel: cni0: port 1(veth1aec07c9) entered blocking state Jan 20 02:45:21.134191 kernel: cni0: port 1(veth1aec07c9) entered forwarding state Jan 20 02:45:21.136188 systemd-networkd[1517]: veth1aec07c9: Gained carrier Jan 20 02:45:21.137514 systemd-networkd[1517]: cni0: Gained carrier Jan 20 02:45:21.210101 containerd[1621]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00008a950), "name":"cbr0", "type":"bridge"} Jan 20 02:45:21.210101 containerd[1621]: delegateAdd: netconf sent to delegate plugin: Jan 20 02:45:21.247925 kubelet[3020]: E0120 02:45:21.244048 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:45:21.257904 containerd[1621]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-20T02:45:21.250566425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lk4k7,Uid:b4d72161-4dd0-42ac-a192-0412faafbd17,Namespace:kube-system,Attempt:0,}" 
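The `RunPodSandbox` failures earlier (`loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory`) occur because the flannel CNI plugin reads `/run/flannel/subnet.env`, a file the kube-flannel container only writes once it is running; the sandboxes succeed on retry after `flannel.1` comes up. The file is a plain KEY=VALUE list. A sketch of what the plugin parses, with values inferred from the netconf in this log (192.168.0.0/17 network, 192.168.0.0/24 node subnet, MTU 1450) rather than copied from the node:

```python
# Sketch: parse /run/flannel/subnet.env the way loadFlannelSubnetEnv does.
# The sample values below are inferred from the netconf in the log and are
# illustrative, not the node's actual file contents.

SAMPLE_SUBNET_ENV = """\
FLANNEL_NETWORK=192.168.0.0/17
FLANNEL_SUBNET=192.168.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
"""

def load_subnet_env(text: str) -> dict[str, str]:
    """Parse KEY=VALUE lines into a dict, skipping malformed lines."""
    env = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

env = load_subnet_env(SAMPLE_SUBNET_ENV)
print(env["FLANNEL_MTU"])  # 1450
```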
Jan 20 02:45:21.549498 systemd-networkd[1517]: veth1de327ad: Link UP Jan 20 02:45:21.591022 kernel: cni0: port 2(veth1de327ad) entered blocking state Jan 20 02:45:21.591133 kernel: cni0: port 2(veth1de327ad) entered disabled state Jan 20 02:45:21.591166 kernel: veth1de327ad: entered allmulticast mode Jan 20 02:45:21.611772 kernel: veth1de327ad: entered promiscuous mode Jan 20 02:45:21.737383 systemd-networkd[1517]: cni0: Gained IPv6LL Jan 20 02:45:21.881897 containerd[1621]: time="2026-01-20T02:45:21.879350742Z" level=info msg="connecting to shim 847d0d285e575fb99b850f27873c21c03a28c53c3f00b39f50eb541ece64f2b3" address="unix:///run/containerd/s/13f320b790ceccf4e261c1ec6606bf6741cb7162ba5e52ec97cdc7acc3111b15" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:45:21.902326 kernel: cni0: port 2(veth1de327ad) entered blocking state Jan 20 02:45:21.902481 kernel: cni0: port 2(veth1de327ad) entered forwarding state Jan 20 02:45:21.894879 systemd-networkd[1517]: veth1de327ad: Gained carrier Jan 20 02:45:21.958952 containerd[1621]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a950), "name":"cbr0", "type":"bridge"} Jan 20 02:45:21.958952 containerd[1621]: delegateAdd: netconf sent to delegate plugin: Jan 20 02:45:22.330960 containerd[1621]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-20T02:45:22.330574135Z" level=info msg="connecting to shim 2bb8b16e071511ed202cba63af5b6ebdf215052841b32fe7e89283f16aad4b0a" address="unix:///run/containerd/s/2de3e42cae8f863ba606d5cb86f27953c9a4bb55e241a9906a2c72c7440bd362" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:45:22.516363 systemd[1]: Started cri-containerd-847d0d285e575fb99b850f27873c21c03a28c53c3f00b39f50eb541ece64f2b3.scope - libcontainer container 847d0d285e575fb99b850f27873c21c03a28c53c3f00b39f50eb541ece64f2b3. Jan 20 02:45:22.773448 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:45:22.789589 systemd[1]: Started cri-containerd-2bb8b16e071511ed202cba63af5b6ebdf215052841b32fe7e89283f16aad4b0a.scope - libcontainer container 2bb8b16e071511ed202cba63af5b6ebdf215052841b32fe7e89283f16aad4b0a. 
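The Go-syntax netconf dumps above render the route mask as raw bytes (`net.IPMask{0xff, 0xff, 0x80, 0x0}`), while the JSON sent to the delegate plugin renders the same route as `"dst":"192.168.0.0/17"`. A sketch of the conversion between the two forms:

```python
# Sketch: convert the raw netmask bytes from the Go-format netconf dump above
# into the CIDR prefix length that appears in the delegate JSON.

def prefix_len(mask: bytes) -> int:
    """Count leading one-bits in a contiguous netmask."""
    bits = "".join(f"{b:08b}" for b in mask)
    ones = bits.rstrip("0")
    assert "0" not in ones, "non-contiguous netmask"
    return len(ones)

route_mask = bytes([0xFF, 0xFF, 0x80, 0x00])   # net.IPMask{0xff, 0xff, 0x80, 0x0}
ip = bytes([0xC0, 0xA8, 0x00, 0x00])           # net.IP{0xc0, 0xa8, 0x0, 0x0} = 192.168.0.0
cidr = ".".join(map(str, ip)) + f"/{prefix_len(route_mask)}"
print(cidr)  # 192.168.0.0/17
```

0xff + 0xff + 0x80 gives 8 + 8 + 1 leading one-bits, hence the /17 in both netconf dumps.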
Jan 20 02:45:23.015511 systemd-networkd[1517]: veth1aec07c9: Gained IPv6LL Jan 20 02:45:23.042359 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:45:23.401635 systemd-networkd[1517]: veth1de327ad: Gained IPv6LL Jan 20 02:45:23.910326 containerd[1621]: time="2026-01-20T02:45:23.909405272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tgvsj,Uid:c4328a70-2c28-43d9-a1fb-05ef64cfaba8,Namespace:kube-system,Attempt:0,} returns sandbox id \"847d0d285e575fb99b850f27873c21c03a28c53c3f00b39f50eb541ece64f2b3\"" Jan 20 02:45:23.982985 kubelet[3020]: E0120 02:45:23.966029 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:45:24.122301 containerd[1621]: time="2026-01-20T02:45:24.120085437Z" level=info msg="CreateContainer within sandbox \"847d0d285e575fb99b850f27873c21c03a28c53c3f00b39f50eb541ece64f2b3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 02:45:24.417708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3617713035.mount: Deactivated successfully. 
Jan 20 02:45:24.446072 containerd[1621]: time="2026-01-20T02:45:24.444388327Z" level=info msg="Container bd2f87329de3eeba68b0d6556475e4388233ebfe5c6e99dac54bba66e52e349d: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:45:24.471618 containerd[1621]: time="2026-01-20T02:45:24.471498187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lk4k7,Uid:b4d72161-4dd0-42ac-a192-0412faafbd17,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bb8b16e071511ed202cba63af5b6ebdf215052841b32fe7e89283f16aad4b0a\"" Jan 20 02:45:24.577665 kubelet[3020]: E0120 02:45:24.555382 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:45:24.691020 containerd[1621]: time="2026-01-20T02:45:24.690906673Z" level=info msg="CreateContainer within sandbox \"2bb8b16e071511ed202cba63af5b6ebdf215052841b32fe7e89283f16aad4b0a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 02:45:24.748097 containerd[1621]: time="2026-01-20T02:45:24.746937486Z" level=info msg="CreateContainer within sandbox \"847d0d285e575fb99b850f27873c21c03a28c53c3f00b39f50eb541ece64f2b3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bd2f87329de3eeba68b0d6556475e4388233ebfe5c6e99dac54bba66e52e349d\"" Jan 20 02:45:24.776005 containerd[1621]: time="2026-01-20T02:45:24.773567035Z" level=info msg="StartContainer for \"bd2f87329de3eeba68b0d6556475e4388233ebfe5c6e99dac54bba66e52e349d\"" Jan 20 02:45:24.789348 containerd[1621]: time="2026-01-20T02:45:24.789116929Z" level=info msg="connecting to shim bd2f87329de3eeba68b0d6556475e4388233ebfe5c6e99dac54bba66e52e349d" address="unix:///run/containerd/s/13f320b790ceccf4e261c1ec6606bf6741cb7162ba5e52ec97cdc7acc3111b15" protocol=ttrpc version=3 Jan 20 02:45:24.874050 containerd[1621]: time="2026-01-20T02:45:24.873999816Z" level=info msg="Container 
9546652e1b05242925e45123b8c5401886fd4640fbf97cc24e63edffa092195c: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:45:25.035451 containerd[1621]: time="2026-01-20T02:45:25.034991065Z" level=info msg="CreateContainer within sandbox \"2bb8b16e071511ed202cba63af5b6ebdf215052841b32fe7e89283f16aad4b0a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9546652e1b05242925e45123b8c5401886fd4640fbf97cc24e63edffa092195c\"" Jan 20 02:45:25.067983 containerd[1621]: time="2026-01-20T02:45:25.067531188Z" level=info msg="StartContainer for \"9546652e1b05242925e45123b8c5401886fd4640fbf97cc24e63edffa092195c\"" Jan 20 02:45:25.095973 containerd[1621]: time="2026-01-20T02:45:25.087706625Z" level=info msg="connecting to shim 9546652e1b05242925e45123b8c5401886fd4640fbf97cc24e63edffa092195c" address="unix:///run/containerd/s/2de3e42cae8f863ba606d5cb86f27953c9a4bb55e241a9906a2c72c7440bd362" protocol=ttrpc version=3 Jan 20 02:45:25.100625 systemd[1]: Started cri-containerd-bd2f87329de3eeba68b0d6556475e4388233ebfe5c6e99dac54bba66e52e349d.scope - libcontainer container bd2f87329de3eeba68b0d6556475e4388233ebfe5c6e99dac54bba66e52e349d. Jan 20 02:45:25.391486 systemd[1]: Started cri-containerd-9546652e1b05242925e45123b8c5401886fd4640fbf97cc24e63edffa092195c.scope - libcontainer container 9546652e1b05242925e45123b8c5401886fd4640fbf97cc24e63edffa092195c. 
Jan 20 02:45:25.645088 containerd[1621]: time="2026-01-20T02:45:25.641515967Z" level=info msg="StartContainer for \"bd2f87329de3eeba68b0d6556475e4388233ebfe5c6e99dac54bba66e52e349d\" returns successfully"
Jan 20 02:45:26.131510 containerd[1621]: time="2026-01-20T02:45:26.131456882Z" level=info msg="StartContainer for \"9546652e1b05242925e45123b8c5401886fd4640fbf97cc24e63edffa092195c\" returns successfully"
Jan 20 02:45:26.785633 kubelet[3020]: E0120 02:45:26.784930 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:45:26.873073 kubelet[3020]: E0120 02:45:26.870360 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:45:26.935365 kubelet[3020]: I0120 02:45:26.934503 3020 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-8jbn6" podStartSLOduration=27.232826517 podStartE2EDuration="49.934481945s" podCreationTimestamp="2026-01-20 02:44:37 +0000 UTC" firstStartedPulling="2026-01-20 02:44:40.048755818 +0000 UTC m=+22.258764701" lastFinishedPulling="2026-01-20 02:45:02.750411246 +0000 UTC m=+44.960420129" observedRunningTime="2026-01-20 02:45:08.066061302 +0000 UTC m=+50.276070205" watchObservedRunningTime="2026-01-20 02:45:26.934481945 +0000 UTC m=+69.144490828"
Jan 20 02:45:27.051065 kubelet[3020]: I0120 02:45:27.050307 3020 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tgvsj" podStartSLOduration=68.044565207 podStartE2EDuration="1m8.044565207s" podCreationTimestamp="2026-01-20 02:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:45:26.953491218 +0000 UTC m=+69.163500121" watchObservedRunningTime="2026-01-20 02:45:27.044565207 +0000 UTC m=+69.254574119"
Jan 20 02:45:27.878421 kubelet[3020]: E0120 02:45:27.877593 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:45:27.881912 kubelet[3020]: E0120 02:45:27.880577 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:45:28.912502 kubelet[3020]: E0120 02:45:28.908542 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:45:28.912502 kubelet[3020]: E0120 02:45:28.910049 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:45:48.244062 kubelet[3020]: E0120 02:45:48.238526 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:45:53.792651 kubelet[3020]: E0120 02:45:53.792595 3020 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.454s"
Jan 20 02:45:53.824022 kubelet[3020]: E0120 02:45:53.823905 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:45:55.226339 kubelet[3020]: E0120 02:45:55.223925 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:45:57.783403 kubelet[3020]: E0120 02:45:57.782465 3020 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.269s"
Jan 20 02:46:10.289524 kubelet[3020]: E0120 02:46:10.289479 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:46:16.225858 kubelet[3020]: E0120 02:46:16.221337 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:46:35.234614 kubelet[3020]: E0120 02:46:35.228999 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:46:45.243759 kubelet[3020]: E0120 02:46:45.230919 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:46:53.225341 kubelet[3020]: E0120 02:46:53.224696 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:47:03.225087 kubelet[3020]: E0120 02:47:03.223638 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:47:04.263756 kubelet[3020]: E0120 02:47:04.263608 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:47:22.234347 kubelet[3020]: E0120 02:47:22.228338 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:47:29.229331 kubelet[3020]: E0120 02:47:29.227711 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:47:52.247122 kubelet[3020]: E0120 02:47:52.246935 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:48:06.267732 kubelet[3020]: E0120 02:48:06.257423 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:48:09.236407 kubelet[3020]: E0120 02:48:09.233752 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:48:12.256415 kubelet[3020]: E0120 02:48:12.234651 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:48:13.257552 kubelet[3020]: E0120 02:48:13.257441 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:48:28.518772 containerd[1621]: time="2026-01-20T02:48:28.516512697Z" level=info msg="container event discarded" container=9b69fcdc1cb8a863a40c52a5c1a0c30ca8e2ba20ff79707f0916207d8b50f019 type=CONTAINER_CREATED_EVENT
Jan 20 02:48:28.518772 containerd[1621]: time="2026-01-20T02:48:28.516647829Z" level=info msg="container event discarded" container=9b69fcdc1cb8a863a40c52a5c1a0c30ca8e2ba20ff79707f0916207d8b50f019 type=CONTAINER_STARTED_EVENT
Jan 20 02:48:29.103349 containerd[1621]: time="2026-01-20T02:48:29.102748313Z" level=info msg="container event discarded" container=3649210a3034f55cf0f1e772a6aa0332454c5d933f5bf1747016c19aa026fa51 type=CONTAINER_CREATED_EVENT
Jan 20 02:48:29.103349 containerd[1621]: time="2026-01-20T02:48:29.102811841Z" level=info msg="container event discarded" container=3649210a3034f55cf0f1e772a6aa0332454c5d933f5bf1747016c19aa026fa51 type=CONTAINER_STARTED_EVENT
Jan 20 02:48:29.194431 containerd[1621]: time="2026-01-20T02:48:29.194305077Z" level=info msg="container event discarded" container=b2a933e558952275d9b2591f4c2ac26373170f77c13e2503dd7d8c58b1b5e35b type=CONTAINER_CREATED_EVENT
Jan 20 02:48:29.194431 containerd[1621]: time="2026-01-20T02:48:29.194368907Z" level=info msg="container event discarded" container=b2a933e558952275d9b2591f4c2ac26373170f77c13e2503dd7d8c58b1b5e35b type=CONTAINER_STARTED_EVENT
Jan 20 02:48:29.288099 containerd[1621]: time="2026-01-20T02:48:29.288012084Z" level=info msg="container event discarded" container=7a79acb5a1217aa31986eab88430ad767f68067658efd4a0901146ca8b7c8c0f type=CONTAINER_CREATED_EVENT
Jan 20 02:48:30.111631 containerd[1621]: time="2026-01-20T02:48:30.104171281Z" level=info msg="container event discarded" container=b4f960e68b266e186318a554795c996b617bb3e7120f58876d3e73c592e3296c type=CONTAINER_CREATED_EVENT
Jan 20 02:48:30.181847 containerd[1621]: time="2026-01-20T02:48:30.176811369Z" level=info msg="container event discarded" container=7e4cd812afd37042ebc05ea8caf70bee8b22cd169e9421e10b00721a5a575ae4 type=CONTAINER_CREATED_EVENT
Jan 20 02:48:34.306704 kubelet[3020]: E0120 02:48:34.289593 3020 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.058s"
Jan 20 02:48:34.996842 containerd[1621]: time="2026-01-20T02:48:34.651183227Z" level=info msg="container event discarded" container=7a79acb5a1217aa31986eab88430ad767f68067658efd4a0901146ca8b7c8c0f type=CONTAINER_STARTED_EVENT
Jan 20 02:48:36.349813 containerd[1621]: time="2026-01-20T02:48:36.348691695Z" level=info msg="container event discarded" container=b4f960e68b266e186318a554795c996b617bb3e7120f58876d3e73c592e3296c type=CONTAINER_STARTED_EVENT
Jan 20 02:48:36.803957 containerd[1621]: time="2026-01-20T02:48:36.381106579Z" level=info msg="container event discarded" container=7e4cd812afd37042ebc05ea8caf70bee8b22cd169e9421e10b00721a5a575ae4 type=CONTAINER_STARTED_EVENT
Jan 20 02:48:37.877948 kubelet[3020]: E0120 02:48:37.847390 3020 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.557s"
Jan 20 02:48:39.521865 kubelet[3020]: E0120 02:48:39.492821 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:48:39.617848 kubelet[3020]: E0120 02:48:39.617505 3020 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.72s"
Jan 20 02:48:39.793369 kubelet[3020]: E0120 02:48:39.784874 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:48:56.245508 kubelet[3020]: E0120 02:48:56.237659 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:49:25.227889 kubelet[3020]: E0120 02:49:25.220677 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:49:25.227889 kubelet[3020]: E0120 02:49:25.221693 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:49:29.230960 kubelet[3020]: E0120 02:49:29.229946 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:49:30.229403 kubelet[3020]: E0120 02:49:30.229168 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:49:36.639415 containerd[1621]: time="2026-01-20T02:49:36.639094023Z" level=info msg="container event discarded" container=4e8dc7fa62e266e3c499c451584c90a0e9b7a86fa5b0aa252fac057fdc54cb55 type=CONTAINER_CREATED_EVENT
Jan 20 02:49:36.639415 containerd[1621]: time="2026-01-20T02:49:36.639163301Z" level=info msg="container event discarded" container=4e8dc7fa62e266e3c499c451584c90a0e9b7a86fa5b0aa252fac057fdc54cb55 type=CONTAINER_STARTED_EVENT
Jan 20 02:49:36.948698 containerd[1621]: time="2026-01-20T02:49:36.947934750Z" level=info msg="container event discarded" container=1a3c1f95ff7516a35af634f56c356f800059d6f3ca1968ad415c28b31289e5bc type=CONTAINER_CREATED_EVENT
Jan 20 02:49:38.162164 containerd[1621]: time="2026-01-20T02:49:38.162089824Z" level=info msg="container event discarded" container=1a3c1f95ff7516a35af634f56c356f800059d6f3ca1968ad415c28b31289e5bc type=CONTAINER_STARTED_EVENT
Jan 20 02:49:40.028529 containerd[1621]: time="2026-01-20T02:49:40.028424418Z" level=info msg="container event discarded" container=3ae252aebbba2af91938f13e0e0697f3bdaf9bf60df2810fba322b22fc9ae9f1 type=CONTAINER_CREATED_EVENT
Jan 20 02:49:40.028529 containerd[1621]: time="2026-01-20T02:49:40.028492566Z" level=info msg="container event discarded" container=3ae252aebbba2af91938f13e0e0697f3bdaf9bf60df2810fba322b22fc9ae9f1 type=CONTAINER_STARTED_EVENT
Jan 20 02:49:41.876763 systemd[1]: Started sshd@7-10.0.0.117:22-10.0.0.1:36706.service - OpenSSH per-connection server daemon (10.0.0.1:36706).
Jan 20 02:49:42.534498 sshd[4986]: Accepted publickey for core from 10.0.0.1 port 36706 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:49:42.553966 sshd-session[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:49:42.633132 systemd-logind[1588]: New session 9 of user core.
Jan 20 02:49:42.714996 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 20 02:49:44.495845 sshd[4990]: Connection closed by 10.0.0.1 port 36706
Jan 20 02:49:44.550417 sshd-session[4986]: pam_unix(sshd:session): session closed for user core
Jan 20 02:49:44.693129 systemd-logind[1588]: Session 9 logged out. Waiting for processes to exit.
Jan 20 02:49:44.731867 systemd[1]: sshd@7-10.0.0.117:22-10.0.0.1:36706.service: Deactivated successfully.
Jan 20 02:49:44.813078 systemd[1]: session-9.scope: Deactivated successfully.
Jan 20 02:49:45.005869 systemd-logind[1588]: Removed session 9.
Jan 20 02:49:45.231572 containerd[1621]: time="2026-01-20T02:49:45.231490448Z" level=info msg="container event discarded" container=de05d29e16cd7078142aaa897b5d4a3b3f3c12d5edd030dbfd8438fa15b4c0d3 type=CONTAINER_CREATED_EVENT
Jan 20 02:49:46.170344 containerd[1621]: time="2026-01-20T02:49:46.169939824Z" level=info msg="container event discarded" container=de05d29e16cd7078142aaa897b5d4a3b3f3c12d5edd030dbfd8438fa15b4c0d3 type=CONTAINER_STARTED_EVENT
Jan 20 02:49:47.053970 containerd[1621]: time="2026-01-20T02:49:47.053512021Z" level=info msg="container event discarded" container=de05d29e16cd7078142aaa897b5d4a3b3f3c12d5edd030dbfd8438fa15b4c0d3 type=CONTAINER_STOPPED_EVENT
Jan 20 02:49:49.570005 systemd[1]: Started sshd@8-10.0.0.117:22-10.0.0.1:51154.service - OpenSSH per-connection server daemon (10.0.0.1:51154).
Jan 20 02:49:49.998888 sshd[5037]: Accepted publickey for core from 10.0.0.1 port 51154 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:49:50.022154 sshd-session[5037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:49:50.118077 systemd-logind[1588]: New session 10 of user core.
Jan 20 02:49:50.178336 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 20 02:49:52.298334 sshd[5041]: Connection closed by 10.0.0.1 port 51154
Jan 20 02:49:52.300520 sshd-session[5037]: pam_unix(sshd:session): session closed for user core
Jan 20 02:49:52.416876 systemd[1]: sshd@8-10.0.0.117:22-10.0.0.1:51154.service: Deactivated successfully.
Jan 20 02:49:52.443425 systemd-logind[1588]: Session 10 logged out. Waiting for processes to exit.
Jan 20 02:49:52.462018 systemd[1]: session-10.scope: Deactivated successfully.
Jan 20 02:49:52.605800 systemd-logind[1588]: Removed session 10.
Jan 20 02:49:57.222437 kubelet[3020]: E0120 02:49:57.220909 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:49:57.389895 systemd[1]: Started sshd@9-10.0.0.117:22-10.0.0.1:38776.service - OpenSSH per-connection server daemon (10.0.0.1:38776).
Jan 20 02:49:58.230092 sshd[5090]: Accepted publickey for core from 10.0.0.1 port 38776 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:49:58.275883 sshd-session[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:49:58.363120 systemd-logind[1588]: New session 11 of user core.
Jan 20 02:49:58.449468 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 20 02:49:59.786659 sshd[5094]: Connection closed by 10.0.0.1 port 38776
Jan 20 02:49:59.781994 sshd-session[5090]: pam_unix(sshd:session): session closed for user core
Jan 20 02:49:59.845434 systemd[1]: sshd@9-10.0.0.117:22-10.0.0.1:38776.service: Deactivated successfully.
Jan 20 02:49:59.874967 systemd[1]: session-11.scope: Deactivated successfully.
Jan 20 02:49:59.918178 systemd-logind[1588]: Session 11 logged out. Waiting for processes to exit.
Jan 20 02:49:59.926521 systemd-logind[1588]: Removed session 11.
Jan 20 02:50:02.950099 containerd[1621]: time="2026-01-20T02:50:02.950046276Z" level=info msg="container event discarded" container=75e85aeb4c4d3f1657705072147157295241ad0f665cd12b4fe81be88f60f652 type=CONTAINER_CREATED_EVENT
Jan 20 02:50:03.236170 kubelet[3020]: E0120 02:50:03.235917 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:50:03.939133 containerd[1621]: time="2026-01-20T02:50:03.939051744Z" level=info msg="container event discarded" container=75e85aeb4c4d3f1657705072147157295241ad0f665cd12b4fe81be88f60f652 type=CONTAINER_STARTED_EVENT
Jan 20 02:50:04.839385 systemd[1]: Started sshd@10-10.0.0.117:22-10.0.0.1:49136.service - OpenSSH per-connection server daemon (10.0.0.1:49136).
Jan 20 02:50:04.884661 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories...
Jan 20 02:50:05.267180 kubelet[3020]: E0120 02:50:05.240477 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:50:05.400639 systemd-tmpfiles[5137]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 20 02:50:05.400669 systemd-tmpfiles[5137]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 20 02:50:05.413556 systemd-tmpfiles[5137]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 20 02:50:05.444620 systemd-tmpfiles[5137]: ACLs are not supported, ignoring.
Jan 20 02:50:05.444740 systemd-tmpfiles[5137]: ACLs are not supported, ignoring.
Jan 20 02:50:05.534065 systemd-tmpfiles[5137]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 02:50:05.540095 systemd-tmpfiles[5137]: Skipping /boot
Jan 20 02:50:05.631382 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 20 02:50:05.632139 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.
Jan 20 02:50:05.642414 containerd[1621]: time="2026-01-20T02:50:05.642164113Z" level=info msg="container event discarded" container=75e85aeb4c4d3f1657705072147157295241ad0f665cd12b4fe81be88f60f652 type=CONTAINER_STOPPED_EVENT
Jan 20 02:50:05.647133 sshd[5136]: Accepted publickey for core from 10.0.0.1 port 49136 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:50:05.706081 sshd-session[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:50:05.882909 systemd-logind[1588]: New session 12 of user core.
Jan 20 02:50:05.934080 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 20 02:50:06.283851 containerd[1621]: time="2026-01-20T02:50:06.279850035Z" level=info msg="container event discarded" container=d83d5216b383ba48aca3064c71b6fc1975411c59ffe3155d3ce54840ef91f61a type=CONTAINER_CREATED_EVENT
Jan 20 02:50:06.857007 sshd[5143]: Connection closed by 10.0.0.1 port 49136
Jan 20 02:50:06.859985 sshd-session[5136]: pam_unix(sshd:session): session closed for user core
Jan 20 02:50:06.910541 systemd[1]: sshd@10-10.0.0.117:22-10.0.0.1:49136.service: Deactivated successfully.
Jan 20 02:50:06.961189 systemd[1]: session-12.scope: Deactivated successfully.
Jan 20 02:50:06.978053 systemd-logind[1588]: Session 12 logged out. Waiting for processes to exit.
Jan 20 02:50:06.999677 systemd-logind[1588]: Removed session 12.
Jan 20 02:50:07.166687 containerd[1621]: time="2026-01-20T02:50:07.147109677Z" level=info msg="container event discarded" container=d83d5216b383ba48aca3064c71b6fc1975411c59ffe3155d3ce54840ef91f61a type=CONTAINER_STARTED_EVENT
Jan 20 02:50:11.942116 systemd[1]: Started sshd@11-10.0.0.117:22-10.0.0.1:49142.service - OpenSSH per-connection server daemon (10.0.0.1:49142).
Jan 20 02:50:12.739935 sshd[5180]: Accepted publickey for core from 10.0.0.1 port 49142 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:50:12.742768 sshd-session[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:50:12.901777 systemd-logind[1588]: New session 13 of user core.
Jan 20 02:50:12.953027 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 20 02:50:13.847370 sshd[5196]: Connection closed by 10.0.0.1 port 49142
Jan 20 02:50:13.863664 sshd-session[5180]: pam_unix(sshd:session): session closed for user core
Jan 20 02:50:13.909725 systemd[1]: sshd@11-10.0.0.117:22-10.0.0.1:49142.service: Deactivated successfully.
Jan 20 02:50:13.942819 systemd[1]: session-13.scope: Deactivated successfully.
Jan 20 02:50:13.972417 systemd-logind[1588]: Session 13 logged out. Waiting for processes to exit.
Jan 20 02:50:13.995766 systemd-logind[1588]: Removed session 13.
Jan 20 02:50:18.947612 systemd[1]: Started sshd@12-10.0.0.117:22-10.0.0.1:44224.service - OpenSSH per-connection server daemon (10.0.0.1:44224).
Jan 20 02:50:19.314680 sshd[5233]: Accepted publickey for core from 10.0.0.1 port 44224 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:50:19.316658 sshd-session[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:50:19.363516 systemd-logind[1588]: New session 14 of user core.
Jan 20 02:50:19.374105 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 20 02:50:19.911433 sshd[5237]: Connection closed by 10.0.0.1 port 44224
Jan 20 02:50:19.902526 sshd-session[5233]: pam_unix(sshd:session): session closed for user core
Jan 20 02:50:19.979850 systemd[1]: sshd@12-10.0.0.117:22-10.0.0.1:44224.service: Deactivated successfully.
Jan 20 02:50:20.013704 systemd[1]: session-14.scope: Deactivated successfully.
Jan 20 02:50:20.034054 systemd-logind[1588]: Session 14 logged out. Waiting for processes to exit.
Jan 20 02:50:20.081468 systemd-logind[1588]: Removed session 14.
Jan 20 02:50:23.927407 containerd[1621]: time="2026-01-20T02:50:23.925405277Z" level=info msg="container event discarded" container=847d0d285e575fb99b850f27873c21c03a28c53c3f00b39f50eb541ece64f2b3 type=CONTAINER_CREATED_EVENT
Jan 20 02:50:23.927407 containerd[1621]: time="2026-01-20T02:50:23.925532012Z" level=info msg="container event discarded" container=847d0d285e575fb99b850f27873c21c03a28c53c3f00b39f50eb541ece64f2b3 type=CONTAINER_STARTED_EVENT
Jan 20 02:50:24.482330 containerd[1621]: time="2026-01-20T02:50:24.481696803Z" level=info msg="container event discarded" container=2bb8b16e071511ed202cba63af5b6ebdf215052841b32fe7e89283f16aad4b0a type=CONTAINER_CREATED_EVENT
Jan 20 02:50:24.482330 containerd[1621]: time="2026-01-20T02:50:24.481826044Z" level=info msg="container event discarded" container=2bb8b16e071511ed202cba63af5b6ebdf215052841b32fe7e89283f16aad4b0a type=CONTAINER_STARTED_EVENT
Jan 20 02:50:24.731577 containerd[1621]: time="2026-01-20T02:50:24.731141020Z" level=info msg="container event discarded" container=bd2f87329de3eeba68b0d6556475e4388233ebfe5c6e99dac54bba66e52e349d type=CONTAINER_CREATED_EVENT
Jan 20 02:50:25.001505 systemd[1]: Started sshd@13-10.0.0.117:22-10.0.0.1:49500.service - OpenSSH per-connection server daemon (10.0.0.1:49500).
Jan 20 02:50:25.022625 containerd[1621]: time="2026-01-20T02:50:25.022448490Z" level=info msg="container event discarded" container=9546652e1b05242925e45123b8c5401886fd4640fbf97cc24e63edffa092195c type=CONTAINER_CREATED_EVENT
Jan 20 02:50:25.279594 sshd[5277]: Accepted publickey for core from 10.0.0.1 port 49500 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:50:25.281137 sshd-session[5277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:50:25.351420 systemd-logind[1588]: New session 15 of user core.
Jan 20 02:50:25.376827 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 20 02:50:25.613580 containerd[1621]: time="2026-01-20T02:50:25.613319702Z" level=info msg="container event discarded" container=bd2f87329de3eeba68b0d6556475e4388233ebfe5c6e99dac54bba66e52e349d type=CONTAINER_STARTED_EVENT
Jan 20 02:50:26.140173 containerd[1621]: time="2026-01-20T02:50:26.135590649Z" level=info msg="container event discarded" container=9546652e1b05242925e45123b8c5401886fd4640fbf97cc24e63edffa092195c type=CONTAINER_STARTED_EVENT
Jan 20 02:50:26.149424 sshd[5281]: Connection closed by 10.0.0.1 port 49500
Jan 20 02:50:26.164356 sshd-session[5277]: pam_unix(sshd:session): session closed for user core
Jan 20 02:50:26.195903 systemd[1]: sshd@13-10.0.0.117:22-10.0.0.1:49500.service: Deactivated successfully.
Jan 20 02:50:26.239131 systemd[1]: session-15.scope: Deactivated successfully.
Jan 20 02:50:26.250345 systemd-logind[1588]: Session 15 logged out. Waiting for processes to exit.
Jan 20 02:50:26.277942 systemd-logind[1588]: Removed session 15.
Jan 20 02:50:31.266383 systemd[1]: Started sshd@14-10.0.0.117:22-10.0.0.1:49502.service - OpenSSH per-connection server daemon (10.0.0.1:49502).
Jan 20 02:50:31.968453 sshd[5316]: Accepted publickey for core from 10.0.0.1 port 49502 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:50:31.991088 sshd-session[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:50:32.077704 systemd-logind[1588]: New session 16 of user core.
Jan 20 02:50:32.109954 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 20 02:50:33.264371 sshd[5321]: Connection closed by 10.0.0.1 port 49502
Jan 20 02:50:33.263851 sshd-session[5316]: pam_unix(sshd:session): session closed for user core
Jan 20 02:50:33.297592 systemd[1]: sshd@14-10.0.0.117:22-10.0.0.1:49502.service: Deactivated successfully.
Jan 20 02:50:33.329988 systemd[1]: session-16.scope: Deactivated successfully.
Jan 20 02:50:33.370156 systemd-logind[1588]: Session 16 logged out. Waiting for processes to exit.
Jan 20 02:50:33.391856 systemd-logind[1588]: Removed session 16.
Jan 20 02:50:38.255675 kubelet[3020]: E0120 02:50:38.254812 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:50:38.351739 systemd[1]: Started sshd@15-10.0.0.117:22-10.0.0.1:39202.service - OpenSSH per-connection server daemon (10.0.0.1:39202).
Jan 20 02:50:38.917368 sshd[5356]: Accepted publickey for core from 10.0.0.1 port 39202 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:50:38.960691 sshd-session[5356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:50:39.048504 systemd-logind[1588]: New session 17 of user core.
Jan 20 02:50:39.075901 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 20 02:50:40.326369 sshd[5360]: Connection closed by 10.0.0.1 port 39202
Jan 20 02:50:40.333812 sshd-session[5356]: pam_unix(sshd:session): session closed for user core
Jan 20 02:50:40.415885 systemd[1]: sshd@15-10.0.0.117:22-10.0.0.1:39202.service: Deactivated successfully.
Jan 20 02:50:40.472875 systemd[1]: session-17.scope: Deactivated successfully.
Jan 20 02:50:40.483496 systemd-logind[1588]: Session 17 logged out. Waiting for processes to exit.
Jan 20 02:50:40.511461 systemd-logind[1588]: Removed session 17.
Jan 20 02:50:45.420474 systemd[1]: Started sshd@16-10.0.0.117:22-10.0.0.1:50546.service - OpenSSH per-connection server daemon (10.0.0.1:50546).
Jan 20 02:50:46.198755 sshd[5404]: Accepted publickey for core from 10.0.0.1 port 50546 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:50:46.216696 sshd-session[5404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:50:46.317847 systemd-logind[1588]: New session 18 of user core.
Jan 20 02:50:46.344711 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 20 02:50:47.679783 sshd[5420]: Connection closed by 10.0.0.1 port 50546
Jan 20 02:50:47.685926 sshd-session[5404]: pam_unix(sshd:session): session closed for user core
Jan 20 02:50:47.804908 systemd-logind[1588]: Session 18 logged out. Waiting for processes to exit.
Jan 20 02:50:47.820995 systemd[1]: sshd@16-10.0.0.117:22-10.0.0.1:50546.service: Deactivated successfully.
Jan 20 02:50:47.887689 systemd[1]: session-18.scope: Deactivated successfully.
Jan 20 02:50:47.959403 systemd-logind[1588]: Removed session 18.
Jan 20 02:50:48.271416 kubelet[3020]: E0120 02:50:48.268912 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:50:48.275133 kubelet[3020]: E0120 02:50:48.274117 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:50:50.222878 kubelet[3020]: E0120 02:50:50.222073 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:50:52.786768 systemd[1]: Started sshd@17-10.0.0.117:22-10.0.0.1:50552.service - OpenSSH per-connection server daemon (10.0.0.1:50552).
Jan 20 02:50:53.112538 sshd[5455]: Accepted publickey for core from 10.0.0.1 port 50552 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:50:53.117389 sshd-session[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:50:53.153083 systemd-logind[1588]: New session 19 of user core.
Jan 20 02:50:53.171739 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 20 02:50:54.351455 sshd[5459]: Connection closed by 10.0.0.1 port 50552
Jan 20 02:50:54.358551 sshd-session[5455]: pam_unix(sshd:session): session closed for user core
Jan 20 02:50:54.423817 systemd[1]: sshd@17-10.0.0.117:22-10.0.0.1:50552.service: Deactivated successfully.
Jan 20 02:50:54.441919 systemd[1]: session-19.scope: Deactivated successfully.
Jan 20 02:50:54.459917 systemd-logind[1588]: Session 19 logged out. Waiting for processes to exit.
Jan 20 02:50:54.474509 systemd-logind[1588]: Removed session 19.
Jan 20 02:50:59.372544 systemd[1]: Started sshd@18-10.0.0.117:22-10.0.0.1:34284.service - OpenSSH per-connection server daemon (10.0.0.1:34284).
Jan 20 02:50:59.948581 sshd[5494]: Accepted publickey for core from 10.0.0.1 port 34284 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:50:59.962845 sshd-session[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:51:00.036808 systemd-logind[1588]: New session 20 of user core.
Jan 20 02:51:00.077817 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 20 02:51:01.902040 sshd[5498]: Connection closed by 10.0.0.1 port 34284
Jan 20 02:51:01.908957 sshd-session[5494]: pam_unix(sshd:session): session closed for user core
Jan 20 02:51:01.973109 systemd-logind[1588]: Session 20 logged out. Waiting for processes to exit.
Jan 20 02:51:01.974767 systemd[1]: sshd@18-10.0.0.117:22-10.0.0.1:34284.service: Deactivated successfully.
Jan 20 02:51:01.999041 systemd[1]: session-20.scope: Deactivated successfully.
Jan 20 02:51:02.044925 systemd-logind[1588]: Removed session 20.
Jan 20 02:51:07.014766 systemd[1]: Started sshd@19-10.0.0.117:22-10.0.0.1:59156.service - OpenSSH per-connection server daemon (10.0.0.1:59156).
Jan 20 02:51:08.111857 sshd[5533]: Accepted publickey for core from 10.0.0.1 port 59156 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:51:08.145133 sshd-session[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:51:08.317180 systemd-logind[1588]: New session 21 of user core.
Jan 20 02:51:08.368997 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 20 02:51:09.396657 sshd[5556]: Connection closed by 10.0.0.1 port 59156
Jan 20 02:51:09.405626 sshd-session[5533]: pam_unix(sshd:session): session closed for user core
Jan 20 02:51:09.464706 systemd[1]: sshd@19-10.0.0.117:22-10.0.0.1:59156.service: Deactivated successfully.
Jan 20 02:51:09.503165 systemd[1]: session-21.scope: Deactivated successfully.
Jan 20 02:51:09.532139 systemd-logind[1588]: Session 21 logged out. Waiting for processes to exit.
Jan 20 02:51:09.542455 systemd-logind[1588]: Removed session 21.
Jan 20 02:51:09.569149 systemd[1]: Started sshd@20-10.0.0.117:22-10.0.0.1:59168.service - OpenSSH per-connection server daemon (10.0.0.1:59168).
Jan 20 02:51:09.838117 sshd[5570]: Accepted publickey for core from 10.0.0.1 port 59168 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:51:09.850599 sshd-session[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:51:09.916851 systemd-logind[1588]: New session 22 of user core.
Jan 20 02:51:09.962677 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 20 02:51:11.385633 sshd[5574]: Connection closed by 10.0.0.1 port 59168
Jan 20 02:51:11.461866 sshd-session[5570]: pam_unix(sshd:session): session closed for user core
Jan 20 02:51:11.544834 systemd[1]: Started sshd@21-10.0.0.117:22-10.0.0.1:59178.service - OpenSSH per-connection server daemon (10.0.0.1:59178).
Jan 20 02:51:11.629567 systemd[1]: sshd@20-10.0.0.117:22-10.0.0.1:59168.service: Deactivated successfully.
Jan 20 02:51:11.679353 systemd[1]: session-22.scope: Deactivated successfully.
Jan 20 02:51:11.695810 systemd-logind[1588]: Session 22 logged out. Waiting for processes to exit.
Jan 20 02:51:11.703554 systemd-logind[1588]: Removed session 22.
Jan 20 02:51:12.372015 sshd[5591]: Accepted publickey for core from 10.0.0.1 port 59178 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:51:12.388032 sshd-session[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:51:12.491037 systemd-logind[1588]: New session 23 of user core.
Jan 20 02:51:12.553582 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 20 02:51:13.534475 sshd[5606]: Connection closed by 10.0.0.1 port 59178
Jan 20 02:51:13.539859 sshd-session[5591]: pam_unix(sshd:session): session closed for user core
Jan 20 02:51:13.598020 systemd[1]: sshd@21-10.0.0.117:22-10.0.0.1:59178.service: Deactivated successfully.
Jan 20 02:51:13.639094 systemd[1]: session-23.scope: Deactivated successfully.
Jan 20 02:51:13.735681 systemd-logind[1588]: Session 23 logged out. Waiting for processes to exit.
Jan 20 02:51:13.741365 systemd-logind[1588]: Removed session 23.
Jan 20 02:51:18.714837 systemd[1]: Started sshd@22-10.0.0.117:22-10.0.0.1:50852.service - OpenSSH per-connection server daemon (10.0.0.1:50852).
Jan 20 02:51:19.314601 sshd[5647]: Accepted publickey for core from 10.0.0.1 port 50852 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:51:19.334178 sshd-session[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:51:19.491905 systemd-logind[1588]: New session 24 of user core.
Jan 20 02:51:19.532145 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 20 02:51:21.186412 sshd[5659]: Connection closed by 10.0.0.1 port 50852
Jan 20 02:51:21.179040 sshd-session[5647]: pam_unix(sshd:session): session closed for user core
Jan 20 02:51:21.237694 systemd[1]: sshd@22-10.0.0.117:22-10.0.0.1:50852.service: Deactivated successfully.
Jan 20 02:51:21.276446 systemd[1]: session-24.scope: Deactivated successfully.
Jan 20 02:51:21.304057 systemd-logind[1588]: Session 24 logged out. Waiting for processes to exit.
Jan 20 02:51:21.316945 systemd-logind[1588]: Removed session 24.
Jan 20 02:51:26.339688 systemd[1]: Started sshd@23-10.0.0.117:22-10.0.0.1:51970.service - OpenSSH per-connection server daemon (10.0.0.1:51970).
Jan 20 02:51:27.275433 kubelet[3020]: E0120 02:51:27.273655 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:51:27.616049 sshd[5693]: Accepted publickey for core from 10.0.0.1 port 51970 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:51:27.665121 sshd-session[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:51:27.829989 systemd-logind[1588]: New session 25 of user core.
Jan 20 02:51:27.911412 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 20 02:51:29.234347 kubelet[3020]: E0120 02:51:29.231772 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:51:30.174494 sshd[5698]: Connection closed by 10.0.0.1 port 51970
Jan 20 02:51:30.167117 sshd-session[5693]: pam_unix(sshd:session): session closed for user core
Jan 20 02:51:30.273152 systemd[1]: sshd@23-10.0.0.117:22-10.0.0.1:51970.service: Deactivated successfully.
Jan 20 02:51:30.339159 systemd[1]: session-25.scope: Deactivated successfully.
Jan 20 02:51:30.419868 systemd-logind[1588]: Session 25 logged out. Waiting for processes to exit.
Jan 20 02:51:30.444987 systemd-logind[1588]: Removed session 25.
Jan 20 02:51:31.233132 kubelet[3020]: E0120 02:51:31.226121 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:51:35.594958 systemd[1]: Started sshd@24-10.0.0.117:22-10.0.0.1:43308.service - OpenSSH per-connection server daemon (10.0.0.1:43308).
Jan 20 02:51:36.311518 sshd[5740]: Accepted publickey for core from 10.0.0.1 port 43308 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:51:36.325927 sshd-session[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:51:36.388853 systemd-logind[1588]: New session 26 of user core.
Jan 20 02:51:36.414525 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 20 02:51:37.511433 sshd[5758]: Connection closed by 10.0.0.1 port 43308
Jan 20 02:51:37.510430 sshd-session[5740]: pam_unix(sshd:session): session closed for user core
Jan 20 02:51:37.545419 systemd[1]: sshd@24-10.0.0.117:22-10.0.0.1:43308.service: Deactivated successfully.
Jan 20 02:51:37.580113 systemd[1]: session-26.scope: Deactivated successfully.
Jan 20 02:51:37.629136 systemd-logind[1588]: Session 26 logged out. Waiting for processes to exit.
Jan 20 02:51:37.651826 systemd-logind[1588]: Removed session 26.
Jan 20 02:51:42.656119 systemd[1]: Started sshd@25-10.0.0.117:22-10.0.0.1:43324.service - OpenSSH per-connection server daemon (10.0.0.1:43324).
Jan 20 02:51:43.571187 sshd[5793]: Accepted publickey for core from 10.0.0.1 port 43324 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:51:43.625642 sshd-session[5793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:51:43.700082 systemd-logind[1588]: New session 27 of user core.
Jan 20 02:51:43.771013 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 20 02:51:45.274595 sshd[5797]: Connection closed by 10.0.0.1 port 43324
Jan 20 02:51:45.275826 sshd-session[5793]: pam_unix(sshd:session): session closed for user core
Jan 20 02:51:45.320172 systemd[1]: sshd@25-10.0.0.117:22-10.0.0.1:43324.service: Deactivated successfully.
Jan 20 02:51:45.346486 systemd[1]: session-27.scope: Deactivated successfully.
Jan 20 02:51:45.396501 systemd-logind[1588]: Session 27 logged out. Waiting for processes to exit.
Jan 20 02:51:45.440059 systemd-logind[1588]: Removed session 27.
Jan 20 02:51:50.333701 systemd[1]: Started sshd@26-10.0.0.117:22-10.0.0.1:33056.service - OpenSSH per-connection server daemon (10.0.0.1:33056).
Jan 20 02:51:50.738686 sshd[5836]: Accepted publickey for core from 10.0.0.1 port 33056 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:51:50.749381 sshd-session[5836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:51:50.781015 systemd-logind[1588]: New session 28 of user core.
Jan 20 02:51:50.789941 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 20 02:51:51.687035 sshd[5840]: Connection closed by 10.0.0.1 port 33056
Jan 20 02:51:51.685449 sshd-session[5836]: pam_unix(sshd:session): session closed for user core
Jan 20 02:51:51.731563 systemd[1]: sshd@26-10.0.0.117:22-10.0.0.1:33056.service: Deactivated successfully.
Jan 20 02:51:51.751739 systemd[1]: session-28.scope: Deactivated successfully.
Jan 20 02:51:51.768628 systemd-logind[1588]: Session 28 logged out. Waiting for processes to exit.
Jan 20 02:51:51.789328 systemd-logind[1588]: Removed session 28.
Jan 20 02:51:53.247143 kubelet[3020]: E0120 02:51:53.240926 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:51:56.787910 systemd[1]: Started sshd@27-10.0.0.117:22-10.0.0.1:33374.service - OpenSSH per-connection server daemon (10.0.0.1:33374).
Jan 20 02:51:57.214485 sshd[5874]: Accepted publickey for core from 10.0.0.1 port 33374 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:51:57.230564 sshd-session[5874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:51:57.320533 systemd-logind[1588]: New session 29 of user core.
Jan 20 02:51:57.366037 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 20 02:51:58.262429 sshd[5878]: Connection closed by 10.0.0.1 port 33374
Jan 20 02:51:58.264148 sshd-session[5874]: pam_unix(sshd:session): session closed for user core
Jan 20 02:51:58.315669 systemd[1]: sshd@27-10.0.0.117:22-10.0.0.1:33374.service: Deactivated successfully.
Jan 20 02:51:58.355663 systemd[1]: session-29.scope: Deactivated successfully.
Jan 20 02:51:58.416404 systemd-logind[1588]: Session 29 logged out. Waiting for processes to exit.
Jan 20 02:51:58.428152 systemd-logind[1588]: Removed session 29.
Jan 20 02:52:02.247549 kubelet[3020]: E0120 02:52:02.247499 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:52:03.239497 kubelet[3020]: E0120 02:52:03.233615 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:52:03.254583 kubelet[3020]: E0120 02:52:03.242186 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:52:03.440594 systemd[1]: Started sshd@28-10.0.0.117:22-10.0.0.1:33382.service - OpenSSH per-connection server daemon (10.0.0.1:33382).
Jan 20 02:52:03.932849 sshd[5922]: Accepted publickey for core from 10.0.0.1 port 33382 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:52:03.955710 sshd-session[5922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:52:04.072173 systemd-logind[1588]: New session 30 of user core.
Jan 20 02:52:04.108521 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 20 02:52:05.675864 sshd[5929]: Connection closed by 10.0.0.1 port 33382
Jan 20 02:52:05.671817 sshd-session[5922]: pam_unix(sshd:session): session closed for user core
Jan 20 02:52:05.830814 systemd[1]: sshd@28-10.0.0.117:22-10.0.0.1:33382.service: Deactivated successfully.
Jan 20 02:52:05.914672 systemd[1]: session-30.scope: Deactivated successfully.
Jan 20 02:52:05.939005 systemd-logind[1588]: Session 30 logged out. Waiting for processes to exit.
Jan 20 02:52:05.944133 systemd-logind[1588]: Removed session 30.
Jan 20 02:52:10.812865 systemd[1]: Started sshd@29-10.0.0.117:22-10.0.0.1:57020.service - OpenSSH per-connection server daemon (10.0.0.1:57020).
Jan 20 02:52:11.739868 sshd[5969]: Accepted publickey for core from 10.0.0.1 port 57020 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:52:11.779146 sshd-session[5969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:52:11.847476 systemd-logind[1588]: New session 31 of user core.
Jan 20 02:52:11.897770 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 20 02:52:13.323532 sshd[5975]: Connection closed by 10.0.0.1 port 57020
Jan 20 02:52:13.322391 sshd-session[5969]: pam_unix(sshd:session): session closed for user core
Jan 20 02:52:13.420905 systemd[1]: sshd@29-10.0.0.117:22-10.0.0.1:57020.service: Deactivated successfully.
Jan 20 02:52:13.463762 systemd[1]: session-31.scope: Deactivated successfully.
Jan 20 02:52:13.542536 systemd-logind[1588]: Session 31 logged out. Waiting for processes to exit.
Jan 20 02:52:13.592879 systemd-logind[1588]: Removed session 31.
Jan 20 02:52:18.406734 systemd[1]: Started sshd@30-10.0.0.117:22-10.0.0.1:42430.service - OpenSSH per-connection server daemon (10.0.0.1:42430).
Jan 20 02:52:18.847621 sshd[6009]: Accepted publickey for core from 10.0.0.1 port 42430 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:52:18.876717 sshd-session[6009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:52:18.951885 systemd-logind[1588]: New session 32 of user core.
Jan 20 02:52:18.973823 systemd[1]: Started session-32.scope - Session 32 of User core.
Jan 20 02:52:20.573581 sshd[6013]: Connection closed by 10.0.0.1 port 42430
Jan 20 02:52:20.570486 sshd-session[6009]: pam_unix(sshd:session): session closed for user core
Jan 20 02:52:20.678904 systemd[1]: sshd@30-10.0.0.117:22-10.0.0.1:42430.service: Deactivated successfully.
Jan 20 02:52:20.745689 systemd[1]: session-32.scope: Deactivated successfully.
Jan 20 02:52:20.846714 systemd-logind[1588]: Session 32 logged out. Waiting for processes to exit.
Jan 20 02:52:20.876630 systemd-logind[1588]: Removed session 32.
Jan 20 02:52:25.665022 systemd[1]: Started sshd@31-10.0.0.117:22-10.0.0.1:41214.service - OpenSSH per-connection server daemon (10.0.0.1:41214).
Jan 20 02:52:26.281612 sshd[6048]: Accepted publickey for core from 10.0.0.1 port 41214 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:52:26.286839 sshd-session[6048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:52:26.352688 systemd-logind[1588]: New session 33 of user core.
Jan 20 02:52:26.384475 systemd[1]: Started session-33.scope - Session 33 of User core.
Jan 20 02:52:27.269044 sshd[6072]: Connection closed by 10.0.0.1 port 41214
Jan 20 02:52:27.273749 sshd-session[6048]: pam_unix(sshd:session): session closed for user core
Jan 20 02:52:27.330576 systemd[1]: sshd@31-10.0.0.117:22-10.0.0.1:41214.service: Deactivated successfully.
Jan 20 02:52:27.381979 systemd[1]: session-33.scope: Deactivated successfully.
Jan 20 02:52:27.411123 systemd-logind[1588]: Session 33 logged out. Waiting for processes to exit.
Jan 20 02:52:27.413919 systemd-logind[1588]: Removed session 33.
Jan 20 02:52:32.342961 systemd[1]: Started sshd@32-10.0.0.117:22-10.0.0.1:41228.service - OpenSSH per-connection server daemon (10.0.0.1:41228).
Jan 20 02:52:32.948838 sshd[6109]: Accepted publickey for core from 10.0.0.1 port 41228 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:52:32.964513 sshd-session[6109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:52:33.025015 systemd-logind[1588]: New session 34 of user core.
Jan 20 02:52:33.088674 systemd[1]: Started session-34.scope - Session 34 of User core.
Jan 20 02:52:34.121426 sshd[6115]: Connection closed by 10.0.0.1 port 41228
Jan 20 02:52:34.122755 sshd-session[6109]: pam_unix(sshd:session): session closed for user core
Jan 20 02:52:34.152518 systemd[1]: sshd@32-10.0.0.117:22-10.0.0.1:41228.service: Deactivated successfully.
Jan 20 02:52:34.176731 systemd[1]: session-34.scope: Deactivated successfully.
Jan 20 02:52:34.224551 systemd-logind[1588]: Session 34 logged out. Waiting for processes to exit.
Jan 20 02:52:34.239946 systemd-logind[1588]: Removed session 34.
Jan 20 02:52:39.226422 systemd[1]: Started sshd@33-10.0.0.117:22-10.0.0.1:60298.service - OpenSSH per-connection server daemon (10.0.0.1:60298).
Jan 20 02:52:39.739855 sshd[6149]: Accepted publickey for core from 10.0.0.1 port 60298 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:52:39.751625 sshd-session[6149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:52:39.820611 systemd-logind[1588]: New session 35 of user core.
Jan 20 02:52:39.859782 systemd[1]: Started session-35.scope - Session 35 of User core.
Jan 20 02:52:40.932437 sshd[6156]: Connection closed by 10.0.0.1 port 60298
Jan 20 02:52:40.933656 sshd-session[6149]: pam_unix(sshd:session): session closed for user core
Jan 20 02:52:40.993891 systemd[1]: sshd@33-10.0.0.117:22-10.0.0.1:60298.service: Deactivated successfully.
Jan 20 02:52:41.021359 systemd[1]: session-35.scope: Deactivated successfully.
Jan 20 02:52:41.048544 systemd-logind[1588]: Session 35 logged out. Waiting for processes to exit.
Jan 20 02:52:41.067870 systemd[1]: Started sshd@34-10.0.0.117:22-10.0.0.1:60312.service - OpenSSH per-connection server daemon (10.0.0.1:60312).
Jan 20 02:52:41.085035 systemd-logind[1588]: Removed session 35.
Jan 20 02:52:41.233362 kubelet[3020]: E0120 02:52:41.232998 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:52:41.448847 sshd[6169]: Accepted publickey for core from 10.0.0.1 port 60312 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:52:41.461068 sshd-session[6169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:52:41.539359 systemd-logind[1588]: New session 36 of user core.
Jan 20 02:52:41.552932 systemd[1]: Started session-36.scope - Session 36 of User core.
Jan 20 02:52:42.280139 kubelet[3020]: E0120 02:52:42.261087 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:52:43.926425 sshd[6183]: Connection closed by 10.0.0.1 port 60312
Jan 20 02:52:43.927115 sshd-session[6169]: pam_unix(sshd:session): session closed for user core
Jan 20 02:52:43.980786 systemd[1]: sshd@34-10.0.0.117:22-10.0.0.1:60312.service: Deactivated successfully.
Jan 20 02:52:44.018573 systemd[1]: session-36.scope: Deactivated successfully.
Jan 20 02:52:44.048521 systemd-logind[1588]: Session 36 logged out. Waiting for processes to exit.
Jan 20 02:52:44.074981 systemd-logind[1588]: Removed session 36.
Jan 20 02:52:44.104775 systemd[1]: Started sshd@35-10.0.0.117:22-10.0.0.1:60324.service - OpenSSH per-connection server daemon (10.0.0.1:60324).
Jan 20 02:52:44.466789 sshd[6209]: Accepted publickey for core from 10.0.0.1 port 60324 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:52:44.499027 sshd-session[6209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:52:44.560615 systemd-logind[1588]: New session 37 of user core.
Jan 20 02:52:44.575700 systemd[1]: Started session-37.scope - Session 37 of User core.
Jan 20 02:52:49.205920 sshd[6213]: Connection closed by 10.0.0.1 port 60324
Jan 20 02:52:49.199637 sshd-session[6209]: pam_unix(sshd:session): session closed for user core
Jan 20 02:52:49.291714 systemd[1]: sshd@35-10.0.0.117:22-10.0.0.1:60324.service: Deactivated successfully.
Jan 20 02:52:49.311640 systemd[1]: session-37.scope: Deactivated successfully.
Jan 20 02:52:49.319116 systemd[1]: session-37.scope: Consumed 1.392s CPU time, 41M memory peak.
Jan 20 02:52:49.331909 systemd-logind[1588]: Session 37 logged out. Waiting for processes to exit.
Jan 20 02:52:49.373560 systemd[1]: Started sshd@36-10.0.0.117:22-10.0.0.1:47788.service - OpenSSH per-connection server daemon (10.0.0.1:47788).
Jan 20 02:52:49.381712 systemd-logind[1588]: Removed session 37.
Jan 20 02:52:49.847399 sshd[6255]: Accepted publickey for core from 10.0.0.1 port 47788 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:52:49.849939 sshd-session[6255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:52:49.946932 systemd-logind[1588]: New session 38 of user core.
Jan 20 02:52:49.966861 systemd[1]: Started session-38.scope - Session 38 of User core.
Jan 20 02:52:52.026779 sshd[6259]: Connection closed by 10.0.0.1 port 47788
Jan 20 02:52:52.028691 sshd-session[6255]: pam_unix(sshd:session): session closed for user core
Jan 20 02:52:52.213889 systemd[1]: Started sshd@37-10.0.0.117:22-10.0.0.1:47804.service - OpenSSH per-connection server daemon (10.0.0.1:47804).
Jan 20 02:52:52.222064 systemd[1]: sshd@36-10.0.0.117:22-10.0.0.1:47788.service: Deactivated successfully.
Jan 20 02:52:52.290006 systemd[1]: session-38.scope: Deactivated successfully.
Jan 20 02:52:52.415089 systemd-logind[1588]: Session 38 logged out. Waiting for processes to exit.
Jan 20 02:52:52.521706 systemd-logind[1588]: Removed session 38.
Jan 20 02:52:54.318103 sshd[6271]: Accepted publickey for core from 10.0.0.1 port 47804 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:52:54.338671 kubelet[3020]: E0120 02:52:54.332094 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:52:54.341067 sshd-session[6271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:52:55.798025 systemd-logind[1588]: New session 39 of user core.
Jan 20 02:52:55.876738 systemd[1]: Started session-39.scope - Session 39 of User core.
Jan 20 02:52:57.357982 sshd[6294]: Connection closed by 10.0.0.1 port 47804
Jan 20 02:52:57.359011 sshd-session[6271]: pam_unix(sshd:session): session closed for user core
Jan 20 02:52:57.417071 systemd[1]: sshd@37-10.0.0.117:22-10.0.0.1:47804.service: Deactivated successfully.
Jan 20 02:52:57.476484 systemd[1]: session-39.scope: Deactivated successfully.
Jan 20 02:52:57.594098 systemd-logind[1588]: Session 39 logged out. Waiting for processes to exit.
Jan 20 02:52:57.757942 systemd-logind[1588]: Removed session 39.
Jan 20 02:52:58.331120 kubelet[3020]: E0120 02:52:58.326914 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:53:02.463102 systemd[1]: Started sshd@38-10.0.0.117:22-10.0.0.1:51622.service - OpenSSH per-connection server daemon (10.0.0.1:51622).
Jan 20 02:53:03.060890 sshd[6331]: Accepted publickey for core from 10.0.0.1 port 51622 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:53:03.085533 sshd-session[6331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:53:03.200607 systemd-logind[1588]: New session 40 of user core.
Jan 20 02:53:03.227086 systemd[1]: Started session-40.scope - Session 40 of User core.
Jan 20 02:53:04.614753 sshd[6336]: Connection closed by 10.0.0.1 port 51622
Jan 20 02:53:04.591674 sshd-session[6331]: pam_unix(sshd:session): session closed for user core
Jan 20 02:53:04.650019 systemd-logind[1588]: Session 40 logged out. Waiting for processes to exit.
Jan 20 02:53:04.655092 systemd[1]: sshd@38-10.0.0.117:22-10.0.0.1:51622.service: Deactivated successfully.
Jan 20 02:53:04.674181 systemd[1]: session-40.scope: Deactivated successfully.
Jan 20 02:53:04.767752 systemd-logind[1588]: Removed session 40.
Jan 20 02:53:07.232068 kubelet[3020]: E0120 02:53:07.228936 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:53:09.712144 systemd[1]: Started sshd@39-10.0.0.117:22-10.0.0.1:42432.service - OpenSSH per-connection server daemon (10.0.0.1:42432).
Jan 20 02:53:10.254648 kubelet[3020]: E0120 02:53:10.251874 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:53:10.462006 sshd[6375]: Accepted publickey for core from 10.0.0.1 port 42432 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:53:10.503813 sshd-session[6375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:53:10.586693 systemd-logind[1588]: New session 41 of user core.
Jan 20 02:53:10.675048 systemd[1]: Started session-41.scope - Session 41 of User core.
Jan 20 02:53:12.011012 sshd[6379]: Connection closed by 10.0.0.1 port 42432
Jan 20 02:53:12.024678 sshd-session[6375]: pam_unix(sshd:session): session closed for user core
Jan 20 02:53:12.075924 systemd[1]: sshd@39-10.0.0.117:22-10.0.0.1:42432.service: Deactivated successfully.
Jan 20 02:53:12.136105 systemd[1]: session-41.scope: Deactivated successfully.
Jan 20 02:53:12.179497 systemd-logind[1588]: Session 41 logged out. Waiting for processes to exit.
Jan 20 02:53:12.201544 systemd-logind[1588]: Removed session 41.
Jan 20 02:53:15.223071 kubelet[3020]: E0120 02:53:15.222589 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:53:17.144098 systemd[1]: Started sshd@40-10.0.0.117:22-10.0.0.1:51264.service - OpenSSH per-connection server daemon (10.0.0.1:51264).
Jan 20 02:53:17.868057 sshd[6414]: Accepted publickey for core from 10.0.0.1 port 51264 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:53:17.912946 sshd-session[6414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:53:17.996768 systemd-logind[1588]: New session 42 of user core.
Jan 20 02:53:18.077950 systemd[1]: Started session-42.scope - Session 42 of User core.
Jan 20 02:53:19.702366 sshd[6431]: Connection closed by 10.0.0.1 port 51264
Jan 20 02:53:19.701900 sshd-session[6414]: pam_unix(sshd:session): session closed for user core
Jan 20 02:53:19.754173 systemd[1]: sshd@40-10.0.0.117:22-10.0.0.1:51264.service: Deactivated successfully.
Jan 20 02:53:19.780577 systemd[1]: session-42.scope: Deactivated successfully.
Jan 20 02:53:19.853772 systemd-logind[1588]: Session 42 logged out. Waiting for processes to exit.
Jan 20 02:53:19.882083 systemd-logind[1588]: Removed session 42.
Jan 20 02:53:24.799668 systemd[1]: Started sshd@41-10.0.0.117:22-10.0.0.1:55440.service - OpenSSH per-connection server daemon (10.0.0.1:55440).
Jan 20 02:53:25.810735 sshd[6472]: Accepted publickey for core from 10.0.0.1 port 55440 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:53:25.823840 sshd-session[6472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:53:25.952788 systemd-logind[1588]: New session 43 of user core.
Jan 20 02:53:25.979706 systemd[1]: Started session-43.scope - Session 43 of User core.
Jan 20 02:53:27.233663 sshd[6476]: Connection closed by 10.0.0.1 port 55440
Jan 20 02:53:27.253133 sshd-session[6472]: pam_unix(sshd:session): session closed for user core
Jan 20 02:53:27.340347 systemd[1]: sshd@41-10.0.0.117:22-10.0.0.1:55440.service: Deactivated successfully.
Jan 20 02:53:27.371956 systemd[1]: session-43.scope: Deactivated successfully.
Jan 20 02:53:27.386047 systemd-logind[1588]: Session 43 logged out. Waiting for processes to exit.
Jan 20 02:53:27.410165 systemd-logind[1588]: Removed session 43.
Jan 20 02:53:32.315759 systemd[1]: Started sshd@42-10.0.0.117:22-10.0.0.1:55456.service - OpenSSH per-connection server daemon (10.0.0.1:55456).
Jan 20 02:53:33.203795 sshd[6512]: Accepted publickey for core from 10.0.0.1 port 55456 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:53:33.250479 sshd-session[6512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:53:33.392175 systemd-logind[1588]: New session 44 of user core.
Jan 20 02:53:33.470391 systemd[1]: Started session-44.scope - Session 44 of User core.
Jan 20 02:53:34.667493 sshd[6516]: Connection closed by 10.0.0.1 port 55456
Jan 20 02:53:34.674514 sshd-session[6512]: pam_unix(sshd:session): session closed for user core
Jan 20 02:53:34.738069 systemd[1]: sshd@42-10.0.0.117:22-10.0.0.1:55456.service: Deactivated successfully.
Jan 20 02:53:34.789504 systemd[1]: session-44.scope: Deactivated successfully.
Jan 20 02:53:34.842784 systemd-logind[1588]: Session 44 logged out. Waiting for processes to exit.
Jan 20 02:53:34.878162 systemd-logind[1588]: Removed session 44.
Jan 20 02:53:39.814717 systemd[1]: Started sshd@43-10.0.0.117:22-10.0.0.1:49028.service - OpenSSH per-connection server daemon (10.0.0.1:49028).
Jan 20 02:53:40.951054 sshd[6549]: Accepted publickey for core from 10.0.0.1 port 49028 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:53:40.977107 sshd-session[6549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:53:41.119461 systemd-logind[1588]: New session 45 of user core.
Jan 20 02:53:41.164948 systemd[1]: Started session-45.scope - Session 45 of User core.
Jan 20 02:53:42.473353 sshd[6573]: Connection closed by 10.0.0.1 port 49028
Jan 20 02:53:42.481058 sshd-session[6549]: pam_unix(sshd:session): session closed for user core
Jan 20 02:53:42.527972 systemd[1]: sshd@43-10.0.0.117:22-10.0.0.1:49028.service: Deactivated successfully.
Jan 20 02:53:42.560855 systemd[1]: session-45.scope: Deactivated successfully.
Jan 20 02:53:42.586413 systemd-logind[1588]: Session 45 logged out. Waiting for processes to exit.
Jan 20 02:53:42.621629 systemd-logind[1588]: Removed session 45.
Jan 20 02:53:47.605883 systemd[1]: Started sshd@44-10.0.0.117:22-10.0.0.1:54556.service - OpenSSH per-connection server daemon (10.0.0.1:54556).
Jan 20 02:53:48.489148 sshd[6609]: Accepted publickey for core from 10.0.0.1 port 54556 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:53:48.514569 sshd-session[6609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:53:48.610658 systemd-logind[1588]: New session 46 of user core.
Jan 20 02:53:48.635637 systemd[1]: Started session-46.scope - Session 46 of User core.
Jan 20 02:53:49.501484 sshd[6613]: Connection closed by 10.0.0.1 port 54556
Jan 20 02:53:49.501680 sshd-session[6609]: pam_unix(sshd:session): session closed for user core
Jan 20 02:53:49.532901 systemd[1]: sshd@44-10.0.0.117:22-10.0.0.1:54556.service: Deactivated successfully.
Jan 20 02:53:49.546560 systemd[1]: session-46.scope: Deactivated successfully.
Jan 20 02:53:49.610572 systemd-logind[1588]: Session 46 logged out. Waiting for processes to exit.
Jan 20 02:53:49.613431 systemd-logind[1588]: Removed session 46.
Jan 20 02:53:54.643979 systemd[1]: Started sshd@45-10.0.0.117:22-10.0.0.1:40946.service - OpenSSH per-connection server daemon (10.0.0.1:40946).
Jan 20 02:53:55.444446 sshd[6647]: Accepted publickey for core from 10.0.0.1 port 40946 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:53:55.484554 sshd-session[6647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:53:55.632666 systemd-logind[1588]: New session 47 of user core.
Jan 20 02:53:55.705130 systemd[1]: Started session-47.scope - Session 47 of User core.
Jan 20 02:53:57.058112 sshd[6651]: Connection closed by 10.0.0.1 port 40946
Jan 20 02:53:57.060007 sshd-session[6647]: pam_unix(sshd:session): session closed for user core
Jan 20 02:53:57.077654 systemd[1]: sshd@45-10.0.0.117:22-10.0.0.1:40946.service: Deactivated successfully.
Jan 20 02:53:57.101924 systemd[1]: session-47.scope: Deactivated successfully.
Jan 20 02:53:57.137452 systemd-logind[1588]: Session 47 logged out. Waiting for processes to exit.
Jan 20 02:53:57.148619 systemd-logind[1588]: Removed session 47.
Jan 20 02:54:02.264580 kubelet[3020]: E0120 02:54:02.237177 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:54:02.290018 systemd[1]: Started sshd@46-10.0.0.117:22-10.0.0.1:40962.service - OpenSSH per-connection server daemon (10.0.0.1:40962).
Jan 20 02:54:03.172763 sshd[6690]: Accepted publickey for core from 10.0.0.1 port 40962 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:54:03.167981 sshd-session[6690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:54:03.250915 kubelet[3020]: E0120 02:54:03.250068 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:54:03.277744 systemd-logind[1588]: New session 48 of user core.
Jan 20 02:54:03.345438 systemd[1]: Started session-48.scope - Session 48 of User core.
Jan 20 02:54:05.216822 sshd[6708]: Connection closed by 10.0.0.1 port 40962
Jan 20 02:54:05.231091 sshd-session[6690]: pam_unix(sshd:session): session closed for user core
Jan 20 02:54:05.330589 kubelet[3020]: E0120 02:54:05.317018 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:54:05.359983 systemd[1]: sshd@46-10.0.0.117:22-10.0.0.1:40962.service: Deactivated successfully.
Jan 20 02:54:05.415565 systemd[1]: session-48.scope: Deactivated successfully.
Jan 20 02:54:05.547024 systemd-logind[1588]: Session 48 logged out. Waiting for processes to exit.
Jan 20 02:54:05.575630 systemd-logind[1588]: Removed session 48.
Jan 20 02:54:10.376458 systemd[1]: Started sshd@47-10.0.0.117:22-10.0.0.1:38162.service - OpenSSH per-connection server daemon (10.0.0.1:38162).
Jan 20 02:54:10.925797 sshd[6741]: Accepted publickey for core from 10.0.0.1 port 38162 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:54:10.950995 sshd-session[6741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:54:11.030867 systemd-logind[1588]: New session 49 of user core.
Jan 20 02:54:11.147453 systemd[1]: Started session-49.scope - Session 49 of User core.
Jan 20 02:54:12.907415 sshd[6745]: Connection closed by 10.0.0.1 port 38162
Jan 20 02:54:12.908388 sshd-session[6741]: pam_unix(sshd:session): session closed for user core
Jan 20 02:54:13.012796 systemd[1]: sshd@47-10.0.0.117:22-10.0.0.1:38162.service: Deactivated successfully.
Jan 20 02:54:13.069911 systemd[1]: session-49.scope: Deactivated successfully.
Jan 20 02:54:13.120909 systemd-logind[1588]: Session 49 logged out. Waiting for processes to exit.
Jan 20 02:54:13.149907 systemd-logind[1588]: Removed session 49.
Jan 20 02:54:17.227659 kubelet[3020]: E0120 02:54:17.227609 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:54:18.011723 systemd[1]: Started sshd@48-10.0.0.117:22-10.0.0.1:58374.service - OpenSSH per-connection server daemon (10.0.0.1:58374).
Jan 20 02:54:18.556842 sshd[6787]: Accepted publickey for core from 10.0.0.1 port 58374 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:54:18.593173 sshd-session[6787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:54:18.662728 systemd-logind[1588]: New session 50 of user core.
Jan 20 02:54:18.710175 systemd[1]: Started session-50.scope - Session 50 of User core.
Jan 20 02:54:19.900177 sshd[6791]: Connection closed by 10.0.0.1 port 58374
Jan 20 02:54:19.898590 sshd-session[6787]: pam_unix(sshd:session): session closed for user core
Jan 20 02:54:19.939780 systemd[1]: sshd@48-10.0.0.117:22-10.0.0.1:58374.service: Deactivated successfully.
Jan 20 02:54:19.955824 systemd[1]: session-50.scope: Deactivated successfully.
Jan 20 02:54:19.979466 systemd-logind[1588]: Session 50 logged out. Waiting for processes to exit.
Jan 20 02:54:20.007328 systemd-logind[1588]: Removed session 50.
Jan 20 02:54:20.230424 kubelet[3020]: E0120 02:54:20.228759 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:54:21.228393 kubelet[3020]: E0120 02:54:21.224813 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:54:25.028367 systemd[1]: Started sshd@49-10.0.0.117:22-10.0.0.1:40336.service - OpenSSH per-connection server daemon (10.0.0.1:40336).
Jan 20 02:54:25.607400 sshd[6834]: Accepted publickey for core from 10.0.0.1 port 40336 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:54:25.647124 sshd-session[6834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:54:25.798922 systemd-logind[1588]: New session 51 of user core.
Jan 20 02:54:25.859520 systemd[1]: Started session-51.scope - Session 51 of User core.
Jan 20 02:54:26.712192 sshd[6843]: Connection closed by 10.0.0.1 port 40336
Jan 20 02:54:26.719725 sshd-session[6834]: pam_unix(sshd:session): session closed for user core
Jan 20 02:54:26.779551 systemd[1]: sshd@49-10.0.0.117:22-10.0.0.1:40336.service: Deactivated successfully.
Jan 20 02:54:26.807728 systemd[1]: session-51.scope: Deactivated successfully.
Jan 20 02:54:26.820596 systemd-logind[1588]: Session 51 logged out. Waiting for processes to exit.
Jan 20 02:54:26.839136 systemd-logind[1588]: Removed session 51.
Jan 20 02:54:31.805951 systemd[1]: Started sshd@50-10.0.0.117:22-10.0.0.1:40338.service - OpenSSH per-connection server daemon (10.0.0.1:40338).
Jan 20 02:54:32.330674 sshd[6878]: Accepted publickey for core from 10.0.0.1 port 40338 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:54:32.356527 sshd-session[6878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:54:32.425433 systemd-logind[1588]: New session 52 of user core.
Jan 20 02:54:32.482475 systemd[1]: Started session-52.scope - Session 52 of User core.
Jan 20 02:54:33.747361 sshd[6883]: Connection closed by 10.0.0.1 port 40338
Jan 20 02:54:33.748973 sshd-session[6878]: pam_unix(sshd:session): session closed for user core
Jan 20 02:54:33.804628 systemd-logind[1588]: Session 52 logged out. Waiting for processes to exit.
Jan 20 02:54:33.816747 systemd[1]: sshd@50-10.0.0.117:22-10.0.0.1:40338.service: Deactivated successfully.
Jan 20 02:54:33.840766 systemd[1]: session-52.scope: Deactivated successfully.
Jan 20 02:54:33.889747 systemd-logind[1588]: Removed session 52.
Jan 20 02:54:38.833586 systemd[1]: Started sshd@51-10.0.0.117:22-10.0.0.1:60526.service - OpenSSH per-connection server daemon (10.0.0.1:60526).
Jan 20 02:54:39.350719 sshd[6923]: Accepted publickey for core from 10.0.0.1 port 60526 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:54:39.382055 sshd-session[6923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:54:39.435025 systemd-logind[1588]: New session 53 of user core.
Jan 20 02:54:39.472945 systemd[1]: Started session-53.scope - Session 53 of User core.
Jan 20 02:54:40.379439 sshd[6927]: Connection closed by 10.0.0.1 port 60526
Jan 20 02:54:40.382080 sshd-session[6923]: pam_unix(sshd:session): session closed for user core
Jan 20 02:54:40.402912 systemd[1]: sshd@51-10.0.0.117:22-10.0.0.1:60526.service: Deactivated successfully.
Jan 20 02:54:40.419382 systemd[1]: session-53.scope: Deactivated successfully.
Jan 20 02:54:40.434081 systemd-logind[1588]: Session 53 logged out. Waiting for processes to exit.
Jan 20 02:54:40.450721 systemd-logind[1588]: Removed session 53.
Jan 20 02:54:41.226453 kubelet[3020]: E0120 02:54:41.220144 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:54:45.463996 systemd[1]: Started sshd@52-10.0.0.117:22-10.0.0.1:60022.service - OpenSSH per-connection server daemon (10.0.0.1:60022).
Jan 20 02:54:46.108972 sshd[6962]: Accepted publickey for core from 10.0.0.1 port 60022 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:54:46.146053 sshd-session[6962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:54:46.277705 systemd-logind[1588]: New session 54 of user core.
Jan 20 02:54:46.346997 systemd[1]: Started session-54.scope - Session 54 of User core.
Jan 20 02:54:47.553675 sshd[6978]: Connection closed by 10.0.0.1 port 60022
Jan 20 02:54:47.551927 sshd-session[6962]: pam_unix(sshd:session): session closed for user core
Jan 20 02:54:47.604050 systemd[1]: sshd@52-10.0.0.117:22-10.0.0.1:60022.service: Deactivated successfully.
Jan 20 02:54:47.608730 systemd[1]: session-54.scope: Deactivated successfully.
Jan 20 02:54:47.618975 systemd-logind[1588]: Session 54 logged out. Waiting for processes to exit.
Jan 20 02:54:47.621741 systemd-logind[1588]: Removed session 54.
Jan 20 02:54:52.694848 systemd[1]: Started sshd@53-10.0.0.117:22-10.0.0.1:60024.service - OpenSSH per-connection server daemon (10.0.0.1:60024).
Jan 20 02:54:53.236458 sshd[7013]: Accepted publickey for core from 10.0.0.1 port 60024 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:54:53.260823 sshd-session[7013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:54:53.382456 systemd-logind[1588]: New session 55 of user core.
Jan 20 02:54:53.402764 systemd[1]: Started session-55.scope - Session 55 of User core.
Jan 20 02:54:55.239584 sshd[7017]: Connection closed by 10.0.0.1 port 60024
Jan 20 02:54:55.245715 sshd-session[7013]: pam_unix(sshd:session): session closed for user core
Jan 20 02:54:55.284579 systemd[1]: sshd@53-10.0.0.117:22-10.0.0.1:60024.service: Deactivated successfully.
Jan 20 02:54:55.302869 systemd[1]: session-55.scope: Deactivated successfully.
Jan 20 02:54:55.314999 systemd-logind[1588]: Session 55 logged out. Waiting for processes to exit.
Jan 20 02:54:55.329599 systemd-logind[1588]: Removed session 55.
Jan 20 02:55:00.374152 systemd[1]: Started sshd@54-10.0.0.117:22-10.0.0.1:54208.service - OpenSSH per-connection server daemon (10.0.0.1:54208).
Jan 20 02:55:01.023992 sshd[7057]: Accepted publickey for core from 10.0.0.1 port 54208 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:55:01.052511 sshd-session[7057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:55:01.134926 systemd-logind[1588]: New session 56 of user core.
Jan 20 02:55:01.157124 systemd[1]: Started session-56.scope - Session 56 of User core.
Jan 20 02:55:02.709766 sshd[7061]: Connection closed by 10.0.0.1 port 54208
Jan 20 02:55:02.742754 sshd-session[7057]: pam_unix(sshd:session): session closed for user core
Jan 20 02:55:02.824986 systemd[1]: sshd@54-10.0.0.117:22-10.0.0.1:54208.service: Deactivated successfully.
Jan 20 02:55:02.860977 systemd[1]: session-56.scope: Deactivated successfully.
Jan 20 02:55:02.904889 systemd-logind[1588]: Session 56 logged out. Waiting for processes to exit.
Jan 20 02:55:02.920960 systemd-logind[1588]: Removed session 56.
Jan 20 02:55:07.789769 systemd[1]: Started sshd@55-10.0.0.117:22-10.0.0.1:41694.service - OpenSSH per-connection server daemon (10.0.0.1:41694).
Jan 20 02:55:08.147010 sshd[7094]: Accepted publickey for core from 10.0.0.1 port 41694 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:55:08.192061 sshd-session[7094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:55:08.253667 kubelet[3020]: E0120 02:55:08.251163 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:55:08.301580 systemd-logind[1588]: New session 57 of user core.
Jan 20 02:55:08.357802 systemd[1]: Started session-57.scope - Session 57 of User core.
Jan 20 02:55:09.620029 sshd[7112]: Connection closed by 10.0.0.1 port 41694
Jan 20 02:55:09.645892 sshd-session[7094]: pam_unix(sshd:session): session closed for user core
Jan 20 02:55:09.730313 systemd[1]: sshd@55-10.0.0.117:22-10.0.0.1:41694.service: Deactivated successfully.
Jan 20 02:55:09.750108 systemd[1]: session-57.scope: Deactivated successfully.
Jan 20 02:55:09.760879 systemd-logind[1588]: Session 57 logged out. Waiting for processes to exit.
Jan 20 02:55:09.778653 systemd-logind[1588]: Removed session 57.
Jan 20 02:55:14.830397 systemd[1]: Started sshd@56-10.0.0.117:22-10.0.0.1:36546.service - OpenSSH per-connection server daemon (10.0.0.1:36546).
Jan 20 02:55:15.333824 sshd[7148]: Accepted publickey for core from 10.0.0.1 port 36546 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:55:15.355899 sshd-session[7148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:55:15.410062 systemd-logind[1588]: New session 58 of user core.
Jan 20 02:55:15.430187 systemd[1]: Started session-58.scope - Session 58 of User core.
Jan 20 02:55:16.608700 sshd[7158]: Connection closed by 10.0.0.1 port 36546
Jan 20 02:55:16.613406 sshd-session[7148]: pam_unix(sshd:session): session closed for user core
Jan 20 02:55:16.683940 systemd[1]: sshd@56-10.0.0.117:22-10.0.0.1:36546.service: Deactivated successfully.
Jan 20 02:55:16.720873 systemd[1]: session-58.scope: Deactivated successfully.
Jan 20 02:55:16.749144 systemd-logind[1588]: Session 58 logged out. Waiting for processes to exit.
Jan 20 02:55:16.778418 systemd-logind[1588]: Removed session 58.
Jan 20 02:55:17.222767 kubelet[3020]: E0120 02:55:17.221837 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:55:21.720919 systemd[1]: Started sshd@57-10.0.0.117:22-10.0.0.1:36548.service - OpenSSH per-connection server daemon (10.0.0.1:36548).
Jan 20 02:55:22.335954 sshd[7192]: Accepted publickey for core from 10.0.0.1 port 36548 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:55:22.353010 sshd-session[7192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:55:22.420375 systemd-logind[1588]: New session 59 of user core.
Jan 20 02:55:22.487915 systemd[1]: Started session-59.scope - Session 59 of User core.
Jan 20 02:55:23.222993 kubelet[3020]: E0120 02:55:23.221896 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:55:23.307710 sshd[7196]: Connection closed by 10.0.0.1 port 36548
Jan 20 02:55:23.304635 sshd-session[7192]: pam_unix(sshd:session): session closed for user core
Jan 20 02:55:23.334806 systemd[1]: sshd@57-10.0.0.117:22-10.0.0.1:36548.service: Deactivated successfully.
Jan 20 02:55:23.339089 systemd[1]: session-59.scope: Deactivated successfully.
Jan 20 02:55:23.347134 systemd-logind[1588]: Session 59 logged out. Waiting for processes to exit.
Jan 20 02:55:23.384113 systemd-logind[1588]: Removed session 59.
Jan 20 02:55:24.238468 kubelet[3020]: E0120 02:55:24.234632 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:55:26.269914 kubelet[3020]: E0120 02:55:26.254143 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:55:28.412149 systemd[1]: Started sshd@58-10.0.0.117:22-10.0.0.1:34194.service - OpenSSH per-connection server daemon (10.0.0.1:34194).
Jan 20 02:55:28.940093 sshd[7232]: Accepted publickey for core from 10.0.0.1 port 34194 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:55:29.004415 sshd-session[7232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:55:29.091802 systemd-logind[1588]: New session 60 of user core.
Jan 20 02:55:29.129083 systemd[1]: Started session-60.scope - Session 60 of User core.
Jan 20 02:55:30.245707 sshd[7236]: Connection closed by 10.0.0.1 port 34194
Jan 20 02:55:30.249689 sshd-session[7232]: pam_unix(sshd:session): session closed for user core
Jan 20 02:55:30.312094 systemd-logind[1588]: Session 60 logged out. Waiting for processes to exit.
Jan 20 02:55:30.318877 systemd[1]: sshd@58-10.0.0.117:22-10.0.0.1:34194.service: Deactivated successfully.
Jan 20 02:55:30.338423 systemd[1]: session-60.scope: Deactivated successfully.
Jan 20 02:55:30.374174 systemd-logind[1588]: Removed session 60.
Jan 20 02:55:35.502829 systemd[1]: Started sshd@59-10.0.0.117:22-10.0.0.1:49798.service - OpenSSH per-connection server daemon (10.0.0.1:49798).
Jan 20 02:55:36.382097 sshd[7273]: Accepted publickey for core from 10.0.0.1 port 49798 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:55:36.398779 sshd-session[7273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:55:36.531799 systemd-logind[1588]: New session 61 of user core.
Jan 20 02:55:36.585787 systemd[1]: Started session-61.scope - Session 61 of User core.
Jan 20 02:55:37.737422 sshd[7294]: Connection closed by 10.0.0.1 port 49798
Jan 20 02:55:37.737967 sshd-session[7273]: pam_unix(sshd:session): session closed for user core
Jan 20 02:55:37.788659 systemd[1]: sshd@59-10.0.0.117:22-10.0.0.1:49798.service: Deactivated successfully.
Jan 20 02:55:37.816924 systemd[1]: session-61.scope: Deactivated successfully.
Jan 20 02:55:37.847069 systemd-logind[1588]: Session 61 logged out. Waiting for processes to exit.
Jan 20 02:55:37.878743 systemd-logind[1588]: Removed session 61.
Jan 20 02:55:42.921015 systemd[1]: Started sshd@60-10.0.0.117:22-10.0.0.1:49812.service - OpenSSH per-connection server daemon (10.0.0.1:49812).
Jan 20 02:55:43.479392 sshd[7330]: Accepted publickey for core from 10.0.0.1 port 49812 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:55:43.504604 sshd-session[7330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:55:43.573740 systemd-logind[1588]: New session 62 of user core.
Jan 20 02:55:43.599041 systemd[1]: Started session-62.scope - Session 62 of User core.
Jan 20 02:55:45.007073 sshd[7334]: Connection closed by 10.0.0.1 port 49812
Jan 20 02:55:45.006430 sshd-session[7330]: pam_unix(sshd:session): session closed for user core
Jan 20 02:55:45.060545 systemd[1]: sshd@60-10.0.0.117:22-10.0.0.1:49812.service: Deactivated successfully.
Jan 20 02:55:45.094091 systemd[1]: session-62.scope: Deactivated successfully.
Jan 20 02:55:45.117582 systemd-logind[1588]: Session 62 logged out. Waiting for processes to exit.
Jan 20 02:55:45.131398 systemd-logind[1588]: Removed session 62.
Jan 20 02:55:45.227496 kubelet[3020]: E0120 02:55:45.224085 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:55:45.247169 kubelet[3020]: E0120 02:55:45.245884 3020 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:55:50.142447 systemd[1]: Started sshd@61-10.0.0.117:22-10.0.0.1:41750.service - OpenSSH per-connection server daemon (10.0.0.1:41750).
Jan 20 02:55:50.878154 sshd[7369]: Accepted publickey for core from 10.0.0.1 port 41750 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:55:50.898611 sshd-session[7369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:55:50.948500 systemd-logind[1588]: New session 63 of user core.
Jan 20 02:55:50.993845 systemd[1]: Started session-63.scope - Session 63 of User core.
Jan 20 02:55:51.997064 sshd[7373]: Connection closed by 10.0.0.1 port 41750
Jan 20 02:55:52.018879 sshd-session[7369]: pam_unix(sshd:session): session closed for user core
Jan 20 02:55:52.095653 systemd[1]: sshd@61-10.0.0.117:22-10.0.0.1:41750.service: Deactivated successfully.
Jan 20 02:55:52.119673 systemd[1]: session-63.scope: Deactivated successfully.
Jan 20 02:55:52.188945 systemd-logind[1588]: Session 63 logged out. Waiting for processes to exit.
Jan 20 02:55:52.216842 systemd-logind[1588]: Removed session 63.
Jan 20 02:55:57.255825 systemd[1]: Started sshd@62-10.0.0.117:22-10.0.0.1:41474.service - OpenSSH per-connection server daemon (10.0.0.1:41474).
Jan 20 02:55:57.893032 sshd[7411]: Accepted publickey for core from 10.0.0.1 port 41474 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:55:57.892322 sshd-session[7411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:55:57.952503 systemd-logind[1588]: New session 64 of user core.
Jan 20 02:55:58.013027 systemd[1]: Started session-64.scope - Session 64 of User core.
Jan 20 02:55:59.199572 sshd[7427]: Connection closed by 10.0.0.1 port 41474
Jan 20 02:55:59.205933 sshd-session[7411]: pam_unix(sshd:session): session closed for user core
Jan 20 02:55:59.255500 systemd[1]: sshd@62-10.0.0.117:22-10.0.0.1:41474.service: Deactivated successfully.
Jan 20 02:55:59.289985 systemd[1]: session-64.scope: Deactivated successfully.
Jan 20 02:55:59.336507 systemd-logind[1588]: Session 64 logged out. Waiting for processes to exit.
Jan 20 02:55:59.347460 systemd-logind[1588]: Removed session 64.
Jan 20 02:56:04.319020 systemd[1]: Started sshd@63-10.0.0.117:22-10.0.0.1:41490.service - OpenSSH per-connection server daemon (10.0.0.1:41490).
Jan 20 02:56:04.972489 sshd[7464]: Accepted publickey for core from 10.0.0.1 port 41490 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:56:04.975635 sshd-session[7464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:56:05.024649 systemd-logind[1588]: New session 65 of user core.
Jan 20 02:56:05.096750 systemd[1]: Started session-65.scope - Session 65 of User core.
Jan 20 02:56:05.928775 sshd[7468]: Connection closed by 10.0.0.1 port 41490
Jan 20 02:56:05.931386 sshd-session[7464]: pam_unix(sshd:session): session closed for user core
Jan 20 02:56:05.987159 systemd-logind[1588]: Session 65 logged out. Waiting for processes to exit.
Jan 20 02:56:05.991036 systemd[1]: sshd@63-10.0.0.117:22-10.0.0.1:41490.service: Deactivated successfully.
Jan 20 02:56:06.013062 systemd[1]: session-65.scope: Deactivated successfully.
Jan 20 02:56:06.033481 systemd-logind[1588]: Removed session 65.
Jan 20 02:56:11.053700 systemd[1]: Started sshd@64-10.0.0.117:22-10.0.0.1:40694.service - OpenSSH per-connection server daemon (10.0.0.1:40694).
Jan 20 02:56:11.474697 sshd[7502]: Accepted publickey for core from 10.0.0.1 port 40694 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:56:11.497411 sshd-session[7502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:56:11.553105 systemd-logind[1588]: New session 66 of user core.
Jan 20 02:56:11.577406 systemd[1]: Started session-66.scope - Session 66 of User core.
Jan 20 02:56:13.001628 sshd[7509]: Connection closed by 10.0.0.1 port 40694
Jan 20 02:56:13.006174 sshd-session[7502]: pam_unix(sshd:session): session closed for user core
Jan 20 02:56:13.098127 systemd[1]: sshd@64-10.0.0.117:22-10.0.0.1:40694.service: Deactivated successfully.
Jan 20 02:56:13.150734 systemd[1]: session-66.scope: Deactivated successfully.
Jan 20 02:56:13.201484 systemd-logind[1588]: Session 66 logged out. Waiting for processes to exit.
Jan 20 02:56:13.212438 systemd-logind[1588]: Removed session 66.