Jan 28 01:10:01.348135 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 22:22:24 -00 2026 Jan 28 01:10:01.348175 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=71544b7bf64a92b2aba342c16b083723a12bedf106d3ddb24ccb63046196f1b3 Jan 28 01:10:01.348192 kernel: BIOS-provided physical RAM map: Jan 28 01:10:01.348201 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 28 01:10:01.348209 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 28 01:10:01.348218 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 28 01:10:01.348229 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 28 01:10:01.348238 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 28 01:10:01.348304 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 28 01:10:01.348315 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 28 01:10:01.348328 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 28 01:10:01.348337 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 28 01:10:01.348347 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 28 01:10:01.348356 kernel: NX (Execute Disable) protection: active Jan 28 01:10:01.348367 kernel: APIC: Static calls initialized Jan 28 01:10:01.348380 kernel: SMBIOS 2.8 present. 
Jan 28 01:10:01.348435 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 28 01:10:01.348446 kernel: DMI: Memory slots populated: 1/1 Jan 28 01:10:01.348455 kernel: Hypervisor detected: KVM Jan 28 01:10:01.348465 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 28 01:10:01.348475 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 28 01:10:01.348485 kernel: kvm-clock: using sched offset of 12850435429 cycles Jan 28 01:10:01.348496 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 28 01:10:01.348507 kernel: tsc: Detected 2445.426 MHz processor Jan 28 01:10:01.348521 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 28 01:10:01.348532 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 28 01:10:01.348542 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 28 01:10:01.348553 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 28 01:10:01.348563 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 28 01:10:01.348574 kernel: Using GB pages for direct mapping Jan 28 01:10:01.348584 kernel: ACPI: Early table checksum verification disabled Jan 28 01:10:01.348598 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 28 01:10:01.348610 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 01:10:01.348620 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 01:10:01.348630 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 01:10:01.348639 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 28 01:10:01.348650 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 01:10:01.348660 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 01:10:01.348675 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 01:10:01.348686 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 01:10:01.348701 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 28 01:10:01.348712 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 28 01:10:01.348723 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 28 01:10:01.348737 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 28 01:10:01.348748 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 28 01:10:01.348759 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 28 01:10:01.348770 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 28 01:10:01.348780 kernel: No NUMA configuration found Jan 28 01:10:01.348792 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 28 01:10:01.348802 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Jan 28 01:10:01.348816 kernel: Zone ranges: Jan 28 01:10:01.348827 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 28 01:10:01.348838 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 28 01:10:01.348848 kernel: Normal empty Jan 28 01:10:01.348859 kernel: Device empty Jan 28 01:10:01.348870 kernel: Movable zone start for each node Jan 28 01:10:01.348880 kernel: Early memory node ranges Jan 28 01:10:01.348894 kernel: node 0: [mem 
0x0000000000001000-0x000000000009efff] Jan 28 01:10:01.348905 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 28 01:10:01.349029 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Jan 28 01:10:01.349216 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 28 01:10:01.349232 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 28 01:10:01.349306 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 28 01:10:01.349322 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 28 01:10:01.349335 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 28 01:10:01.349353 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 28 01:10:01.349364 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 28 01:10:01.349437 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 28 01:10:01.349451 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 28 01:10:01.349464 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 28 01:10:01.349476 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 28 01:10:01.349489 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 28 01:10:01.349506 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 28 01:10:01.349518 kernel: TSC deadline timer available Jan 28 01:10:01.349530 kernel: CPU topo: Max. logical packages: 1 Jan 28 01:10:01.349542 kernel: CPU topo: Max. logical dies: 1 Jan 28 01:10:01.349555 kernel: CPU topo: Max. dies per package: 1 Jan 28 01:10:01.349569 kernel: CPU topo: Max. threads per core: 1 Jan 28 01:10:01.349582 kernel: CPU topo: Num. cores per package: 4 Jan 28 01:10:01.349599 kernel: CPU topo: Num. threads per package: 4 Jan 28 01:10:01.349609 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 28 01:10:01.349620 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 28 01:10:01.349630 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 28 01:10:01.349641 kernel: kvm-guest: setup PV sched yield Jan 28 01:10:01.349653 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 28 01:10:01.349663 kernel: Booting paravirtualized kernel on KVM Jan 28 01:10:01.349675 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 28 01:10:01.349689 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 28 01:10:01.349700 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 28 01:10:01.349712 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 28 01:10:01.349725 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 28 01:10:01.349736 kernel: kvm-guest: PV spinlocks enabled Jan 28 01:10:01.349746 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 28 01:10:01.349758 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=71544b7bf64a92b2aba342c16b083723a12bedf106d3ddb24ccb63046196f1b3 Jan 28 01:10:01.349773 kernel: random: crng init done Jan 28 01:10:01.349786 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 28 01:10:01.349800 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 28 
01:10:01.349812 kernel: Fallback order for Node 0: 0 Jan 28 01:10:01.349824 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Jan 28 01:10:01.349837 kernel: Policy zone: DMA32 Jan 28 01:10:01.349855 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 28 01:10:01.349870 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 28 01:10:01.349884 kernel: ftrace: allocating 40128 entries in 157 pages Jan 28 01:10:01.349897 kernel: ftrace: allocated 157 pages with 5 groups Jan 28 01:10:01.349910 kernel: Dynamic Preempt: voluntary Jan 28 01:10:01.350287 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 28 01:10:01.350300 kernel: rcu: RCU event tracing is enabled. Jan 28 01:10:01.350312 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 28 01:10:01.350328 kernel: Trampoline variant of Tasks RCU enabled. Jan 28 01:10:01.350395 kernel: Rude variant of Tasks RCU enabled. Jan 28 01:10:01.350407 kernel: Tracing variant of Tasks RCU enabled. Jan 28 01:10:01.350418 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 28 01:10:01.350429 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 28 01:10:01.350441 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 28 01:10:01.350452 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 28 01:10:01.350467 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 28 01:10:01.350478 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 28 01:10:01.350491 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 28 01:10:01.350511 kernel: Console: colour VGA+ 80x25 Jan 28 01:10:01.350526 kernel: printk: legacy console [ttyS0] enabled Jan 28 01:10:01.350537 kernel: ACPI: Core revision 20240827 Jan 28 01:10:01.350549 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 28 01:10:01.350560 kernel: APIC: Switch to symmetric I/O mode setup Jan 28 01:10:01.350571 kernel: x2apic enabled Jan 28 01:10:01.350586 kernel: APIC: Switched APIC routing to: physical x2apic Jan 28 01:10:01.350650 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 28 01:10:01.350663 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 28 01:10:01.350674 kernel: kvm-guest: setup PV IPIs Jan 28 01:10:01.350689 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 28 01:10:01.350702 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 28 01:10:01.350714 kernel: Calibrating delay loop (skipped) preset value.. 
4890.85 BogoMIPS (lpj=2445426) Jan 28 01:10:01.350726 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 28 01:10:01.350737 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 28 01:10:01.350749 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 28 01:10:01.350761 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 28 01:10:01.350775 kernel: Spectre V2 : Mitigation: Retpolines Jan 28 01:10:01.350787 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 28 01:10:01.350798 kernel: Speculative Store Bypass: Vulnerable Jan 28 01:10:01.350810 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 28 01:10:01.350822 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 28 01:10:01.350834 kernel: active return thunk: srso_alias_return_thunk Jan 28 01:10:01.351258 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 28 01:10:01.351276 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 28 01:10:01.351288 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 28 01:10:01.351299 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 28 01:10:01.351311 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 28 01:10:01.351323 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 28 01:10:01.351335 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 28 01:10:01.351347 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 28 01:10:01.351361 kernel: Freeing SMP alternatives memory: 32K Jan 28 01:10:01.351372 kernel: pid_max: default: 32768 minimum: 301 Jan 28 01:10:01.351384 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 28 01:10:01.351398 kernel: landlock: Up and running. Jan 28 01:10:01.351409 kernel: SELinux: Initializing. Jan 28 01:10:01.351419 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 28 01:10:01.351430 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 28 01:10:01.351502 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 28 01:10:01.351515 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 28 01:10:01.351527 kernel: signal: max sigframe size: 1776 Jan 28 01:10:01.351538 kernel: rcu: Hierarchical SRCU implementation. Jan 28 01:10:01.351550 kernel: rcu: Max phase no-delay instances is 400. Jan 28 01:10:01.351562 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 28 01:10:01.351574 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 28 01:10:01.351589 kernel: smp: Bringing up secondary CPUs ... Jan 28 01:10:01.351600 kernel: smpboot: x86: Booting SMP configuration: Jan 28 01:10:01.351612 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 28 01:10:01.351623 kernel: smp: Brought up 1 node, 4 CPUs Jan 28 01:10:01.351635 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 28 01:10:01.351647 kernel: Memory: 2445292K/2571752K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 120524K reserved, 0K cma-reserved) Jan 28 01:10:01.351658 kernel: devtmpfs: initialized Jan 28 01:10:01.351673 kernel: x86/mm: Memory block size: 128MB Jan 28 01:10:01.351684 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 28 01:10:01.351696 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 28 01:10:01.351708 kernel: pinctrl core: initialized pinctrl subsystem Jan 28 01:10:01.351720 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 28 01:10:01.351731 kernel: audit: initializing netlink subsys (disabled) Jan 28 01:10:01.351743 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 28 01:10:01.351757 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 28 01:10:01.351769 kernel: audit: type=2000 audit(1769562580.361:1): state=initialized audit_enabled=0 res=1 Jan 28 01:10:01.351780 kernel: cpuidle: using governor menu Jan 28 01:10:01.351838 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 28 01:10:01.351851 kernel: dca service started, version 1.12.1 Jan 28 01:10:01.351865 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 28 01:10:01.351879 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 28 01:10:01.351899 kernel: PCI: Using configuration type 1 for base access Jan 28 01:10:01.352180 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 28 01:10:01.352196 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 28 01:10:01.352209 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 28 01:10:01.352220 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 28 01:10:01.352232 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 28 01:10:01.352244 kernel: ACPI: Added _OSI(Module Device) Jan 28 01:10:01.352260 kernel: ACPI: Added _OSI(Processor Device) Jan 28 01:10:01.352271 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 28 01:10:01.352283 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 28 01:10:01.352294 kernel: ACPI: Interpreter enabled Jan 28 01:10:01.352306 kernel: ACPI: PM: (supports S0 S3 S5) Jan 28 01:10:01.352320 kernel: ACPI: Using IOAPIC for interrupt routing Jan 28 01:10:01.352331 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 28 01:10:01.352345 kernel: PCI: Using E820 reservations for host bridge windows Jan 28 01:10:01.352357 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 28 01:10:01.352368 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 28 01:10:01.352716 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 28 01:10:01.353280 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 28 01:10:01.353526 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 28 01:10:01.353548 kernel: PCI host bridge to bus 0000:00 Jan 28 01:10:01.353857 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 28 01:10:01.354416 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 28 01:10:01.354631 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 28 01:10:01.354832 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 28 01:10:01.355347 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 28 01:10:01.355565 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 28 01:10:01.355909 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 28 01:10:01.372513 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 28 01:10:01.372803 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 28 01:10:01.373786 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Jan 28 01:10:01.374608 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Jan 28 01:10:01.375225 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Jan 28 01:10:01.375513 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 28 01:10:01.375815 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 28 01:10:01.376279 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Jan 28 01:10:01.376568 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Jan 28 01:10:01.377286 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Jan 28 01:10:01.377599 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 28 01:10:01.377887 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Jan 28 01:10:01.380204 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Jan 28 01:10:01.380456 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Jan 28 01:10:01.380768 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 28 01:10:01.381223 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Jan 28 01:10:01.381476 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Jan 28 01:10:01.381720 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 28 01:10:01.382355 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Jan 28 01:10:01.382663 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 28 01:10:01.384894 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 28 01:10:01.385435 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 42968 usecs Jan 28 01:10:01.385716 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 28 01:10:01.386474 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Jan 28 01:10:01.386739 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Jan 28 01:10:01.389578 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 28 01:10:01.390143 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 28 01:10:01.390172 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 28 01:10:01.390187 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 28 01:10:01.390200 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 28 01:10:01.390213 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 28 01:10:01.390226 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 28 01:10:01.390247 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 28 01:10:01.390260 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 28 01:10:01.390273 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 28 01:10:01.390288 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 28 01:10:01.390302 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 28 01:10:01.390316 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 28 01:10:01.390331 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 28 01:10:01.390344 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 28 01:10:01.390364 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 28 01:10:01.390379 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 28 01:10:01.390391 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 28 01:10:01.390402 kernel: iommu: Default domain type: Translated Jan 28 01:10:01.390415 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 28 01:10:01.390427 kernel: PCI: Using ACPI for IRQ routing Jan 28 01:10:01.390440 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 28 01:10:01.390456 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 28 01:10:01.390469 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 28 01:10:01.390749 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 28 01:10:01.392301 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 28 01:10:01.392875 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 28 01:10:01.393275 kernel: vgaarb: loaded Jan 28 01:10:01.393294 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 28 01:10:01.393307 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 28 01:10:01.393319 kernel: clocksource: Switched to 
clocksource kvm-clock Jan 28 01:10:01.393330 kernel: VFS: Disk quotas dquot_6.6.0 Jan 28 01:10:01.393342 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 28 01:10:01.393354 kernel: pnp: PnP ACPI init Jan 28 01:10:01.394807 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 28 01:10:01.394834 kernel: pnp: PnP ACPI: found 6 devices Jan 28 01:10:01.394848 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 28 01:10:01.394863 kernel: NET: Registered PF_INET protocol family Jan 28 01:10:01.394877 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 28 01:10:01.394891 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 28 01:10:01.394905 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 28 01:10:01.395033 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 28 01:10:01.395120 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 28 01:10:01.395133 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 28 01:10:01.395146 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 28 01:10:01.395159 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 28 01:10:01.395173 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 28 01:10:01.395186 kernel: NET: Registered PF_XDP protocol family Jan 28 01:10:01.395446 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 28 01:10:01.395698 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 28 01:10:01.397217 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 28 01:10:01.397493 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 28 01:10:01.397827 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 28 01:10:01.399330 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 28 01:10:01.399352 kernel: PCI: CLS 0 bytes, default 64 Jan 28 01:10:01.399372 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 28 01:10:01.399385 kernel: Initialise system trusted keyrings Jan 28 01:10:01.399398 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 28 01:10:01.399411 kernel: Key type asymmetric registered Jan 28 01:10:01.399423 kernel: Asymmetric key parser 'x509' registered Jan 28 01:10:01.399435 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 28 01:10:01.399448 kernel: io scheduler mq-deadline registered Jan 28 01:10:01.399464 kernel: io scheduler kyber registered Jan 28 01:10:01.399542 kernel: io scheduler bfq registered Jan 28 01:10:01.399557 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 28 01:10:01.399571 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 28 01:10:01.399584 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 28 01:10:01.399597 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 28 01:10:01.399609 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 28 01:10:01.399622 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 28 01:10:01.399696 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 28 01:10:01.399711 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 28 01:10:01.399725 kernel: serio: i8042 AUX port at 0x60,0x64 
irq 12 Jan 28 01:10:01.400228 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 28 01:10:01.400475 kernel: rtc_cmos 00:04: registered as rtc0 Jan 28 01:10:01.400493 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jan 28 01:10:01.400735 kernel: rtc_cmos 00:04: setting system clock to 2026-01-28T01:09:49 UTC (1769562589) Jan 28 01:10:01.407686 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 28 01:10:01.407716 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 28 01:10:01.407729 kernel: NET: Registered PF_INET6 protocol family Jan 28 01:10:01.407740 kernel: Segment Routing with IPv6 Jan 28 01:10:01.407750 kernel: In-situ OAM (IOAM) with IPv6 Jan 28 01:10:01.407763 kernel: NET: Registered PF_PACKET protocol family Jan 28 01:10:01.407788 kernel: Key type dns_resolver registered Jan 28 01:10:01.407800 kernel: IPI shorthand broadcast: enabled Jan 28 01:10:01.407811 kernel: sched_clock: Marking stable (8062042695, 599468782)->(9830272765, -1168761288) Jan 28 01:10:01.407821 kernel: registered taskstats version 1 Jan 28 01:10:01.407832 kernel: Loading compiled-in X.509 certificates Jan 28 01:10:01.407842 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 0eb3c2aae9988d4ab7f0e142c4f5c61453c9ddb3' Jan 28 01:10:01.407852 kernel: Demotion targets for Node 0: null Jan 28 01:10:01.407870 kernel: Key type .fscrypt registered Jan 28 01:10:01.407881 kernel: Key type fscrypt-provisioning registered Jan 28 01:10:01.407891 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 28 01:10:01.407902 kernel: ima: Allocated hash algorithm: sha1 Jan 28 01:10:01.410656 kernel: ima: No architecture policies found Jan 28 01:10:01.410679 kernel: clk: Disabling unused clocks Jan 28 01:10:01.410692 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 28 01:10:01.410710 kernel: Write protecting the kernel read-only data: 47104k Jan 28 01:10:01.410723 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 28 01:10:01.410735 kernel: Run /init as init process Jan 28 01:10:01.410748 kernel: with arguments: Jan 28 01:10:01.410760 kernel: /init Jan 28 01:10:01.410773 kernel: with environment: Jan 28 01:10:01.410785 kernel: HOME=/ Jan 28 01:10:01.410801 kernel: TERM=linux Jan 28 01:10:01.410813 kernel: SCSI subsystem initialized Jan 28 01:10:01.410826 kernel: libata version 3.00 loaded. 
Jan 28 01:10:01.411447 kernel: ahci 0000:00:1f.2: version 3.0 Jan 28 01:10:01.411473 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 28 01:10:01.411700 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 28 01:10:01.416647 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 28 01:10:01.416899 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 28 01:10:01.417627 kernel: scsi host0: ahci Jan 28 01:10:01.417855 kernel: scsi host1: ahci Jan 28 01:10:01.424883 kernel: scsi host2: ahci Jan 28 01:10:01.426740 kernel: scsi host3: ahci Jan 28 01:10:01.429407 kernel: scsi host4: ahci Jan 28 01:10:01.429763 kernel: scsi host5: ahci Jan 28 01:10:01.429788 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Jan 28 01:10:01.429809 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Jan 28 01:10:01.429822 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Jan 28 01:10:01.429835 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Jan 28 01:10:01.429854 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Jan 28 01:10:01.429869 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Jan 28 01:10:01.429884 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 28 01:10:01.429896 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 28 01:10:01.430031 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 28 01:10:01.430114 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 28 01:10:01.430129 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 28 01:10:01.430148 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 28 01:10:01.430160 kernel: ata3.00: LPM support broken, forcing max_power Jan 28 01:10:01.430172 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 28 01:10:01.430185 kernel: ata3.00: applying bridge limits Jan 28 01:10:01.430198 kernel: ata3.00: LPM support broken, forcing max_power Jan 28 01:10:01.430211 kernel: ata3.00: configured for UDMA/100 Jan 28 01:10:01.432903 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 28 01:10:01.434551 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 28 01:10:01.434794 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 28 01:10:01.434813 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 28 01:10:01.435321 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 28 01:10:01.435349 kernel: GPT:16515071 != 27000831 Jan 28 01:10:01.435365 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 28 01:10:01.435388 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 28 01:10:01.435402 kernel: GPT:16515071 != 27000831 Jan 28 01:10:01.435416 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 28 01:10:01.435431 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 01:10:01.435741 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 28 01:10:01.435766 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 28 01:10:01.435781 kernel: device-mapper: uevent: version 1.0.3 Jan 28 01:10:01.435801 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 28 01:10:01.435816 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 28 01:10:01.435829 kernel: raid6: avx2x4 gen() 19187 MB/s Jan 28 01:10:01.435843 kernel: raid6: avx2x2 gen() 4221 MB/s Jan 28 01:10:01.435858 kernel: raid6: avx2x1 gen() 7898 MB/s Jan 28 01:10:01.435871 kernel: raid6: using algorithm avx2x4 gen() 19187 MB/s Jan 28 01:10:01.435886 kernel: raid6: .... xor() 2682 MB/s, rmw enabled Jan 28 01:10:01.435905 kernel: raid6: using avx2x2 recovery algorithm Jan 28 01:10:01.439242 kernel: xor: automatically using best checksumming function avx Jan 28 01:10:01.439262 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 28 01:10:01.439287 kernel: BTRFS: device fsid 0f5fa021-4357-40bb-b32a-e1579c5824ad devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (181) Jan 28 01:10:01.439301 kernel: BTRFS info (device dm-0): first mount of filesystem 0f5fa021-4357-40bb-b32a-e1579c5824ad Jan 28 01:10:01.439321 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 28 01:10:01.439336 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 28 01:10:01.439350 kernel: BTRFS info (device dm-0): enabling free space tree Jan 28 01:10:01.439365 kernel: loop: module loaded Jan 28 01:10:01.439379 kernel: loop0: detected capacity change from 0 to 100552 Jan 28 01:10:01.439395 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 01:10:01.439412 systemd[1]: Successfully made /usr/ read-only. Jan 28 01:10:01.439437 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 01:10:01.439455 systemd[1]: Detected virtualization kvm. Jan 28 01:10:01.439470 systemd[1]: Detected architecture x86-64. Jan 28 01:10:01.439482 systemd[1]: Running in initrd. Jan 28 01:10:01.439495 systemd[1]: No hostname configured, using default hostname. Jan 28 01:10:01.439512 systemd[1]: Hostname set to . Jan 28 01:10:01.439524 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 28 01:10:01.439537 kernel: hrtimer: interrupt took 4421655 ns Jan 28 01:10:01.439550 systemd[1]: Queued start job for default target initrd.target. Jan 28 01:10:01.439564 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 28 01:10:01.439578 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:10:01.439591 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:10:01.439609 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 28 01:10:01.439622 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 01:10:01.439635 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 28 01:10:01.439648 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Jan 28 01:10:01.439661 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:10:01.439677 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:10:01.439690 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 28 01:10:01.439703 systemd[1]: Reached target paths.target - Path Units. Jan 28 01:10:01.439716 systemd[1]: Reached target slices.target - Slice Units. Jan 28 01:10:01.439729 systemd[1]: Reached target swap.target - Swaps. Jan 28 01:10:01.439742 systemd[1]: Reached target timers.target - Timer Units. Jan 28 01:10:01.439757 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 01:10:01.439777 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 01:10:01.439792 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 28 01:10:01.439808 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 01:10:01.439823 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 28 01:10:01.439838 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:10:01.439853 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 01:10:01.439868 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:10:01.439890 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 01:10:01.439906 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 28 01:10:01.443684 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 28 01:10:01.443746 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 01:10:01.443759 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 28 01:10:01.443774 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 28 01:10:01.443787 systemd[1]: Starting systemd-fsck-usr.service... Jan 28 01:10:01.447574 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 01:10:01.447590 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 01:10:01.447605 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:10:01.447625 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 28 01:10:01.448163 systemd-journald[318]: Collecting audit messages is enabled. Jan 28 01:10:01.448287 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:10:01.448310 kernel: audit: type=1130 audit(1769562601.294:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:01.448327 kernel: audit: type=1130 audit(1769562601.369:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:01.448341 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 28 01:10:01.448356 kernel: audit: type=1130 audit(1769562601.396:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:01.448370 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 01:10:01.448386 systemd-journald[318]: Journal started Jan 28 01:10:01.448417 systemd-journald[318]: Runtime Journal (/run/log/journal/c78d55f2e3f64647bfb119240be6cfa0) is 6M, max 48.2M, 42.1M free. Jan 28 01:10:01.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:01.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:01.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:01.494803 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 01:10:01.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:01.578034 kernel: audit: type=1130 audit(1769562601.499:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:01.624113 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 01:10:01.849698 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 01:10:01.972854 kernel: audit: type=1130 audit(1769562601.886:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:01.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:02.002647 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 01:10:03.365858 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 28 01:10:03.374389 kernel: Bridge firewalling registered Jan 28 01:10:02.138637 systemd-tmpfiles[333]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 28 01:10:02.387630 systemd-modules-load[322]: Inserted module 'br_netfilter' Jan 28 01:10:03.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:03.451332 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 28 01:10:03.553580 kernel: audit: type=1130 audit(1769562603.463:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:03.634454 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:10:03.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:03.763249 kernel: audit: type=1130 audit(1769562603.664:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:03.773377 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:10:03.903325 kernel: audit: type=1130 audit(1769562603.800:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:03.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:03.844250 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 01:10:03.911897 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 01:10:03.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:03.992340 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:10:04.065513 kernel: audit: type=1130 audit(1769562603.991:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:04.593124 kernel: audit: type=1130 audit(1769562604.548:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:04.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:04.553000 audit: BPF prog-id=6 op=LOAD Jan 28 01:10:04.543569 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:10:04.555731 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 01:10:04.624640 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 01:10:04.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:04.678571 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 28 01:10:04.852388 dracut-cmdline[358]: dracut-109 Jan 28 01:10:04.895583 dracut-cmdline[358]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=71544b7bf64a92b2aba342c16b083723a12bedf106d3ddb24ccb63046196f1b3 Jan 28 01:10:05.090428 systemd-resolved[353]: Positive Trust Anchors: Jan 28 01:10:05.111301 systemd-resolved[353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 01:10:05.111321 systemd-resolved[353]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 28 01:10:05.111515 systemd-resolved[353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 01:10:05.544901 systemd-resolved[353]: Defaulting to hostname 'linux'. Jan 28 01:10:05.577375 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 01:10:05.649507 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:10:05.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:06.777621 kernel: Loading iSCSI transport class v2.0-870. Jan 28 01:10:06.899513 kernel: iscsi: registered transport (tcp) Jan 28 01:10:07.093260 kernel: iscsi: registered transport (qla4xxx) Jan 28 01:10:07.093880 kernel: QLogic iSCSI HBA Driver Jan 28 01:10:07.477325 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 01:10:07.823151 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 01:10:07.862691 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 01:10:07.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:07.947312 kernel: kauditd_printk_skb: 3 callbacks suppressed Jan 28 01:10:07.947456 kernel: audit: type=1130 audit(1769562607.847:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:08.602433 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 28 01:10:08.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:08.682492 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 28 01:10:08.761419 kernel: audit: type=1130 audit(1769562608.655:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:08.779409 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 28 01:10:08.979436 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 28 01:10:09.029902 kernel: audit: type=1130 audit(1769562608.995:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:09.030315 kernel: audit: type=1334 audit(1769562608.996:18): prog-id=7 op=LOAD Jan 28 01:10:09.030337 kernel: audit: type=1334 audit(1769562608.996:19): prog-id=8 op=LOAD Jan 28 01:10:08.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:08.996000 audit: BPF prog-id=7 op=LOAD Jan 28 01:10:08.996000 audit: BPF prog-id=8 op=LOAD Jan 28 01:10:09.005847 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:10:09.430826 systemd-udevd[577]: Using default interface naming scheme 'v257'. Jan 28 01:10:09.631769 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:10:09.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:09.735741 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 28 01:10:09.849152 kernel: audit: type=1130 audit(1769562609.689:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:10.268844 dracut-pre-trigger[618]: rd.md=0: removing MD RAID activation Jan 28 01:10:10.962400 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 01:10:11.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:11.038143 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 01:10:11.079181 kernel: audit: type=1130 audit(1769562611.010:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:11.079276 kernel: audit: type=1130 audit(1769562611.072:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:11.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:11.056383 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 28 01:10:11.134862 kernel: audit: type=1334 audit(1769562611.081:23): prog-id=9 op=LOAD Jan 28 01:10:11.081000 audit: BPF prog-id=9 op=LOAD Jan 28 01:10:11.085689 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 01:10:11.564670 systemd-networkd[723]: lo: Link UP Jan 28 01:10:11.583065 systemd-networkd[723]: lo: Gained carrier Jan 28 01:10:11.593731 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 01:10:11.661483 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:10:11.826133 kernel: audit: type=1130 audit(1769562611.660:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:11.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:11.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:11.778775 systemd[1]: Reached target network.target - Network. Jan 28 01:10:11.836355 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 28 01:10:12.068782 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 28 01:10:12.208336 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 28 01:10:12.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:12.235055 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 28 01:10:12.344401 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 28 01:10:12.490483 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 01:10:12.575479 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 01:10:12.591787 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:10:12.602348 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 01:10:12.684484 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 28 01:10:12.842680 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 28 01:10:12.898308 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:10:12.898544 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:10:12.984166 disk-uuid[767]: Primary Header is updated. Jan 28 01:10:12.984166 disk-uuid[767]: Secondary Entries is updated. Jan 28 01:10:12.984166 disk-uuid[767]: Secondary Header is updated. Jan 28 01:10:13.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:13.129067 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 28 01:10:13.355531 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 28 01:10:13.358414 kernel: audit: type=1131 audit(1769562613.126:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:13.358442 kernel: cryptd: max_cpu_qlen set to 1000 Jan 28 01:10:13.317641 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:10:13.794150 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 28 01:10:13.891052 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 28 01:10:15.041639 kernel: AES CTR mode by8 optimization enabled Jan 28 01:10:15.041810 kernel: audit: type=1130 audit(1769562614.902:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:15.041833 kernel: audit: type=1130 audit(1769562614.903:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:15.041866 kernel: audit: type=1131 audit(1769562614.903:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:14.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:14.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:14.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:14.044412 systemd-networkd[723]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 01:10:15.077748 disk-uuid[768]: Warning: The kernel is still using the old partition table. Jan 28 01:10:15.077748 disk-uuid[768]: The new table will be used at the next reboot or after you Jan 28 01:10:15.077748 disk-uuid[768]: run partprobe(8) or kpartx(8) Jan 28 01:10:15.077748 disk-uuid[768]: The operation has completed successfully. Jan 28 01:10:14.044421 systemd-networkd[723]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 01:10:15.220723 kernel: audit: type=1130 audit(1769562615.126:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:15.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:10:14.053494 systemd-networkd[723]: eth0: Link UP Jan 28 01:10:14.056215 systemd-networkd[723]: eth0: Gained carrier Jan 28 01:10:14.056237 systemd-networkd[723]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 01:10:14.102404 systemd-networkd[723]: eth0: DHCPv4 address 10.0.0.18/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 28 01:10:14.903891 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 28 01:10:14.904365 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 28 01:10:15.104528 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:10:15.292291 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 28 01:10:15.574657 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (855) Jan 28 01:10:15.596063 kernel: BTRFS info (device vda6): first mount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432 Jan 28 01:10:15.596218 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 01:10:15.707811 kernel: BTRFS info (device vda6): turning on async discard Jan 28 01:10:15.707897 kernel: BTRFS info (device vda6): enabling free space tree Jan 28 01:10:15.713595 systemd-networkd[723]: eth0: Gained IPv6LL Jan 28 01:10:15.807755 kernel: BTRFS info (device vda6): last unmount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432 Jan 28 01:10:15.865081 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 28 01:10:15.931758 kernel: audit: type=1130 audit(1769562615.880:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:15.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:15.894338 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
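For the lease logged above (10.0.0.18/16 from gateway 10.0.0.1), the standard-library ipaddress module is enough to sanity-check that the prefix and gateway are consistent when eyeballing a boot like this. The figures below are simply the ones from the log.

import ipaddress

# Values copied from the systemd-networkd DHCPv4 message above.
iface = ipaddress.ip_interface("10.0.0.18/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)                 # 10.0.0.0/16
print(gateway in iface.network)      # True: the gateway is on-link for this prefix
print(iface.network.num_addresses)   # 65536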
Jan 28 01:10:17.033221 ignition[874]: Ignition 2.24.0 Jan 28 01:10:17.033241 ignition[874]: Stage: fetch-offline Jan 28 01:10:17.033317 ignition[874]: no configs at "/usr/lib/ignition/base.d" Jan 28 01:10:17.033342 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:10:17.033585 ignition[874]: parsed url from cmdline: "" Jan 28 01:10:17.033591 ignition[874]: no config URL provided Jan 28 01:10:17.033859 ignition[874]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 01:10:17.033878 ignition[874]: no config at "/usr/lib/ignition/user.ign" Jan 28 01:10:17.043071 ignition[874]: op(1): [started] loading QEMU firmware config module Jan 28 01:10:17.043081 ignition[874]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 28 01:10:17.284697 ignition[874]: op(1): [finished] loading QEMU firmware config module Jan 28 01:10:17.835570 ignition[874]: parsing config with SHA512: 0d1839ba4791a5c9d172645f5f23e38e7d019d8e4dc60d304a6c88550913a1a50fa9c99f14e6bd2c192e927b7a21ffb906fd5960bbb213aad2f597fa21938e8a Jan 28 01:10:17.871495 unknown[874]: fetched base config from "system" Jan 28 01:10:17.871522 unknown[874]: fetched user config from "qemu" Jan 28 01:10:17.872708 ignition[874]: fetch-offline: fetch-offline passed Jan 28 01:10:17.872850 ignition[874]: Ignition finished successfully Jan 28 01:10:17.926696 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 01:10:17.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:17.955652 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 28 01:10:18.077489 kernel: audit: type=1130 audit(1769562617.952:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:17.965886 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 28 01:10:18.242666 ignition[884]: Ignition 2.24.0 Jan 28 01:10:18.244450 ignition[884]: Stage: kargs Jan 28 01:10:18.244663 ignition[884]: no configs at "/usr/lib/ignition/base.d" Jan 28 01:10:18.244677 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:10:18.246327 ignition[884]: kargs: kargs passed Jan 28 01:10:18.246393 ignition[884]: Ignition finished successfully Jan 28 01:10:18.345289 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 28 01:10:18.439699 kernel: audit: type=1130 audit(1769562618.353:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:18.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:18.366215 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 28 01:10:23.651895 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 4276004970 wd_nsec: 4276004842 Jan 28 01:10:23.681371 ignition[891]: Ignition 2.24.0 Jan 28 01:10:23.685685 ignition[891]: Stage: disks Jan 28 01:10:23.773409 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
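Ignition's fetch-offline stage above reports only the SHA512 of the config it parsed. Assuming, as the message suggests, that the digest is computed over the raw config bytes, checking a candidate config against it is a one-liner; the local file name used below is hypothetical.

import hashlib

def ignition_config_digest(raw: bytes) -> str:
    """Hex SHA512 in the same form Ignition prints after
    'parsing config with SHA512: ...'."""
    return hashlib.sha512(raw).hexdigest()

# Hypothetical usage: compare a local copy of the config with the digest in the log.
# with open("user.ign", "rb") as f:
#     print(ignition_config_digest(f.read()) ==
#           "0d1839ba4791a5c9d172645f5f23e38e7d019d8e4dc60d304a6c88550913a1a5"
#           "0fa9c99f14e6bd2c192e927b7a21ffb906fd5960bbb213aad2f597fa21938e8a")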
Jan 28 01:10:23.688499 ignition[891]: no configs at "/usr/lib/ignition/base.d" Jan 28 01:10:23.930896 kernel: audit: type=1130 audit(1769562623.840:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:23.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:23.688521 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:10:23.699794 ignition[891]: disks: disks passed Jan 28 01:10:23.937634 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 28 01:10:23.700483 ignition[891]: Ignition finished successfully Jan 28 01:10:23.996520 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 01:10:24.034784 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 01:10:24.055841 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 01:10:24.170618 systemd[1]: Reached target basic.target - Basic System. Jan 28 01:10:24.327770 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 28 01:10:25.637366 systemd-fsck[900]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 28 01:10:25.681594 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 28 01:10:25.854832 kernel: audit: type=1130 audit(1769562625.762:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:25.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:25.788512 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 28 01:10:27.568348 kernel: EXT4-fs (vda9): mounted filesystem 60a46795-cc10-4076-a709-d039d1c23a6b r/w with ordered data mode. Quota mode: none. Jan 28 01:10:27.578824 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 28 01:10:27.615375 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 28 01:10:27.648250 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 01:10:27.664513 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 28 01:10:27.687588 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 28 01:10:27.687663 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 28 01:10:27.979582 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (909) Jan 28 01:10:27.687706 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 01:10:27.790333 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 28 01:10:28.097866 kernel: BTRFS info (device vda6): first mount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432 Jan 28 01:10:28.100449 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 01:10:27.855655 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
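The fsck summary above ("ROOT: clean, 15/456736 files, 38230/456704 blocks") is compact but easy to misread; a throwaway snippet to turn it into utilization figures, using the exact line from this boot.

import re

FSCK_RE = re.compile(r"(?P<label>\w+): clean, (?P<iu>\d+)/(?P<it>\d+) files, "
                     r"(?P<bu>\d+)/(?P<bt>\d+) blocks")

line = "ROOT: clean, 15/456736 files, 38230/456704 blocks"
m = FSCK_RE.match(line)
print(f"{m['label']}: {int(m['iu']) / int(m['it']):.2%} of inodes, "
      f"{int(m['bu']) / int(m['bt']):.2%} of blocks in use")
# ROOT: 0.00% of inodes, 8.37% of blocks in use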
Jan 28 01:10:28.258216 kernel: BTRFS info (device vda6): turning on async discard Jan 28 01:10:28.258747 kernel: BTRFS info (device vda6): enabling free space tree Jan 28 01:10:28.279279 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 01:10:29.892601 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 28 01:10:29.987363 kernel: audit: type=1130 audit(1769562629.939:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:29.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:29.972238 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 28 01:10:30.012379 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 28 01:10:30.192801 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 28 01:10:30.245841 kernel: BTRFS info (device vda6): last unmount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432 Jan 28 01:10:30.414680 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 28 01:10:30.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:30.494627 kernel: audit: type=1130 audit(1769562630.453:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:30.531272 ignition[1007]: INFO : Ignition 2.24.0 Jan 28 01:10:30.531272 ignition[1007]: INFO : Stage: mount Jan 28 01:10:30.570065 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:10:30.570065 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:10:30.570065 ignition[1007]: INFO : mount: mount passed Jan 28 01:10:30.570065 ignition[1007]: INFO : Ignition finished successfully Jan 28 01:10:30.595852 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 28 01:10:30.751407 kernel: audit: type=1130 audit(1769562630.630:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:30.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:30.639383 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 01:10:31.027425 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 01:10:31.194753 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1019) Jan 28 01:10:31.245796 kernel: BTRFS info (device vda6): first mount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432 Jan 28 01:10:31.246676 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 01:10:31.459283 kernel: BTRFS info (device vda6): turning on async discard Jan 28 01:10:31.474117 kernel: BTRFS info (device vda6): enabling free space tree Jan 28 01:10:31.624411 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 28 01:10:32.482650 ignition[1036]: INFO : Ignition 2.24.0 Jan 28 01:10:32.482650 ignition[1036]: INFO : Stage: files Jan 28 01:10:32.555634 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:10:32.555634 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:10:32.555634 ignition[1036]: DEBUG : files: compiled without relabeling support, skipping Jan 28 01:10:32.636692 ignition[1036]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 01:10:32.636692 ignition[1036]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 01:10:32.763429 ignition[1036]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 01:10:32.793666 ignition[1036]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 01:10:32.793666 ignition[1036]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 01:10:32.779722 unknown[1036]: wrote ssh authorized keys file for user: core Jan 28 01:10:32.890292 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 28 01:10:32.890292 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 28 01:10:33.172278 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 28 01:10:34.397613 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 28 01:10:34.397613 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 28 01:10:34.470623 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 01:10:34.470623 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 01:10:34.470623 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 01:10:34.470623 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 01:10:34.470623 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 01:10:34.470623 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 01:10:34.470623 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 01:10:34.470623 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 01:10:34.470623 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 01:10:34.470623 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:10:34.470623 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:10:34.470623 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:10:34.470623 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 28 01:10:35.453307 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 28 01:10:48.241615 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:10:48.241615 ignition[1036]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 28 01:10:48.345497 ignition[1036]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 01:10:48.345497 ignition[1036]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 01:10:48.345497 ignition[1036]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 28 01:10:48.345497 ignition[1036]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 28 01:10:48.345497 ignition[1036]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 28 01:10:48.345497 ignition[1036]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 28 01:10:48.345497 ignition[1036]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 28 01:10:48.345497 ignition[1036]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 28 01:10:48.997514 ignition[1036]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 28 01:10:49.043640 ignition[1036]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 28 01:10:49.043640 ignition[1036]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 28 01:10:49.043640 ignition[1036]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 28 01:10:49.043640 ignition[1036]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 01:10:49.139777 ignition[1036]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 01:10:49.139777 ignition[1036]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 01:10:49.139777 ignition[1036]: INFO : files: files passed Jan 28 01:10:49.139777 ignition[1036]: INFO : Ignition finished successfully Jan 28 01:10:49.309397 kernel: audit: type=1130 audit(1769562649.177:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:10:49.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:49.104875 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 28 01:10:49.180862 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 28 01:10:49.322891 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 01:10:49.449782 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 28 01:10:49.453790 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 28 01:10:49.624404 kernel: audit: type=1130 audit(1769562649.529:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:49.624550 kernel: audit: type=1131 audit(1769562649.532:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:49.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:49.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:49.626399 initrd-setup-root-after-ignition[1067]: grep: /sysroot/oem/oem-release: No such file or directory Jan 28 01:10:49.658712 initrd-setup-root-after-ignition[1070]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:10:49.658712 initrd-setup-root-after-ignition[1070]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:10:49.740374 initrd-setup-root-after-ignition[1074]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:10:49.755587 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 01:10:49.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:49.797582 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 28 01:10:49.831503 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 28 01:10:49.920544 kernel: audit: type=1130 audit(1769562649.796:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:50.275147 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 01:10:50.275576 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 28 01:10:50.467857 kernel: audit: type=1130 audit(1769562650.287:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:10:50.468450 kernel: audit: type=1131 audit(1769562650.287:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:50.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:50.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:50.288454 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 01:10:50.522846 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 01:10:50.570852 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 01:10:50.601507 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 01:10:51.150406 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:10:51.289602 kernel: audit: type=1130 audit(1769562651.187:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:51.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:51.229768 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 01:10:51.456799 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 28 01:10:51.487752 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:10:51.535432 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:10:51.555742 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 01:10:51.664834 kernel: audit: type=1131 audit(1769562651.591:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:51.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:51.590780 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 01:10:51.591394 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:10:51.665543 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 01:10:51.684419 systemd[1]: Stopped target basic.target - Basic System. Jan 28 01:10:51.735352 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 01:10:51.801832 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 01:10:51.849204 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 01:10:51.923907 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 28 01:10:51.950356 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
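Stepping back to the files stage that finished just above: the sequence of ops (fetching the helm tarball, writing the small shell and YAML files, linking /etc/extensions/kubernetes.raw, writing two units and flipping their presets) is the kind of work a fairly small Ignition config produces. The sketch below rebuilds such a config as a Python dict purely as an illustration; the spec version, field names, unit bodies and SSH key are assumptions from memory of the Ignition v3 spec, not taken from this machine, so verify against the spec and the real user.ign rather than treating this as the config that was used.

import json

# Illustrative only: an Ignition-v3-style config that would drive the
# files-stage operations logged above. URLs and paths are the ones from
# the log; everything else is a placeholder.
config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {"users": [
        {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]},
    ]},
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"}},
            {"path": "/home/core/install.sh", "contents": {"source": "data:,placeholder"}},
            {"path": "/home/core/nginx.yaml", "contents": {"source": "data:,placeholder"}},
            {"path": "/home/core/nfs-pod.yaml", "contents": {"source": "data:,placeholder"}},
            {"path": "/home/core/nfs-pvc.yaml", "contents": {"source": "data:,placeholder"}},
            {"path": "/etc/flatcar/update.conf", "contents": {"source": "data:,placeholder"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
             "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"},
        ],
    },
    "systemd": {"units": [
        {"name": "prepare-helm.service", "enabled": True, "contents": "# unit body omitted"},
        {"name": "coreos-metadata.service", "enabled": False, "contents": "# unit body omitted"},
    ]},
}

print(json.dumps(config, indent=2))

Marking coreos-metadata.service as enabled: False is what would lead to the "setting preset to disabled" op seen above, while prepare-helm.service enabled: True matches the preset-to-enabled op.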
Jan 28 01:10:52.008798 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 01:10:52.116351 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 01:10:52.141610 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 01:10:52.188876 systemd[1]: Stopped target swap.target - Swaps. Jan 28 01:10:52.300600 kernel: audit: type=1131 audit(1769562652.248:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:52.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:52.220865 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 01:10:52.221668 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 01:10:52.334635 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:10:52.368203 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:10:52.439382 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 01:10:52.441857 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:10:52.670171 kernel: audit: type=1131 audit(1769562652.552:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:52.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:52.456135 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 01:10:52.458618 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 01:10:52.687454 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 01:10:52.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:52.687859 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 01:10:52.740887 systemd[1]: Stopped target paths.target - Path Units. Jan 28 01:10:52.791339 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 01:10:52.799426 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:10:52.886552 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 01:10:52.925160 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 01:10:52.948841 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 01:10:52.949504 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 01:10:53.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:53.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:10:53.046774 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 01:10:53.047197 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 01:10:53.049228 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 28 01:10:53.049567 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 28 01:10:53.127339 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 01:10:53.127712 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 01:10:53.147807 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 01:10:53.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:53.148446 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 01:10:53.238730 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 01:10:53.270491 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 28 01:10:53.270890 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:10:53.543089 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 01:10:53.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:53.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:53.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:53.555802 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 01:10:53.556352 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:10:53.574803 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 01:10:53.575204 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:10:53.592790 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 01:10:53.593427 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 01:10:53.778184 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 01:10:53.817106 ignition[1094]: INFO : Ignition 2.24.0 Jan 28 01:10:53.817106 ignition[1094]: INFO : Stage: umount Jan 28 01:10:53.817106 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:10:53.817106 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:10:53.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:53.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:10:53.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:53.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:53.992405 ignition[1094]: INFO : umount: umount passed Jan 28 01:10:53.992405 ignition[1094]: INFO : Ignition finished successfully Jan 28 01:10:53.821795 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 01:10:53.822377 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 01:10:53.879345 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 01:10:53.880782 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 01:10:53.896142 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 01:10:53.896630 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 01:10:53.996702 systemd[1]: Stopped target network.target - Network. Jan 28 01:10:54.167808 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 01:10:54.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:54.168225 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 01:10:54.377198 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 28 01:10:54.377395 kernel: audit: type=1131 audit(1769562654.188:61): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:54.377424 kernel: audit: type=1131 audit(1769562654.262:62): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:54.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:54.188557 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 01:10:54.456401 kernel: audit: type=1131 audit(1769562654.398:63): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:54.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:54.188660 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 01:10:54.560427 kernel: audit: type=1131 audit(1769562654.479:64): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:54.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:54.265429 systemd[1]: ignition-setup.service: Deactivated successfully. 
Jan 28 01:10:54.702486 kernel: audit: type=1131 audit(1769562654.590:65): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:54.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:54.265649 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 01:10:54.400816 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 01:10:54.401367 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 01:10:54.480707 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 01:10:54.480826 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 01:10:54.600110 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 01:10:54.652124 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 28 01:10:55.014873 kernel: audit: type=1131 audit(1769562654.880:66): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:54.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:54.834344 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 01:10:55.080683 kernel: audit: type=1334 audit(1769562655.014:67): prog-id=6 op=UNLOAD Jan 28 01:10:55.014000 audit: BPF prog-id=6 op=UNLOAD Jan 28 01:10:54.834639 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 01:10:55.035658 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 01:10:55.036215 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 01:10:55.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:55.291166 kernel: audit: type=1131 audit(1769562655.242:68): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:55.326000 audit: BPF prog-id=9 op=UNLOAD Jan 28 01:10:55.335452 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 28 01:10:55.375904 kernel: audit: type=1334 audit(1769562655.326:69): prog-id=9 op=UNLOAD Jan 28 01:10:55.429733 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 01:10:55.430806 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:10:55.501199 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 01:10:55.677811 kernel: audit: type=1131 audit(1769562655.566:70): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:55.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:10:55.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:55.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:55.535650 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 01:10:55.535788 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 01:10:55.567450 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 01:10:55.567555 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:10:55.625576 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 01:10:55.625683 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 01:10:55.645562 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:10:55.934862 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 01:10:55.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:55.935563 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:10:55.975884 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 01:10:55.976153 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 01:10:56.022207 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 01:10:56.024505 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:10:56.138228 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 01:10:56.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:56.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:56.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:56.138473 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 01:10:56.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:56.165458 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 01:10:56.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:56.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:10:56.165568 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 01:10:56.186454 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 01:10:56.186561 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 01:10:56.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:56.260632 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 01:10:56.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:56.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:10:56.276816 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 28 01:10:56.277121 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 01:10:56.299799 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 01:10:56.299900 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:10:56.333440 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:10:56.333538 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:10:56.335453 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 01:10:56.409797 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 01:10:56.464769 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 01:10:56.465225 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 01:10:56.526153 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 01:10:56.648819 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 01:10:57.024379 systemd[1]: Switching root. Jan 28 01:10:57.203509 systemd-journald[318]: Journal stopped Jan 28 01:11:11.126430 systemd-journald[318]: Received SIGTERM from PID 1 (systemd). Jan 28 01:11:11.127836 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 01:11:11.127868 kernel: SELinux: policy capability open_perms=1 Jan 28 01:11:11.127887 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 01:11:11.127908 kernel: SELinux: policy capability always_check_network=0 Jan 28 01:11:11.128155 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 01:11:11.128176 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 01:11:11.128192 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 01:11:11.129209 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 01:11:11.129234 kernel: SELinux: policy capability userspace_initial_context=0 Jan 28 01:11:11.129275 systemd[1]: Successfully loaded SELinux policy in 963.679ms. Jan 28 01:11:11.129434 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 40.763ms. 
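After the switch to the real root, messages like the SELinux policy-load and relabel timings above land in the journal, so they can be pulled back out later without scrolling a console capture. A small sketch using journalctl's JSON output; the substring filter is ad hoc, chosen only to catch the two messages shown here.

import json
import subprocess

# Read the current boot's journal as JSON (one object per line) and pick out
# systemd's SELinux timing messages.
out = subprocess.run(
    ["journalctl", "-b", "-o", "json", "--no-pager"],
    check=True, capture_output=True, text=True,
).stdout

for line in out.splitlines():
    entry = json.loads(line)
    msg = entry.get("MESSAGE", "")
    if isinstance(msg, str) and ("SELinux policy" in msg or "Relabeled /dev/" in msg):
        print(entry.get("__REALTIME_TIMESTAMP"), msg)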
Jan 28 01:11:11.129458 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 01:11:11.129480 systemd[1]: Detected virtualization kvm. Jan 28 01:11:11.129497 systemd[1]: Detected architecture x86-64. Jan 28 01:11:11.129630 systemd[1]: Detected first boot. Jan 28 01:11:11.129653 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 28 01:11:11.129670 kernel: kauditd_printk_skb: 14 callbacks suppressed Jan 28 01:11:11.129690 kernel: audit: type=1334 audit(1769562659.428:85): prog-id=10 op=LOAD Jan 28 01:11:11.129714 kernel: audit: type=1334 audit(1769562659.429:86): prog-id=10 op=UNLOAD Jan 28 01:11:11.129739 kernel: audit: type=1334 audit(1769562659.435:87): prog-id=11 op=LOAD Jan 28 01:11:11.129760 kernel: audit: type=1334 audit(1769562659.435:88): prog-id=11 op=UNLOAD Jan 28 01:11:11.129887 zram_generator::config[1138]: No configuration found. Jan 28 01:11:11.129907 kernel: Guest personality initialized and is inactive Jan 28 01:11:11.130130 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 28 01:11:11.130148 kernel: Initialized host personality Jan 28 01:11:11.130168 kernel: NET: Registered PF_VSOCK protocol family Jan 28 01:11:11.130186 systemd[1]: Populated /etc with preset unit settings. Jan 28 01:11:11.130203 kernel: audit: type=1334 audit(1769562664.655:89): prog-id=12 op=LOAD Jan 28 01:11:11.130444 kernel: audit: type=1334 audit(1769562664.655:90): prog-id=3 op=UNLOAD Jan 28 01:11:11.130466 kernel: audit: type=1334 audit(1769562664.655:91): prog-id=13 op=LOAD Jan 28 01:11:11.130485 kernel: audit: type=1334 audit(1769562664.655:92): prog-id=14 op=LOAD Jan 28 01:11:11.130501 kernel: audit: type=1334 audit(1769562664.655:93): prog-id=4 op=UNLOAD Jan 28 01:11:11.130517 kernel: audit: type=1334 audit(1769562664.655:94): prog-id=5 op=UNLOAD Jan 28 01:11:11.130538 kernel: audit: type=1131 audit(1769562664.670:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:11.130666 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 28 01:11:11.130697 kernel: audit: type=1334 audit(1769562664.915:96): prog-id=12 op=UNLOAD Jan 28 01:11:11.130718 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 28 01:11:11.130736 kernel: audit: type=1130 audit(1769562665.010:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:11.130753 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 28 01:11:11.130778 kernel: audit: type=1131 audit(1769562665.010:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:11.131124 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 01:11:11.131150 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
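The long +/- feature string in the systemd 257.9 version line above is easier to read once split into what this build was compiled with and without; a throwaway snippet for that, using the string exactly as logged.

FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC "
            "+KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY "
            "-P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
            "-BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

enabled = sorted(f[1:] for f in FEATURES.split() if f.startswith("+"))
disabled = sorted(f[1:] for f in FEATURES.split() if f.startswith("-"))
print("compiled with:   ", ", ".join(enabled))
print("compiled without:", ", ".join(disabled))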
Jan 28 01:11:11.131174 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 01:11:11.131192 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 01:11:11.131211 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 01:11:11.131233 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 01:11:11.131547 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 01:11:11.131573 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 01:11:11.131591 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:11:11.131611 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:11:11.131631 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 28 01:11:11.131648 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 01:11:11.131668 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 28 01:11:11.131798 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 01:11:11.131818 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 28 01:11:11.131837 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:11:11.131857 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:11:11.131874 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 28 01:11:11.131892 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 28 01:11:11.132122 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 28 01:11:11.132259 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 01:11:11.132280 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:11:11.132297 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 01:11:11.132427 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 28 01:11:11.132453 systemd[1]: Reached target slices.target - Slice Units. Jan 28 01:11:11.132470 systemd[1]: Reached target swap.target - Swaps. Jan 28 01:11:11.132487 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 01:11:11.132508 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 01:11:11.132636 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 28 01:11:11.132655 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 28 01:11:11.132672 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 28 01:11:11.132694 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:11:11.132711 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 28 01:11:11.132730 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 28 01:11:11.132750 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 01:11:11.132877 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 28 01:11:11.132897 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 01:11:11.133125 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 28 01:11:11.133151 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 01:11:11.133169 systemd[1]: Mounting media.mount - External Media Directory... Jan 28 01:11:11.133186 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:11:11.133210 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 01:11:11.133433 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 01:11:11.133454 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 01:11:11.133472 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 28 01:11:11.133492 systemd[1]: Reached target machines.target - Containers. Jan 28 01:11:11.133512 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 01:11:11.133529 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:11:11.133664 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 01:11:11.133690 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 01:11:11.133707 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:11:11.133725 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 01:11:11.133746 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:11:11.133763 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 28 01:11:11.133780 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:11:11.133909 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 01:11:11.134455 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 28 01:11:11.134486 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 28 01:11:11.134504 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:11:11.134636 kernel: audit: type=1131 audit(1769562669.943:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:11.134658 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 28 01:11:11.134676 systemd[1]: Stopped systemd-fsck-usr.service. Jan 28 01:11:11.134699 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 01:11:11.134719 kernel: audit: type=1131 audit(1769562670.125:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:11:11.134846 kernel: audit: type=1334 audit(1769562670.221:102): prog-id=14 op=UNLOAD Jan 28 01:11:11.134866 kernel: audit: type=1334 audit(1769562670.221:103): prog-id=13 op=UNLOAD Jan 28 01:11:11.134886 kernel: audit: type=1334 audit(1769562670.351:104): prog-id=15 op=LOAD Jan 28 01:11:11.134903 kernel: audit: type=1334 audit(1769562670.389:105): prog-id=16 op=LOAD Jan 28 01:11:11.135131 kernel: audit: type=1334 audit(1769562670.423:106): prog-id=17 op=LOAD Jan 28 01:11:11.135263 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 01:11:11.135287 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 01:11:11.135422 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 01:11:11.135446 kernel: ACPI: bus type drm_connector registered Jan 28 01:11:11.135464 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 01:11:11.135485 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 28 01:11:11.135502 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 01:11:11.135630 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:11:11.135655 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 01:11:11.135731 systemd-journald[1224]: Collecting audit messages is enabled. Jan 28 01:11:11.135869 kernel: audit: type=1305 audit(1769562671.096:107): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 28 01:11:11.135894 systemd-journald[1224]: Journal started Jan 28 01:11:11.136141 systemd-journald[1224]: Runtime Journal (/run/log/journal/c78d55f2e3f64647bfb119240be6cfa0) is 6M, max 48.2M, 42.1M free. Jan 28 01:11:07.281000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 28 01:11:09.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:10.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:10.221000 audit: BPF prog-id=14 op=UNLOAD Jan 28 01:11:10.221000 audit: BPF prog-id=13 op=UNLOAD Jan 28 01:11:10.351000 audit: BPF prog-id=15 op=LOAD Jan 28 01:11:10.389000 audit: BPF prog-id=16 op=LOAD Jan 28 01:11:10.423000 audit: BPF prog-id=17 op=LOAD Jan 28 01:11:11.096000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 28 01:11:04.571640 systemd[1]: Queued start job for default target multi-user.target. Jan 28 01:11:04.660906 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 28 01:11:04.669844 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 28 01:11:04.672517 systemd[1]: systemd-journald.service: Consumed 3.912s CPU time. 
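The audit records interleaved with the journal output carry their own timestamps in the form audit(&lt;epoch&gt;.&lt;millis&gt;:&lt;serial&gt;), for example audit(1769562671.096:107) above. A small sketch (Python standard library only, helper name illustrative) that converts such a stamp into a readable UTC time, which lines up with the Jan 28 01:11:11 wall-clock prefix on the same entry:

```python
import re
from datetime import datetime, timezone

def parse_audit_stamp(record: str):
    """Return (UTC datetime, serial) from an audit(<epoch>.<millis>:<serial>) stamp."""
    m = re.search(r"audit\((\d+)\.(\d+):(\d+)\)", record)
    if m is None:
        raise ValueError("no audit timestamp found")
    epoch, millis, serial = (int(g) for g in m.groups())
    return datetime.fromtimestamp(epoch + millis / 1000, tz=timezone.utc), serial

# record fragment copied from the log above
ts, serial = parse_audit_stamp("audit(1769562671.096:107): op=set audit_enabled=1")
print(ts.isoformat(), serial)   # 2026-01-28T01:11:11.096000+00:00 107
```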
Jan 28 01:11:11.096000 audit[1224]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd404eebc0 a2=4000 a3=0 items=0 ppid=1 pid=1224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:11:11.096000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 28 01:11:11.324466 kernel: audit: type=1300 audit(1769562671.096:107): arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd404eebc0 a2=4000 a3=0 items=0 ppid=1 pid=1224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:11:11.324557 kernel: audit: type=1327 audit(1769562671.096:107): proctitle="/usr/lib/systemd/systemd-journald" Jan 28 01:11:11.346113 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 01:11:11.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:11.384621 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 01:11:11.400219 kernel: fuse: init (API version 7.41) Jan 28 01:11:11.443479 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 01:11:11.478734 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 01:11:11.530197 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 01:11:11.559456 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 28 01:11:11.620901 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 01:11:11.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:11.671831 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:11:11.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:11.720526 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 28 01:11:11.721150 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 01:11:11.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:11.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:11.758280 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:11:11.758802 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:11:11.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:11:11.820271 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 01:11:11.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:11.822141 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 01:11:11.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:11.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:11.872747 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:11:11.873674 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:11:11.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:11.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:11.923806 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 28 01:11:11.928221 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 01:11:11.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:11.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:11.963245 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:11:11.963825 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:11:12.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:12.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:12.050613 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 01:11:12.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:12.097435 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 28 01:11:12.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:12.179291 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 01:11:12.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:12.230220 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 28 01:11:12.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:12.286798 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:11:12.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:12.560283 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 01:11:12.706743 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 28 01:11:12.775903 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 28 01:11:12.873818 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 01:11:12.926671 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 01:11:12.927299 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 01:11:13.024752 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 28 01:11:13.082842 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:11:13.083576 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 01:11:13.544448 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 01:11:13.592457 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 28 01:11:13.627893 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 01:11:13.635824 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 01:11:13.668145 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 01:11:13.703426 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 01:11:13.804434 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 01:11:13.892799 systemd-journald[1224]: Time spent on flushing to /var/log/journal/c78d55f2e3f64647bfb119240be6cfa0 is 2.066162s for 1159 entries. 
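journald reports spending 2.066162 s flushing 1159 entries to the persistent journal. A quick back-of-the-envelope check of the average cost per entry, with the values copied from the line above:

```python
# figures copied from the systemd-journald line above
flush_seconds = 2.066162   # "Time spent on flushing ... is 2.066162s"
entries = 1159             # "... for 1159 entries"

print(f"{flush_seconds / entries * 1e3:.2f} ms per entry on average")  # ~1.78 ms
```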
Jan 28 01:11:13.892799 systemd-journald[1224]: System Journal (/var/log/journal/c78d55f2e3f64647bfb119240be6cfa0) is 8M, max 163.5M, 155.5M free. Jan 28 01:11:16.172903 systemd-journald[1224]: Received client request to flush runtime journal. Jan 28 01:11:16.176659 kernel: kauditd_printk_skb: 20 callbacks suppressed Jan 28 01:11:16.176697 kernel: audit: type=1130 audit(1769562676.048:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:16.176739 kernel: loop1: detected capacity change from 0 to 224512 Jan 28 01:11:16.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:13.869701 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 01:11:13.945177 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 01:11:14.001147 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 01:11:15.961869 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 01:11:16.062662 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 01:11:16.264823 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 28 01:11:16.304716 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 28 01:11:16.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:16.426170 kernel: audit: type=1130 audit(1769562676.365:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:16.500864 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:11:16.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:16.601697 kernel: audit: type=1130 audit(1769562676.551:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:16.602812 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 01:11:16.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:16.791492 kernel: audit: type=1130 audit(1769562676.685:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:11:16.792545 kernel: audit: type=1334 audit(1769562676.744:132): prog-id=18 op=LOAD Jan 28 01:11:16.744000 audit: BPF prog-id=18 op=LOAD Jan 28 01:11:16.827441 kernel: audit: type=1334 audit(1769562676.747:133): prog-id=19 op=LOAD Jan 28 01:11:16.885528 kernel: audit: type=1334 audit(1769562676.747:134): prog-id=20 op=LOAD Jan 28 01:11:16.747000 audit: BPF prog-id=19 op=LOAD Jan 28 01:11:16.747000 audit: BPF prog-id=20 op=LOAD Jan 28 01:11:16.846719 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 28 01:11:16.936860 kernel: audit: type=1334 audit(1769562676.902:135): prog-id=21 op=LOAD Jan 28 01:11:16.902000 audit: BPF prog-id=21 op=LOAD Jan 28 01:11:18.174804 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 01:11:18.279587 kernel: loop2: detected capacity change from 0 to 50784 Jan 28 01:11:18.274287 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 01:11:18.337255 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 01:11:18.358243 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 28 01:11:18.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:18.559648 kernel: audit: type=1130 audit(1769562678.450:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:18.585328 kernel: audit: type=1334 audit(1769562678.560:137): prog-id=22 op=LOAD Jan 28 01:11:18.560000 audit: BPF prog-id=22 op=LOAD Jan 28 01:11:18.564000 audit: BPF prog-id=23 op=LOAD Jan 28 01:11:18.564000 audit: BPF prog-id=24 op=LOAD Jan 28 01:11:18.650751 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 28 01:11:18.696000 audit: BPF prog-id=25 op=LOAD Jan 28 01:11:18.697000 audit: BPF prog-id=26 op=LOAD Jan 28 01:11:18.697000 audit: BPF prog-id=27 op=LOAD Jan 28 01:11:18.745801 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 01:11:18.788703 kernel: loop3: detected capacity change from 0 to 111560 Jan 28 01:11:19.291857 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Jan 28 01:11:19.292519 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Jan 28 01:11:19.747249 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:11:19.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:20.186149 kernel: loop4: detected capacity change from 0 to 224512 Jan 28 01:11:20.502132 kernel: loop5: detected capacity change from 0 to 50784 Jan 28 01:11:20.588213 kernel: loop6: detected capacity change from 0 to 111560 Jan 28 01:11:20.923295 (sd-merge)[1286]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 28 01:11:21.020760 (sd-merge)[1286]: Merged extensions into '/usr'. Jan 28 01:11:21.064302 systemd[1]: Reload requested from client PID 1259 ('systemd-sysext') (unit systemd-sysext.service)... 
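The loop devices above report capacity changes of 224512, 50784 and 111560, each value appearing twice, which plausibly corresponds to the three extension images sd-merge lists (containerd-flatcar.raw, docker-flatcar.raw, kubernetes.raw). Assuming the kernel's usual 512-byte sector units for these capacity messages, the sizes work out roughly as follows:

```python
SECTOR = 512  # assumption: loop capacity messages count 512-byte sectors

# "detected capacity change from 0 to N" values seen above
for sectors in (224512, 50784, 111560):
    print(f"{sectors} sectors ~= {sectors * SECTOR / 2**20:.1f} MiB")
# 224512 -> ~109.6 MiB, 50784 -> ~24.8 MiB, 111560 -> ~54.5 MiB
```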
Jan 28 01:11:21.070503 systemd[1]: Reloading... Jan 28 01:11:21.199294 systemd-nsresourced[1282]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 28 01:11:23.069689 zram_generator::config[1328]: No configuration found. Jan 28 01:11:23.190829 systemd-oomd[1275]: No swap; memory pressure usage will be degraded Jan 28 01:11:23.458771 systemd-resolved[1276]: Positive Trust Anchors: Jan 28 01:11:23.458797 systemd-resolved[1276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 01:11:23.458805 systemd-resolved[1276]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 28 01:11:23.458847 systemd-resolved[1276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 01:11:23.481848 systemd-resolved[1276]: Defaulting to hostname 'linux'. Jan 28 01:11:26.012136 systemd[1]: Reloading finished in 4937 ms. Jan 28 01:11:27.264625 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 28 01:11:27.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:27.361490 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 28 01:11:27.366516 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 28 01:11:27.366581 kernel: audit: type=1130 audit(1769562687.350:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:27.458789 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 28 01:11:27.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:27.531163 kernel: audit: type=1130 audit(1769562687.454:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:27.538205 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 01:11:27.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:27.582012 kernel: audit: type=1130 audit(1769562687.535:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:27.600657 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
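systemd-resolved prints the two root-zone DS records it uses as positive trust anchors. A small sketch that splits such a record into its fields; the field order (key tag, algorithm, digest type, digest) follows the DS resource-record format, and the algorithm and digest-type names in the lookup tables are the standard IANA assignments (8 = RSASHA256, 2 = SHA-256). The function name is illustrative:

```python
# Parse a DS record as printed by systemd-resolved, e.g.
# ". IN DS 20326 8 2 e06d44b8..."
ALGORITHMS = {8: "RSASHA256", 13: "ECDSAP256SHA256"}  # subset of IANA DNSSEC algorithms
DIGESTS = {1: "SHA-1", 2: "SHA-256"}                  # subset of IANA DS digest types

def parse_ds(line: str) -> dict:
    owner, _cls, _rtype, key_tag, alg, digest_type, digest = line.split()
    return {
        "owner": owner,
        "key_tag": int(key_tag),
        "algorithm": ALGORITHMS.get(int(alg), int(alg)),
        "digest_type": DIGESTS.get(int(digest_type), int(digest_type)),
        "digest": digest,
    }

print(parse_ds(". IN DS 20326 8 2 "
               "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"))
```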
Jan 28 01:11:27.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:27.668162 kernel: audit: type=1130 audit(1769562687.598:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:27.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:27.740198 kernel: audit: type=1130 audit(1769562687.675:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:27.760158 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:11:27.838902 systemd[1]: Starting ensure-sysext.service... Jan 28 01:11:27.874825 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 01:11:27.913000 audit: BPF prog-id=28 op=LOAD Jan 28 01:11:27.927276 kernel: audit: type=1334 audit(1769562687.913:149): prog-id=28 op=LOAD Jan 28 01:11:27.913000 audit: BPF prog-id=25 op=UNLOAD Jan 28 01:11:27.913000 audit: BPF prog-id=29 op=LOAD Jan 28 01:11:27.949753 kernel: audit: type=1334 audit(1769562687.913:150): prog-id=25 op=UNLOAD Jan 28 01:11:27.949840 kernel: audit: type=1334 audit(1769562687.913:151): prog-id=29 op=LOAD Jan 28 01:11:27.913000 audit: BPF prog-id=30 op=LOAD Jan 28 01:11:27.974139 kernel: audit: type=1334 audit(1769562687.913:152): prog-id=30 op=LOAD Jan 28 01:11:27.974230 kernel: audit: type=1334 audit(1769562687.913:153): prog-id=26 op=UNLOAD Jan 28 01:11:27.913000 audit: BPF prog-id=26 op=UNLOAD Jan 28 01:11:27.913000 audit: BPF prog-id=27 op=UNLOAD Jan 28 01:11:27.923000 audit: BPF prog-id=31 op=LOAD Jan 28 01:11:27.923000 audit: BPF prog-id=18 op=UNLOAD Jan 28 01:11:27.923000 audit: BPF prog-id=32 op=LOAD Jan 28 01:11:27.923000 audit: BPF prog-id=33 op=LOAD Jan 28 01:11:27.923000 audit: BPF prog-id=19 op=UNLOAD Jan 28 01:11:27.923000 audit: BPF prog-id=20 op=UNLOAD Jan 28 01:11:27.955000 audit: BPF prog-id=34 op=LOAD Jan 28 01:11:27.955000 audit: BPF prog-id=22 op=UNLOAD Jan 28 01:11:27.955000 audit: BPF prog-id=35 op=LOAD Jan 28 01:11:27.955000 audit: BPF prog-id=36 op=LOAD Jan 28 01:11:27.955000 audit: BPF prog-id=23 op=UNLOAD Jan 28 01:11:27.955000 audit: BPF prog-id=24 op=UNLOAD Jan 28 01:11:27.960000 audit: BPF prog-id=37 op=LOAD Jan 28 01:11:27.961000 audit: BPF prog-id=15 op=UNLOAD Jan 28 01:11:27.961000 audit: BPF prog-id=38 op=LOAD Jan 28 01:11:27.961000 audit: BPF prog-id=39 op=LOAD Jan 28 01:11:27.961000 audit: BPF prog-id=16 op=UNLOAD Jan 28 01:11:27.961000 audit: BPF prog-id=17 op=UNLOAD Jan 28 01:11:27.969000 audit: BPF prog-id=40 op=LOAD Jan 28 01:11:27.969000 audit: BPF prog-id=21 op=UNLOAD Jan 28 01:11:28.015324 systemd[1]: Reload requested from client PID 1364 ('systemctl') (unit ensure-sysext.service)... Jan 28 01:11:28.015358 systemd[1]: Reloading... Jan 28 01:11:28.638490 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Jan 28 01:11:28.638910 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 28 01:11:28.641236 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 01:11:28.646321 systemd-tmpfiles[1365]: ACLs are not supported, ignoring. Jan 28 01:11:28.647804 systemd-tmpfiles[1365]: ACLs are not supported, ignoring. Jan 28 01:11:28.707716 systemd-tmpfiles[1365]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 01:11:28.707737 systemd-tmpfiles[1365]: Skipping /boot Jan 28 01:11:28.921749 systemd-tmpfiles[1365]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 01:11:28.923511 systemd-tmpfiles[1365]: Skipping /boot Jan 28 01:11:29.028731 zram_generator::config[1393]: No configuration found. Jan 28 01:11:30.510755 systemd[1]: Reloading finished in 2494 ms. Jan 28 01:11:30.593786 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:11:30.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:30.827000 audit: BPF prog-id=41 op=LOAD Jan 28 01:11:30.827000 audit: BPF prog-id=40 op=UNLOAD Jan 28 01:11:30.827000 audit: BPF prog-id=42 op=LOAD Jan 28 01:11:30.827000 audit: BPF prog-id=37 op=UNLOAD Jan 28 01:11:30.827000 audit: BPF prog-id=43 op=LOAD Jan 28 01:11:30.833000 audit: BPF prog-id=44 op=LOAD Jan 28 01:11:30.833000 audit: BPF prog-id=38 op=UNLOAD Jan 28 01:11:30.833000 audit: BPF prog-id=39 op=UNLOAD Jan 28 01:11:30.833000 audit: BPF prog-id=45 op=LOAD Jan 28 01:11:30.833000 audit: BPF prog-id=31 op=UNLOAD Jan 28 01:11:30.833000 audit: BPF prog-id=46 op=LOAD Jan 28 01:11:30.833000 audit: BPF prog-id=47 op=LOAD Jan 28 01:11:30.833000 audit: BPF prog-id=32 op=UNLOAD Jan 28 01:11:30.833000 audit: BPF prog-id=33 op=UNLOAD Jan 28 01:11:30.839000 audit: BPF prog-id=48 op=LOAD Jan 28 01:11:30.839000 audit: BPF prog-id=34 op=UNLOAD Jan 28 01:11:30.839000 audit: BPF prog-id=49 op=LOAD Jan 28 01:11:30.839000 audit: BPF prog-id=50 op=LOAD Jan 28 01:11:30.839000 audit: BPF prog-id=35 op=UNLOAD Jan 28 01:11:30.839000 audit: BPF prog-id=36 op=UNLOAD Jan 28 01:11:30.865884 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 01:11:30.845000 audit: BPF prog-id=51 op=LOAD Jan 28 01:11:30.845000 audit: BPF prog-id=28 op=UNLOAD Jan 28 01:11:30.845000 audit: BPF prog-id=52 op=LOAD Jan 28 01:11:30.845000 audit: BPF prog-id=53 op=LOAD Jan 28 01:11:30.845000 audit: BPF prog-id=29 op=UNLOAD Jan 28 01:11:30.845000 audit: BPF prog-id=30 op=UNLOAD Jan 28 01:11:30.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:31.528555 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 01:11:31.557063 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 01:11:31.586337 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 01:11:31.611278 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jan 28 01:11:31.645000 audit: BPF prog-id=8 op=UNLOAD Jan 28 01:11:31.645000 audit: BPF prog-id=7 op=UNLOAD Jan 28 01:11:31.661000 audit: BPF prog-id=54 op=LOAD Jan 28 01:11:31.661000 audit: BPF prog-id=55 op=LOAD Jan 28 01:11:31.670045 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:11:31.702639 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 01:11:31.750885 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:11:31.751338 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:11:31.767000 audit[1441]: SYSTEM_BOOT pid=1441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 28 01:11:31.756281 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:11:31.776887 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 01:11:31.795107 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:11:31.818211 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:11:31.862600 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:11:31.863526 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 01:11:31.864500 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 01:11:31.864679 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:11:31.881826 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 01:11:31.889837 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 01:11:31.917769 systemd-udevd[1440]: Using default interface naming scheme 'v257'. Jan 28 01:11:31.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:31.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:31.929679 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:11:31.934487 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:11:31.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:31.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:11:31.955242 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:11:31.967381 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:11:31.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:31.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:31.989287 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:11:31.994299 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:11:32.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:32.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:32.033554 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 01:11:32.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:32.153578 systemd[1]: Finished ensure-sysext.service. Jan 28 01:11:32.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:32.354612 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 01:11:32.465163 kernel: kauditd_printk_skb: 64 callbacks suppressed Jan 28 01:11:32.474505 kernel: audit: type=1130 audit(1769562692.424:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:32.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:11:32.549243 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 01:11:32.549643 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 01:11:32.556000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 28 01:11:32.557867 augenrules[1471]: No rules Jan 28 01:11:32.566083 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jan 28 01:11:32.594496 kernel: audit: type=1305 audit(1769562692.556:219): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 28 01:11:32.594607 kernel: audit: type=1300 audit(1769562692.556:219): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe392b9390 a2=420 a3=0 items=0 ppid=1435 pid=1471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:11:32.556000 audit[1471]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe392b9390 a2=420 a3=0 items=0 ppid=1435 pid=1471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:11:32.701848 kernel: audit: type=1327 audit(1769562692.556:219): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 01:11:32.556000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 01:11:32.735225 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 01:11:32.746637 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 01:11:32.804749 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:11:32.951335 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 01:11:33.184192 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 01:11:33.230797 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 01:11:33.616856 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 28 01:11:33.667639 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 28 01:11:33.667910 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 01:11:34.543137 systemd-networkd[1487]: lo: Link UP Jan 28 01:11:34.543151 systemd-networkd[1487]: lo: Gained carrier Jan 28 01:11:34.553725 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 01:11:34.591201 systemd[1]: Reached target network.target - Network. Jan 28 01:11:34.636379 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 28 01:11:34.664538 systemd-networkd[1487]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 01:11:34.664546 systemd-networkd[1487]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 01:11:34.674519 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
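The PROCTITLE audit record above encodes the auditctl command line as hex with NUL-separated arguments. A minimal decoder (Python standard library only; the function name is illustrative) recovering the argv from the value shown in the log:

```python
def decode_proctitle(hex_value: str) -> list:
    """Decode an audit PROCTITLE hex value into its NUL-separated argv."""
    return bytes.fromhex(hex_value).decode("utf-8", errors="replace").split("\x00")

print(decode_proctitle(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
))
# ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```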
Jan 28 01:11:34.681161 systemd-networkd[1487]: eth0: Link UP Jan 28 01:11:34.687194 systemd-networkd[1487]: eth0: Gained carrier Jan 28 01:11:34.687225 systemd-networkd[1487]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 01:11:34.762213 systemd-networkd[1487]: eth0: DHCPv4 address 10.0.0.18/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 28 01:11:34.786136 systemd-timesyncd[1476]: Network configuration changed, trying to establish connection. Jan 28 01:11:34.831229 systemd-timesyncd[1476]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 28 01:11:34.834168 systemd-timesyncd[1476]: Initial clock synchronization to Wed 2026-01-28 01:11:34.956208 UTC. Jan 28 01:11:34.863250 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 28 01:11:35.360191 kernel: ACPI: button: Power Button [PWRF] Jan 28 01:11:35.401185 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 01:11:35.871532 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 28 01:11:36.004204 systemd-networkd[1487]: eth0: Gained IPv6LL Jan 28 01:11:36.007608 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 28 01:11:36.032569 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 28 01:11:36.039443 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 01:11:36.065364 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 01:11:36.217655 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 01:11:36.259708 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 01:11:37.450478 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 01:11:37.800806 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:11:43.105647 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:11:43.418266 ldconfig[1437]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 01:11:43.507864 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 01:11:43.550429 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 01:11:44.219213 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 01:11:44.238319 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 01:11:44.276596 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 01:11:44.309125 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 01:11:44.338746 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 28 01:11:44.366608 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 01:11:44.417859 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 01:11:44.449085 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 28 01:11:44.480513 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. 
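systemd-networkd reports the DHCPv4 lease as 10.0.0.18/16 with gateway 10.0.0.1, acquired from 10.0.0.1. A quick consistency check with Python's standard ipaddress module that the gateway and DHCP server fall inside the acquired prefix:

```python
import ipaddress

iface = ipaddress.ip_interface("10.0.0.18/16")   # address/prefix from the lease above
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)                   # 10.0.0.0/16
print(gateway in iface.network)        # True: gateway (and DHCP server) are on-link
print(iface.network.num_addresses)     # 65536 addresses in the /16
```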
Jan 28 01:11:44.501431 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 01:11:44.526495 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 01:11:44.526541 systemd[1]: Reached target paths.target - Path Units. Jan 28 01:11:44.543487 systemd[1]: Reached target timers.target - Timer Units. Jan 28 01:11:44.567203 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 01:11:44.697658 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 01:11:44.751573 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 28 01:11:44.772216 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 28 01:11:44.788510 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 28 01:11:44.829791 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 01:11:44.846808 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 28 01:11:44.863861 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 01:11:44.885782 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 01:11:44.896488 systemd[1]: Reached target basic.target - Basic System. Jan 28 01:11:44.908771 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:11:44.909178 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:11:44.921548 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 01:11:44.978436 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 28 01:11:45.130524 kernel: kvm_amd: TSC scaling supported Jan 28 01:11:45.131242 kernel: kvm_amd: Nested Virtualization enabled Jan 28 01:11:45.131287 kernel: kvm_amd: Nested Paging enabled Jan 28 01:11:45.130315 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 01:11:45.135197 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 28 01:11:45.135245 kernel: kvm_amd: PMU virtualization is disabled Jan 28 01:11:45.306309 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 01:11:45.432407 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 01:11:45.542569 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 01:11:45.808495 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 01:11:45.920216 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 28 01:11:45.997567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:11:46.039511 jq[1549]: false Jan 28 01:11:46.046818 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 01:11:46.115732 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 01:11:46.129808 extend-filesystems[1550]: Found /dev/vda6 Jan 28 01:11:46.141811 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 01:11:46.185540 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 28 01:11:46.198345 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Refreshing passwd entry cache Jan 28 01:11:46.199218 oslogin_cache_refresh[1551]: Refreshing passwd entry cache Jan 28 01:11:46.208478 extend-filesystems[1550]: Found /dev/vda9 Jan 28 01:11:46.215557 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 01:11:46.339409 extend-filesystems[1550]: Checking size of /dev/vda9 Jan 28 01:11:46.393375 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Failure getting users, quitting Jan 28 01:11:46.393375 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 28 01:11:46.383722 oslogin_cache_refresh[1551]: Failure getting users, quitting Jan 28 01:11:46.383828 oslogin_cache_refresh[1551]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 28 01:11:46.436433 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Refreshing group entry cache Jan 28 01:11:46.421507 oslogin_cache_refresh[1551]: Refreshing group entry cache Jan 28 01:11:46.436551 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 01:11:46.458572 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 01:11:46.480505 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 01:11:46.524612 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Failure getting groups, quitting Jan 28 01:11:46.524612 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 28 01:11:46.521566 oslogin_cache_refresh[1551]: Failure getting groups, quitting Jan 28 01:11:46.521710 oslogin_cache_refresh[1551]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 28 01:11:46.527476 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 01:11:46.570591 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 01:11:46.612783 extend-filesystems[1550]: Resized partition /dev/vda9 Jan 28 01:11:46.695453 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 28 01:11:46.747287 extend-filesystems[1580]: resize2fs 1.47.3 (8-Jul-2025) Jan 28 01:11:46.842336 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 28 01:11:46.759162 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 01:11:47.016651 jq[1579]: true Jan 28 01:11:46.806843 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 01:11:46.895174 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 28 01:11:46.906731 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 28 01:11:47.126145 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 01:11:47.133128 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 01:11:47.216219 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 01:11:47.218092 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 28 01:11:47.240308 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 28 01:11:47.329430 extend-filesystems[1580]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 28 01:11:47.329430 extend-filesystems[1580]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 28 01:11:47.329430 extend-filesystems[1580]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 28 01:11:47.320475 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 01:11:47.503535 update_engine[1573]: I20260128 01:11:47.315842 1573 main.cc:92] Flatcar Update Engine starting Jan 28 01:11:47.523394 extend-filesystems[1550]: Resized filesystem in /dev/vda9 Jan 28 01:11:47.464786 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 01:11:47.466276 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 01:11:48.120636 jq[1589]: true Jan 28 01:11:48.156648 tar[1587]: linux-amd64/LICENSE Jan 28 01:11:48.156648 tar[1587]: linux-amd64/helm Jan 28 01:11:48.173573 sshd_keygen[1581]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 01:11:48.399310 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 28 01:11:48.416322 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 28 01:11:48.512763 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 01:11:49.459608 dbus-daemon[1547]: [system] SELinux support is enabled Jan 28 01:11:49.494874 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 01:11:49.601810 update_engine[1573]: I20260128 01:11:49.592528 1573 update_check_scheduler.cc:74] Next update check in 10m28s Jan 28 01:11:50.017575 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 01:11:50.017621 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 01:11:50.041886 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 01:11:50.042176 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 01:11:50.066244 systemd[1]: Started update-engine.service - Update Engine. Jan 28 01:11:50.256885 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 01:11:50.293320 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 01:11:51.141702 bash[1636]: Updated "/home/core/.ssh/authorized_keys" Jan 28 01:11:51.179779 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 01:11:51.186281 systemd-logind[1566]: Watching system buttons on /dev/input/event2 (Power Button) Jan 28 01:11:51.187765 systemd-logind[1566]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 28 01:11:51.195283 systemd-logind[1566]: New seat seat0. Jan 28 01:11:51.234525 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 01:11:51.272562 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 01:11:52.172813 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
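Per the resize2fs output above, the root filesystem on /dev/vda9 grows online from 456704 to 1784827 blocks of 4 KiB. Translating those block counts into sizes as a sanity check:

```python
BLOCK = 4096                       # 4 KiB blocks, as stated by resize2fs
old_blocks, new_blocks = 456_704, 1_784_827

to_gib = lambda blocks: blocks * BLOCK / 2**30
print(f"before: {to_gib(old_blocks):.2f} GiB")   # ~1.74 GiB
print(f"after:  {to_gib(new_blocks):.2f} GiB")   # ~6.81 GiB
```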
Jan 28 01:11:52.570911 systemd[1]: Started sshd@0-10.0.0.18:22-10.0.0.1:43138.service - OpenSSH per-connection server daemon (10.0.0.1:43138). Jan 28 01:11:52.734848 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 28 01:11:53.591647 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 01:11:53.614575 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 01:11:53.660280 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 01:11:54.541568 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 01:11:54.578843 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 01:11:54.614148 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 28 01:11:54.674187 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 01:11:57.030210 locksmithd[1638]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 01:11:57.058317 kernel: EDAC MC: Ver: 3.0.0 Jan 28 01:11:57.163069 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 43138 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:11:57.172278 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:11:57.221822 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 01:11:57.246419 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 01:11:57.312796 systemd-logind[1566]: New session 1 of user core. Jan 28 01:11:57.345750 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 01:11:57.368277 containerd[1594]: time="2026-01-28T01:11:57Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 28 01:11:57.391647 containerd[1594]: time="2026-01-28T01:11:57.373826682Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 28 01:11:57.377843 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 28 01:11:57.456844 (systemd)[1671]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:11:57.466252 containerd[1594]: time="2026-01-28T01:11:57.465899999Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.563µs" Jan 28 01:11:57.466390 containerd[1594]: time="2026-01-28T01:11:57.466364755Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 28 01:11:57.468396 containerd[1594]: time="2026-01-28T01:11:57.468368435Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 28 01:11:57.470494 containerd[1594]: time="2026-01-28T01:11:57.470467120Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 28 01:11:57.473769 containerd[1594]: time="2026-01-28T01:11:57.473743368Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 28 01:11:57.473876 containerd[1594]: time="2026-01-28T01:11:57.473858692Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 01:11:57.475268 containerd[1594]: time="2026-01-28T01:11:57.474889836Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 01:11:57.475360 containerd[1594]: time="2026-01-28T01:11:57.475334908Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 01:11:57.476309 containerd[1594]: time="2026-01-28T01:11:57.476278101Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 01:11:57.476406 containerd[1594]: time="2026-01-28T01:11:57.476388125Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 01:11:57.476656 containerd[1594]: time="2026-01-28T01:11:57.476636177Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 01:11:57.476748 containerd[1594]: time="2026-01-28T01:11:57.476724178Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 28 01:11:57.477641 containerd[1594]: time="2026-01-28T01:11:57.477422221Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 28 01:11:57.477728 containerd[1594]: time="2026-01-28T01:11:57.477708647Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 28 01:11:57.478121 containerd[1594]: time="2026-01-28T01:11:57.477907675Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 28 01:11:57.478786 containerd[1594]: time="2026-01-28T01:11:57.478762777Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 01:11:57.478886 containerd[1594]: time="2026-01-28T01:11:57.478867259Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 01:11:57.479637 containerd[1594]: time="2026-01-28T01:11:57.479310946Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 28 01:11:57.480096 containerd[1594]: time="2026-01-28T01:11:57.480073401Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 28 01:11:57.480765 systemd-logind[1566]: New session 2 of user core. Jan 28 01:11:57.482402 containerd[1594]: time="2026-01-28T01:11:57.482368142Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 28 01:11:57.482680 containerd[1594]: time="2026-01-28T01:11:57.482659145Z" level=info msg="metadata content store policy set" policy=shared Jan 28 01:11:57.518504 containerd[1594]: time="2026-01-28T01:11:57.516762617Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 28 01:11:57.518504 containerd[1594]: time="2026-01-28T01:11:57.517655270Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 01:11:57.518504 containerd[1594]: time="2026-01-28T01:11:57.517775231Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 01:11:57.518504 containerd[1594]: time="2026-01-28T01:11:57.517802282Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 28 01:11:57.518504 containerd[1594]: time="2026-01-28T01:11:57.517823913Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 28 01:11:57.518504 containerd[1594]: time="2026-01-28T01:11:57.518120166Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 28 01:11:57.518504 containerd[1594]: time="2026-01-28T01:11:57.518145261Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 28 01:11:57.518504 containerd[1594]: time="2026-01-28T01:11:57.518160117Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 28 01:11:57.518504 containerd[1594]: time="2026-01-28T01:11:57.518180262Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 28 01:11:57.518504 containerd[1594]: time="2026-01-28T01:11:57.518386978Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 28 01:11:57.518504 containerd[1594]: time="2026-01-28T01:11:57.518410015Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 28 01:11:57.523117 containerd[1594]: time="2026-01-28T01:11:57.518632803Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 28 01:11:57.523117 containerd[1594]: time="2026-01-28T01:11:57.518662594Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 28 01:11:57.523117 containerd[1594]: time="2026-01-28T01:11:57.518680100Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 28 
01:11:57.526387 containerd[1594]: time="2026-01-28T01:11:57.525241150Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 28 01:11:57.526387 containerd[1594]: time="2026-01-28T01:11:57.525789751Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 28 01:11:57.526387 containerd[1594]: time="2026-01-28T01:11:57.525819492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 28 01:11:57.526387 containerd[1594]: time="2026-01-28T01:11:57.525835894Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 28 01:11:57.526387 containerd[1594]: time="2026-01-28T01:11:57.525858409Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 28 01:11:57.526387 containerd[1594]: time="2026-01-28T01:11:57.525880742Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 28 01:11:57.526387 containerd[1594]: time="2026-01-28T01:11:57.525897123Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 28 01:11:57.526387 containerd[1594]: time="2026-01-28T01:11:57.526097637Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 28 01:11:57.526387 containerd[1594]: time="2026-01-28T01:11:57.526120955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 28 01:11:57.526387 containerd[1594]: time="2026-01-28T01:11:57.526137949Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 28 01:11:57.526387 containerd[1594]: time="2026-01-28T01:11:57.526154863Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 28 01:11:57.526387 containerd[1594]: time="2026-01-28T01:11:57.526190165Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 28 01:11:57.526887 containerd[1594]: time="2026-01-28T01:11:57.526853899Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 28 01:11:57.527696 containerd[1594]: time="2026-01-28T01:11:57.527234207Z" level=info msg="Start snapshots syncer" Jan 28 01:11:57.528543 containerd[1594]: time="2026-01-28T01:11:57.527789564Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 28 01:11:57.529851 containerd[1594]: time="2026-01-28T01:11:57.529640016Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 28 01:11:57.530797 containerd[1594]: time="2026-01-28T01:11:57.529884013Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 28 01:11:57.530797 containerd[1594]: time="2026-01-28T01:11:57.530544374Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 28 01:11:57.530797 containerd[1594]: time="2026-01-28T01:11:57.530722895Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 28 01:11:57.530797 containerd[1594]: time="2026-01-28T01:11:57.530760928Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 28 01:11:57.530797 containerd[1594]: time="2026-01-28T01:11:57.530780390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 28 01:11:57.530797 containerd[1594]: time="2026-01-28T01:11:57.530794203Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 28 01:11:57.533205 containerd[1594]: time="2026-01-28T01:11:57.530808747Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 28 01:11:57.533205 containerd[1594]: time="2026-01-28T01:11:57.530823663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 28 01:11:57.533205 containerd[1594]: time="2026-01-28T01:11:57.530836441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 28 01:11:57.533205 containerd[1594]: time="2026-01-28T01:11:57.530848056Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 28 
01:11:57.533205 containerd[1594]: time="2026-01-28T01:11:57.530862128Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 28 01:11:57.533205 containerd[1594]: time="2026-01-28T01:11:57.530894860Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 01:11:57.533205 containerd[1594]: time="2026-01-28T01:11:57.532757488Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 01:11:57.533205 containerd[1594]: time="2026-01-28T01:11:57.532781981Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 01:11:57.533205 containerd[1594]: time="2026-01-28T01:11:57.532799747Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 01:11:57.533205 containerd[1594]: time="2026-01-28T01:11:57.532812797Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 28 01:11:57.533205 containerd[1594]: time="2026-01-28T01:11:57.532828404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 28 01:11:57.533205 containerd[1594]: time="2026-01-28T01:11:57.532845349Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 28 01:11:57.533205 containerd[1594]: time="2026-01-28T01:11:57.532870293Z" level=info msg="runtime interface created" Jan 28 01:11:57.533205 containerd[1594]: time="2026-01-28T01:11:57.532878001Z" level=info msg="created NRI interface" Jan 28 01:11:57.533205 containerd[1594]: time="2026-01-28T01:11:57.532889254Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 28 01:11:57.533751 containerd[1594]: time="2026-01-28T01:11:57.532905424Z" level=info msg="Connect containerd service" Jan 28 01:11:57.533751 containerd[1594]: time="2026-01-28T01:11:57.533108497Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 01:11:57.538758 containerd[1594]: time="2026-01-28T01:11:57.538720673Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 01:11:57.783695 tar[1587]: linux-amd64/README.md Jan 28 01:11:57.870618 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 01:11:57.951751 systemd[1671]: Queued start job for default target default.target. Jan 28 01:11:57.973243 systemd[1671]: Created slice app.slice - User Application Slice. Jan 28 01:11:57.973299 systemd[1671]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 28 01:11:57.973321 systemd[1671]: Reached target paths.target - Paths. Jan 28 01:11:57.973408 systemd[1671]: Reached target timers.target - Timers. Jan 28 01:11:57.990379 systemd[1671]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 01:11:57.999322 systemd[1671]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 28 01:11:58.082242 systemd[1671]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. 
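containerd's CRI plugin warns above that no network config was found in /etc/cni/net.d, so CNI stays uninitialized until something writes a config there; in a real cluster that is normally done by the pod network add-on, not by hand. Purely as a minimal sketch of the kind of file the CRI plugin is looking for, the following writes an illustrative bridge conflist; the file name, network name, bridge name, and subnet are assumptions, not values taken from this log.

#!/usr/bin/env python3
"""Sketch: write a minimal CNI conflist so containerd's CRI plugin can
initialize pod networking. Every name and address below is illustrative;
a real network add-on installs its own config in /etc/cni/net.d."""
import json
import pathlib

conf = {
    "cniVersion": "1.0.0",
    "name": "demo-net",                      # hypothetical network name
    "plugins": [
        {
            "type": "bridge",                # CNI bridge plugin (binaries live in /opt/cni/bin)
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.244.0.0/16",   # assumed pod CIDR, not from this log
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

target = pathlib.Path("/etc/cni/net.d/10-demo.conflist")
target.parent.mkdir(parents=True, exist_ok=True)
target.write_text(json.dumps(conf, indent=2))
print(f"wrote {target}")

Further down, the "Start cni network conf syncer for default" message shows containerd watching this directory, so a config dropped there is picked up without restarting the daemon.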
Jan 28 01:11:58.089881 systemd[1671]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 01:11:58.090243 systemd[1671]: Reached target sockets.target - Sockets. Jan 28 01:11:58.090422 systemd[1671]: Reached target basic.target - Basic System. Jan 28 01:11:58.090495 systemd[1671]: Reached target default.target - Main User Target. Jan 28 01:11:58.090540 systemd[1671]: Startup finished in 581ms. Jan 28 01:11:58.090867 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 01:11:58.096586 containerd[1594]: time="2026-01-28T01:11:58.093302372Z" level=info msg="Start subscribing containerd event" Jan 28 01:11:58.102079 containerd[1594]: time="2026-01-28T01:11:58.096780630Z" level=info msg="Start recovering state" Jan 28 01:11:58.102079 containerd[1594]: time="2026-01-28T01:11:58.097637528Z" level=info msg="Start event monitor" Jan 28 01:11:58.102079 containerd[1594]: time="2026-01-28T01:11:58.097746736Z" level=info msg="Start cni network conf syncer for default" Jan 28 01:11:58.102079 containerd[1594]: time="2026-01-28T01:11:58.097838259Z" level=info msg="Start streaming server" Jan 28 01:11:58.102079 containerd[1594]: time="2026-01-28T01:11:58.097851477Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 28 01:11:58.102079 containerd[1594]: time="2026-01-28T01:11:58.097861272Z" level=info msg="runtime interface starting up..." Jan 28 01:11:58.102079 containerd[1594]: time="2026-01-28T01:11:58.097869412Z" level=info msg="starting plugins..." Jan 28 01:11:58.102079 containerd[1594]: time="2026-01-28T01:11:58.097887970Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 28 01:11:58.105174 containerd[1594]: time="2026-01-28T01:11:58.105145859Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 01:11:58.105836 containerd[1594]: time="2026-01-28T01:11:58.105813458Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 01:11:58.116690 containerd[1594]: time="2026-01-28T01:11:58.116182384Z" level=info msg="containerd successfully booted in 0.757471s" Jan 28 01:11:58.127758 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 01:11:58.175343 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 01:11:58.321527 systemd[1]: Started sshd@1-10.0.0.18:22-10.0.0.1:53848.service - OpenSSH per-connection server daemon (10.0.0.1:53848). Jan 28 01:11:58.584547 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 53848 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:11:58.594433 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:11:58.647234 systemd-logind[1566]: New session 3 of user core. Jan 28 01:11:58.668324 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 01:11:58.795205 sshd[1708]: Connection closed by 10.0.0.1 port 53848 Jan 28 01:11:58.792287 sshd-session[1704]: pam_unix(sshd:session): session closed for user core Jan 28 01:11:58.837797 systemd[1]: sshd@1-10.0.0.18:22-10.0.0.1:53848.service: Deactivated successfully. Jan 28 01:11:58.846649 systemd[1]: session-3.scope: Deactivated successfully. Jan 28 01:11:58.854194 systemd-logind[1566]: Session 3 logged out. Waiting for processes to exit. Jan 28 01:11:58.868811 systemd[1]: Started sshd@2-10.0.0.18:22-10.0.0.1:53852.service - OpenSSH per-connection server daemon (10.0.0.1:53852). Jan 28 01:11:58.895393 systemd-logind[1566]: Removed session 3. 
Jan 28 01:11:59.167854 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 53852 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:11:59.169909 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:11:59.218475 systemd-logind[1566]: New session 4 of user core. Jan 28 01:11:59.243284 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 01:11:59.636645 sshd[1719]: Connection closed by 10.0.0.1 port 53852 Jan 28 01:11:59.635539 sshd-session[1714]: pam_unix(sshd:session): session closed for user core Jan 28 01:11:59.651636 systemd[1]: sshd@2-10.0.0.18:22-10.0.0.1:53852.service: Deactivated successfully. Jan 28 01:11:59.655906 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 01:11:59.660780 systemd-logind[1566]: Session 4 logged out. Waiting for processes to exit. Jan 28 01:11:59.671601 systemd-logind[1566]: Removed session 4. Jan 28 01:12:02.050400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:12:02.054397 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 01:12:02.057288 systemd[1]: Startup finished in 14.910s (kernel) + 1min 1.104s (initrd) + 1min 4.372s (userspace) = 2min 20.388s. Jan 28 01:12:02.277687 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:12:09.775251 systemd[1]: Started sshd@3-10.0.0.18:22-10.0.0.1:56232.service - OpenSSH per-connection server daemon (10.0.0.1:56232). Jan 28 01:12:11.393300 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 56232 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:12:11.408185 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:12:11.508132 systemd-logind[1566]: New session 5 of user core. Jan 28 01:12:11.524710 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 01:12:11.940472 sshd[1745]: Connection closed by 10.0.0.1 port 56232 Jan 28 01:12:11.943908 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Jan 28 01:12:11.982215 systemd[1]: sshd@3-10.0.0.18:22-10.0.0.1:56232.service: Deactivated successfully. Jan 28 01:12:11.990897 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 01:12:12.002377 systemd-logind[1566]: Session 5 logged out. Waiting for processes to exit. Jan 28 01:12:12.045352 systemd[1]: Started sshd@4-10.0.0.18:22-10.0.0.1:56236.service - OpenSSH per-connection server daemon (10.0.0.1:56236). Jan 28 01:12:12.054145 systemd-logind[1566]: Removed session 5. Jan 28 01:12:12.365242 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 56236 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:12:12.368855 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:12:12.640135 systemd-logind[1566]: New session 6 of user core. Jan 28 01:12:12.749531 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 01:12:13.058733 sshd[1757]: Connection closed by 10.0.0.1 port 56236 Jan 28 01:12:13.060732 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Jan 28 01:12:13.094365 systemd[1]: sshd@4-10.0.0.18:22-10.0.0.1:56236.service: Deactivated successfully. Jan 28 01:12:13.100527 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 01:12:13.104407 systemd-logind[1566]: Session 6 logged out. Waiting for processes to exit. 
Jan 28 01:12:13.110445 systemd[1]: Started sshd@5-10.0.0.18:22-10.0.0.1:58788.service - OpenSSH per-connection server daemon (10.0.0.1:58788). Jan 28 01:12:13.113341 systemd-logind[1566]: Removed session 6. Jan 28 01:12:13.771763 kubelet[1732]: E0128 01:12:13.771090 1732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:12:13.795896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:12:13.796654 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:12:13.798654 systemd[1]: kubelet.service: Consumed 9.348s CPU time, 270.5M memory peak. Jan 28 01:12:13.973449 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 58788 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:12:13.978414 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:12:14.300870 systemd-logind[1566]: New session 7 of user core. Jan 28 01:12:14.348457 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 28 01:12:14.608508 sshd[1768]: Connection closed by 10.0.0.1 port 58788 Jan 28 01:12:14.612196 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Jan 28 01:12:14.666734 systemd[1]: sshd@5-10.0.0.18:22-10.0.0.1:58788.service: Deactivated successfully. Jan 28 01:12:14.677810 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 01:12:14.685430 systemd-logind[1566]: Session 7 logged out. Waiting for processes to exit. Jan 28 01:12:14.701901 systemd[1]: Started sshd@6-10.0.0.18:22-10.0.0.1:58798.service - OpenSSH per-connection server daemon (10.0.0.1:58798). Jan 28 01:12:14.706780 systemd-logind[1566]: Removed session 7. Jan 28 01:12:15.564663 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 58798 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:12:15.576891 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:12:15.658884 systemd-logind[1566]: New session 8 of user core. Jan 28 01:12:15.669279 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 28 01:12:16.133102 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 01:12:16.135210 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:12:23.953342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 01:12:23.958720 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:12:29.968445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:12:30.267564 (kubelet)[1807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:12:34.680687 update_engine[1573]: I20260128 01:12:34.675279 1573 update_attempter.cc:509] Updating boot flags... 
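The kubelet crash loop that starts here, and repeats through the rest of the log, is caused by the missing /var/lib/kubelet/config.yaml. On a kubeadm-managed node that file is written during `kubeadm init` or `kubeadm join`, so the failures are expected until the node is bootstrapped. Purely to illustrate the file the error refers to, the sketch below writes a minimal KubeletConfiguration; every value is an assumption, not a setting recovered from this system.

#!/usr/bin/env python3
"""Sketch: create the kubelet config file whose absence causes the crash loop.
Normally kubeadm generates this file; the contents here are a minimal
illustration and every field value is an assumption."""
import pathlib

KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd          # matches the SystemdCgroup=true runc option logged above
failSwapOn: false
authentication:
  anonymous:
    enabled: false
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(KUBELET_CONFIG)
print(f"wrote {path}")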
Jan 28 01:12:36.283304 kubelet[1807]: E0128 01:12:36.278662 1807 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:12:36.474187 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:12:36.474880 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:12:36.544739 systemd[1]: kubelet.service: Consumed 5.280s CPU time, 110.2M memory peak. Jan 28 01:12:39.072856 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 28 01:12:39.131547 (dockerd)[1834]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 01:12:46.459902 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 01:12:46.871198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:12:50.407854 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:12:50.526400 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:12:52.258281 dockerd[1834]: time="2026-01-28T01:12:52.257207866Z" level=info msg="Starting up" Jan 28 01:12:52.270225 dockerd[1834]: time="2026-01-28T01:12:52.268600658Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 28 01:12:52.282830 kubelet[1849]: E0128 01:12:52.282763 1849 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:12:52.298451 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:12:52.299403 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:12:52.305406 systemd[1]: kubelet.service: Consumed 2.521s CPU time, 109.7M memory peak. Jan 28 01:12:52.427257 dockerd[1834]: time="2026-01-28T01:12:52.424205517Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 28 01:12:53.414209 systemd[1]: var-lib-docker-metacopy\x2dcheck227668828-merged.mount: Deactivated successfully. Jan 28 01:12:53.573907 dockerd[1834]: time="2026-01-28T01:12:53.573183470Z" level=info msg="Loading containers: start." Jan 28 01:12:53.696231 kernel: Initializing XFRM netlink socket Jan 28 01:12:59.950858 systemd-networkd[1487]: docker0: Link UP Jan 28 01:13:00.062334 dockerd[1834]: time="2026-01-28T01:13:00.061287427Z" level=info msg="Loading containers: done." 
Jan 28 01:13:01.352904 dockerd[1834]: time="2026-01-28T01:13:01.350855780Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 01:13:01.352904 dockerd[1834]: time="2026-01-28T01:13:01.351714090Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 28 01:13:01.352904 dockerd[1834]: time="2026-01-28T01:13:01.352203034Z" level=info msg="Initializing buildkit" Jan 28 01:13:01.895671 dockerd[1834]: time="2026-01-28T01:13:01.895523147Z" level=info msg="Completed buildkit initialization" Jan 28 01:13:01.988361 dockerd[1834]: time="2026-01-28T01:13:01.985557648Z" level=info msg="Daemon has completed initialization" Jan 28 01:13:01.987837 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 01:13:01.995555 dockerd[1834]: time="2026-01-28T01:13:01.990844693Z" level=info msg="API listen on /run/docker.sock" Jan 28 01:13:02.429554 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 01:13:02.465090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:13:05.009559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:13:05.159128 (kubelet)[2074]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:13:07.210811 kubelet[2074]: E0128 01:13:07.206536 2074 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:13:07.223332 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:13:07.223804 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:13:07.226226 systemd[1]: kubelet.service: Consumed 2.060s CPU time, 108.9M memory peak. Jan 28 01:13:13.396104 containerd[1594]: time="2026-01-28T01:13:13.393696667Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 28 01:13:17.455162 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 28 01:13:17.492236 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:13:19.760629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3677209452.mount: Deactivated successfully. Jan 28 01:13:21.520612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:13:21.613378 (kubelet)[2107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:13:22.899134 kubelet[2107]: E0128 01:13:22.895602 2107 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:13:22.916432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:13:22.917459 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 28 01:13:22.923460 systemd[1]: kubelet.service: Consumed 2.287s CPU time, 110.6M memory peak. Jan 28 01:13:32.978808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 28 01:13:33.016752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:13:34.808210 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:13:34.865861 (kubelet)[2167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:13:35.701325 kubelet[2167]: E0128 01:13:35.700501 2167 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:13:35.719711 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:13:35.723017 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:13:35.725889 systemd[1]: kubelet.service: Consumed 1.307s CPU time, 110.2M memory peak. Jan 28 01:13:36.581112 containerd[1594]: time="2026-01-28T01:13:36.574210846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:13:36.581112 containerd[1594]: time="2026-01-28T01:13:36.579781356Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29059555" Jan 28 01:13:36.587498 containerd[1594]: time="2026-01-28T01:13:36.587355647Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:13:36.618231 containerd[1594]: time="2026-01-28T01:13:36.617399817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:13:36.621806 containerd[1594]: time="2026-01-28T01:13:36.621610584Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 23.227062622s" Jan 28 01:13:36.622134 containerd[1594]: time="2026-01-28T01:13:36.621877187Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 28 01:13:36.658074 containerd[1594]: time="2026-01-28T01:13:36.657679024Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 28 01:13:45.177364 containerd[1594]: time="2026-01-28T01:13:45.176069569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:13:45.185382 containerd[1594]: time="2026-01-28T01:13:45.181350228Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 28 01:13:45.185536 containerd[1594]: 
time="2026-01-28T01:13:45.185405087Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:13:45.194990 containerd[1594]: time="2026-01-28T01:13:45.194647008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:13:45.203416 containerd[1594]: time="2026-01-28T01:13:45.202802401Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 8.539634632s" Jan 28 01:13:45.203416 containerd[1594]: time="2026-01-28T01:13:45.202858477Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 28 01:13:45.211765 containerd[1594]: time="2026-01-28T01:13:45.210702023Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 28 01:13:45.937551 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 28 01:13:45.967413 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:13:46.866378 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:13:46.911345 (kubelet)[2193]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:13:47.804892 kubelet[2193]: E0128 01:13:47.804731 2193 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:13:47.825897 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:13:47.826843 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:13:47.837237 systemd[1]: kubelet.service: Consumed 797ms CPU time, 110.7M memory peak. 
Jan 28 01:13:54.685429 containerd[1594]: time="2026-01-28T01:13:54.679292387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:13:54.693837 containerd[1594]: time="2026-01-28T01:13:54.689725811Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 28 01:13:54.697874 containerd[1594]: time="2026-01-28T01:13:54.696414267Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:13:54.714403 containerd[1594]: time="2026-01-28T01:13:54.714293262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:13:54.871832 containerd[1594]: time="2026-01-28T01:13:54.870568009Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 9.659763332s" Jan 28 01:13:54.871832 containerd[1594]: time="2026-01-28T01:13:54.871084822Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 28 01:13:54.984119 containerd[1594]: time="2026-01-28T01:13:54.967848938Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 28 01:13:57.930866 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 28 01:13:57.943513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:14:00.006609 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:14:00.103818 (kubelet)[2214]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:14:01.365093 kubelet[2214]: E0128 01:14:01.364141 2214 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:14:01.387068 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:14:01.387335 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:14:01.393086 systemd[1]: kubelet.service: Consumed 2.332s CPU time, 111.8M memory peak. Jan 28 01:14:02.484671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1171225390.mount: Deactivated successfully. Jan 28 01:14:11.447363 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 28 01:14:11.463558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:14:14.225443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 01:14:14.282556 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:14:14.666412 containerd[1594]: time="2026-01-28T01:14:14.660339233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:14:14.680400 containerd[1594]: time="2026-01-28T01:14:14.679776184Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31159536" Jan 28 01:14:14.689176 containerd[1594]: time="2026-01-28T01:14:14.688548648Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:14:14.700767 containerd[1594]: time="2026-01-28T01:14:14.700318917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:14:14.704397 containerd[1594]: time="2026-01-28T01:14:14.701659722Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 19.733589016s" Jan 28 01:14:14.704397 containerd[1594]: time="2026-01-28T01:14:14.701799936Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 28 01:14:14.710347 containerd[1594]: time="2026-01-28T01:14:14.709604554Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 28 01:14:14.925356 kubelet[2235]: E0128 01:14:14.924480 2235 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:14:14.961872 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:14:14.962419 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:14:14.968525 systemd[1]: kubelet.service: Consumed 1.226s CPU time, 108.6M memory peak. Jan 28 01:14:18.521315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2435125755.mount: Deactivated successfully. Jan 28 01:14:25.209776 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 28 01:14:25.254535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:14:27.658288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 01:14:27.693464 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:14:29.829539 kubelet[2305]: E0128 01:14:29.825084 2305 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:14:29.899356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:14:29.902109 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:14:30.115528 systemd[1]: kubelet.service: Consumed 1.861s CPU time, 114.9M memory peak. Jan 28 01:14:37.783315 containerd[1594]: time="2026-01-28T01:14:37.780852393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:14:37.800263 containerd[1594]: time="2026-01-28T01:14:37.797869004Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18554426" Jan 28 01:14:37.817164 containerd[1594]: time="2026-01-28T01:14:37.815804501Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:14:37.857725 containerd[1594]: time="2026-01-28T01:14:37.857663464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:14:37.868295 containerd[1594]: time="2026-01-28T01:14:37.868239444Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 23.158502351s" Jan 28 01:14:37.868858 containerd[1594]: time="2026-01-28T01:14:37.868650114Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 28 01:14:37.951821 containerd[1594]: time="2026-01-28T01:14:37.951481199Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 28 01:14:39.935697 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 28 01:14:39.964343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:14:40.040532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount994545692.mount: Deactivated successfully. 
Jan 28 01:14:40.103728 containerd[1594]: time="2026-01-28T01:14:40.102705577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:14:40.113026 containerd[1594]: time="2026-01-28T01:14:40.112874433Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=317082" Jan 28 01:14:40.123112 containerd[1594]: time="2026-01-28T01:14:40.122145412Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:14:40.145764 containerd[1594]: time="2026-01-28T01:14:40.143735199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:14:40.145764 containerd[1594]: time="2026-01-28T01:14:40.145288656Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.174664406s" Jan 28 01:14:40.145764 containerd[1594]: time="2026-01-28T01:14:40.145323952Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 28 01:14:40.155310 containerd[1594]: time="2026-01-28T01:14:40.154698577Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 28 01:14:41.348754 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:14:41.412861 (kubelet)[2326]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:14:41.983524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1495354490.mount: Deactivated successfully. Jan 28 01:14:42.245870 kubelet[2326]: E0128 01:14:42.242740 2326 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:14:42.260616 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:14:42.262266 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:14:42.268386 systemd[1]: kubelet.service: Consumed 1.329s CPU time, 110.7M memory peak. Jan 28 01:14:52.440521 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 28 01:14:52.459707 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:14:54.184545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 01:14:54.270819 (kubelet)[2393]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:14:55.474876 kubelet[2393]: E0128 01:14:55.473705 2393 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:14:55.494735 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:14:55.496336 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:14:55.501346 systemd[1]: kubelet.service: Consumed 1.604s CPU time, 110.6M memory peak. Jan 28 01:15:05.903450 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 28 01:15:06.141874 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:15:11.251575 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:15:11.451463 (kubelet)[2414]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:15:16.134353 kubelet[2414]: E0128 01:15:16.128470 2414 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:15:16.188387 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:15:16.192419 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:15:16.199238 systemd[1]: kubelet.service: Consumed 3.292s CPU time, 110.3M memory peak. 
Jan 28 01:15:17.987694 containerd[1594]: time="2026-01-28T01:15:17.982623031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:15:18.004595 containerd[1594]: time="2026-01-28T01:15:18.004538747Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=56514996" Jan 28 01:15:18.021452 containerd[1594]: time="2026-01-28T01:15:18.016895417Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:15:18.035657 containerd[1594]: time="2026-01-28T01:15:18.034564002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:15:18.037728 containerd[1594]: time="2026-01-28T01:15:18.037480298Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 37.880603306s" Jan 28 01:15:18.037728 containerd[1594]: time="2026-01-28T01:15:18.037634154Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 28 01:15:26.435547 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 28 01:15:26.476649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:15:28.042778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:15:28.173579 (kubelet)[2452]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:15:29.288605 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:15:29.389358 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 01:15:29.396088 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:15:29.403238 systemd[1]: kubelet.service: Consumed 994ms CPU time, 109.8M memory peak. Jan 28 01:15:29.449375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:15:29.665871 systemd[1]: Reload requested from client PID 2467 ('systemctl') (unit session-8.scope)... Jan 28 01:15:29.666201 systemd[1]: Reloading... Jan 28 01:15:30.476631 zram_generator::config[2513]: No configuration found. Jan 28 01:15:31.618674 systemd[1]: Reloading finished in 1943 ms. Jan 28 01:15:31.955398 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:15:31.980687 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 01:15:31.981597 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:15:31.981779 systemd[1]: kubelet.service: Consumed 477ms CPU time, 98.5M memory peak. Jan 28 01:15:31.989896 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:15:33.166317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 01:15:33.215346 (kubelet)[2563]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:15:33.825437 kubelet[2563]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:15:33.825437 kubelet[2563]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 01:15:33.825437 kubelet[2563]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:15:33.838491 kubelet[2563]: I0128 01:15:33.838093 2563 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:15:34.649758 kubelet[2563]: I0128 01:15:34.648343 2563 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 01:15:34.649758 kubelet[2563]: I0128 01:15:34.650084 2563 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:15:34.653235 kubelet[2563]: I0128 01:15:34.652476 2563 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 01:15:34.921582 kubelet[2563]: E0128 01:15:34.919732 2563 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:15:34.938853 kubelet[2563]: I0128 01:15:34.936888 2563 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:15:34.997238 kubelet[2563]: I0128 01:15:34.996801 2563 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 01:15:35.037836 kubelet[2563]: I0128 01:15:35.037601 2563 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 01:15:35.037836 kubelet[2563]: I0128 01:15:35.038733 2563 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:15:35.040272 kubelet[2563]: I0128 01:15:35.038780 2563 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 01:15:35.040272 kubelet[2563]: I0128 01:15:35.039609 2563 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 01:15:35.040272 kubelet[2563]: I0128 01:15:35.039627 2563 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 01:15:35.044553 kubelet[2563]: I0128 01:15:35.041455 2563 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:15:35.056486 kubelet[2563]: I0128 01:15:35.053850 2563 kubelet.go:446] "Attempting to sync node with API server" Jan 28 01:15:35.059604 kubelet[2563]: I0128 01:15:35.059583 2563 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:15:35.061476 kubelet[2563]: I0128 01:15:35.060460 2563 kubelet.go:352] "Adding apiserver pod source" Jan 28 01:15:35.061476 kubelet[2563]: I0128 01:15:35.060819 2563 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:15:35.083293 kubelet[2563]: I0128 01:15:35.081703 2563 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 28 01:15:35.085000 kubelet[2563]: I0128 01:15:35.083597 2563 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 01:15:35.085000 kubelet[2563]: W0128 01:15:35.083875 2563 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 28 01:15:35.087674 kubelet[2563]: W0128 01:15:35.087358 2563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jan 28 01:15:35.087674 kubelet[2563]: E0128 01:15:35.087532 2563 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:15:35.089902 kubelet[2563]: W0128 01:15:35.089592 2563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jan 28 01:15:35.090375 kubelet[2563]: E0128 01:15:35.089752 2563 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:15:35.095755 kubelet[2563]: I0128 01:15:35.095607 2563 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 01:15:35.096363 kubelet[2563]: I0128 01:15:35.095823 2563 server.go:1287] "Started kubelet" Jan 28 01:15:35.098927 kubelet[2563]: I0128 01:15:35.098744 2563 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:15:35.104583 kubelet[2563]: I0128 01:15:35.097426 2563 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:15:35.108767 kubelet[2563]: I0128 01:15:35.108532 2563 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:15:35.118297 kubelet[2563]: I0128 01:15:35.117767 2563 server.go:479] "Adding debug handlers to kubelet server" Jan 28 01:15:35.200517 kubelet[2563]: I0128 01:15:35.142892 2563 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:15:35.200517 kubelet[2563]: I0128 01:15:35.146420 2563 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:15:35.200517 kubelet[2563]: I0128 01:15:35.193327 2563 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 01:15:35.972146 kubelet[2563]: E0128 01:15:35.963674 2563 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:15:35.972146 kubelet[2563]: I0128 01:15:35.967221 2563 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 01:15:35.972146 kubelet[2563]: I0128 01:15:35.968381 2563 reconciler.go:26] "Reconciler: start to sync state" Jan 28 01:15:35.972146 kubelet[2563]: E0128 01:15:35.969634 2563 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.18:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.18:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ec01d2a13c5f6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:15:35.095707126 +0000 UTC m=+1.842443625,LastTimestamp:2026-01-28 01:15:35.095707126 +0000 UTC m=+1.842443625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:15:35.972146 kubelet[2563]: W0128 01:15:35.971653 2563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jan 28 01:15:35.972146 kubelet[2563]: E0128 01:15:35.971852 2563 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:15:35.978776 kubelet[2563]: E0128 01:15:35.978739 2563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="200ms" Jan 28 01:15:35.986474 kubelet[2563]: E0128 01:15:35.985262 2563 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:15:35.990128 kubelet[2563]: I0128 01:15:35.989735 2563 factory.go:221] Registration of the systemd container factory successfully Jan 28 01:15:35.990264 kubelet[2563]: I0128 01:15:35.990239 2563 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:15:35.994681 kubelet[2563]: I0128 01:15:35.994656 2563 factory.go:221] Registration of the containerd container factory successfully Jan 28 01:15:36.211296 kubelet[2563]: E0128 01:15:36.210722 2563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="400ms" Jan 28 01:15:36.216458 kubelet[2563]: E0128 01:15:36.076310 2563 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:15:36.216888 kubelet[2563]: W0128 01:15:36.214208 2563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jan 28 01:15:36.217420 kubelet[2563]: E0128 01:15:36.217371 2563 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:15:36.273901 kubelet[2563]: I0128 01:15:36.272313 2563 
kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 01:15:36.302138 kubelet[2563]: I0128 01:15:36.301486 2563 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 01:15:36.304892 kubelet[2563]: I0128 01:15:36.302623 2563 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 01:15:36.304892 kubelet[2563]: I0128 01:15:36.302752 2563 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 01:15:36.304892 kubelet[2563]: I0128 01:15:36.302766 2563 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 01:15:36.304892 kubelet[2563]: I0128 01:15:36.302883 2563 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:15:36.304892 kubelet[2563]: I0128 01:15:36.302898 2563 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:15:36.304892 kubelet[2563]: I0128 01:15:36.303107 2563 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:15:36.304892 kubelet[2563]: E0128 01:15:36.303293 2563 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:15:36.309842 kubelet[2563]: W0128 01:15:36.309738 2563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jan 28 01:15:36.309842 kubelet[2563]: E0128 01:15:36.309775 2563 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:15:36.317240 kubelet[2563]: E0128 01:15:36.317208 2563 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:15:36.331398 kubelet[2563]: I0128 01:15:36.330130 2563 policy_none.go:49] "None policy: Start" Jan 28 01:15:36.331398 kubelet[2563]: I0128 01:15:36.330167 2563 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 01:15:36.331398 kubelet[2563]: I0128 01:15:36.330185 2563 state_mem.go:35] "Initializing new in-memory state store" Jan 28 01:15:36.391337 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 28 01:15:36.405781 kubelet[2563]: E0128 01:15:36.404669 2563 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 01:15:36.418855 kubelet[2563]: E0128 01:15:36.418817 2563 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:15:36.431102 kubelet[2563]: W0128 01:15:36.430859 2563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jan 28 01:15:36.431299 kubelet[2563]: E0128 01:15:36.431113 2563 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:15:36.453500 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 01:15:36.479712 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 28 01:15:36.498907 kubelet[2563]: I0128 01:15:36.498872 2563 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 01:15:36.500361 kubelet[2563]: I0128 01:15:36.500240 2563 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:15:36.500829 kubelet[2563]: I0128 01:15:36.500779 2563 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:15:36.504742 kubelet[2563]: I0128 01:15:36.504714 2563 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:15:36.508159 kubelet[2563]: E0128 01:15:36.506788 2563 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 28 01:15:36.508159 kubelet[2563]: E0128 01:15:36.507433 2563 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:15:36.619094 kubelet[2563]: I0128 01:15:36.615666 2563 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:15:36.619434 kubelet[2563]: E0128 01:15:36.619337 2563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="800ms" Jan 28 01:15:36.619485 kubelet[2563]: E0128 01:15:36.619462 2563 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Jan 28 01:15:36.647835 systemd[1]: Created slice kubepods-burstable-poda819234f16ac055e589bf043a7ab3b5a.slice - libcontainer container kubepods-burstable-poda819234f16ac055e589bf043a7ab3b5a.slice. 
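The lease controller's retry interval doubles after each failed attempt: 200ms and 400ms earlier, 800ms here, then 1.6s, 3.2s and 6.4s further down. A toy model of that schedule (the cap value is an assumption for illustration, not something taken from the log):

    def lease_retry_intervals(base=0.2, factor=2.0, cap=7.0, attempts=6):
        # Yields the doubling retry delays seen in the "Failed to ensure lease exists" lines.
        delay = base
        for _ in range(attempts):
            yield min(delay, cap)
            delay *= factor

    print([round(d, 1) for d in lease_retry_intervals()])   # [0.2, 0.4, 0.8, 1.6, 3.2, 6.4]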
Jan 28 01:15:36.675859 kubelet[2563]: E0128 01:15:36.675565 2563 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:15:36.681376 kubelet[2563]: I0128 01:15:36.680287 2563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:15:36.681376 kubelet[2563]: I0128 01:15:36.680334 2563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:15:36.681376 kubelet[2563]: I0128 01:15:36.680367 2563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 28 01:15:36.683618 kubelet[2563]: I0128 01:15:36.683374 2563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a819234f16ac055e589bf043a7ab3b5a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a819234f16ac055e589bf043a7ab3b5a\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:15:36.683618 kubelet[2563]: I0128 01:15:36.683516 2563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:15:36.683618 kubelet[2563]: I0128 01:15:36.683542 2563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:15:36.684354 kubelet[2563]: I0128 01:15:36.683897 2563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:15:36.684404 kubelet[2563]: I0128 01:15:36.684372 2563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a819234f16ac055e589bf043a7ab3b5a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a819234f16ac055e589bf043a7ab3b5a\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:15:36.684404 kubelet[2563]: I0128 01:15:36.684396 2563 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a819234f16ac055e589bf043a7ab3b5a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a819234f16ac055e589bf043a7ab3b5a\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:15:36.687761 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. Jan 28 01:15:36.725478 kubelet[2563]: E0128 01:15:36.725350 2563 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:15:36.734708 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Jan 28 01:15:36.743285 kubelet[2563]: E0128 01:15:36.742878 2563 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:15:36.826749 kubelet[2563]: I0128 01:15:36.824853 2563 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:15:36.829775 kubelet[2563]: E0128 01:15:36.828351 2563 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Jan 28 01:15:36.940671 kubelet[2563]: W0128 01:15:36.939516 2563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jan 28 01:15:36.940671 kubelet[2563]: E0128 01:15:36.939644 2563 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:15:36.947844 kubelet[2563]: E0128 01:15:36.947517 2563 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:15:36.983131 kubelet[2563]: E0128 01:15:36.981843 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:15:36.984143 containerd[1594]: time="2026-01-28T01:15:36.983615245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a819234f16ac055e589bf043a7ab3b5a,Namespace:kube-system,Attempt:0,}" Jan 28 01:15:37.053338 kubelet[2563]: E0128 01:15:37.052237 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:15:37.055519 kubelet[2563]: E0128 01:15:37.055455 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:15:37.064339 containerd[1594]: time="2026-01-28T01:15:37.062253570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 28 01:15:37.076480 containerd[1594]: time="2026-01-28T01:15:37.076439144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 28 01:15:37.271476 kubelet[2563]: I0128 01:15:37.261198 2563 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:15:37.348473 kubelet[2563]: E0128 01:15:37.346545 2563 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Jan 28 01:15:37.453216 kubelet[2563]: E0128 01:15:37.451210 2563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="1.6s" Jan 28 01:15:37.667464 kubelet[2563]: W0128 01:15:37.666699 2563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jan 28 01:15:37.668376 kubelet[2563]: E0128 01:15:37.667363 2563 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:15:37.714555 containerd[1594]: time="2026-01-28T01:15:37.711781395Z" level=info msg="connecting to shim a5c9379aa7215f9d6aa885f32c3f92c517c08e5a2d6c914aa76053dda2876616" address="unix:///run/containerd/s/9259012898278871edc98fd0acacf5cf7cdcefb9ba4a3e9d177a1ce3cc3aa1ca" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:15:37.774137 containerd[1594]: time="2026-01-28T01:15:37.772460719Z" level=info msg="connecting to shim 1466b50bb71bc5946bdaaa849f9728b688bb8693cadf72e187ea51d526eeb53b" address="unix:///run/containerd/s/e73726ceb302ce1da33faddc904a44e1a64f76dabfaa90021479d36282d0910f" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:15:37.847459 containerd[1594]: time="2026-01-28T01:15:37.847398565Z" level=info msg="connecting to shim 3bdff992803e9885f07dd8ed9bf24d683da697f65135c187713adedcbc78af64" address="unix:///run/containerd/s/1f864374848d76d6bf0f27d8aecd4c2c56ebda5690fd53a4bc73c57ac12c6146" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:15:37.976259 systemd[1]: Started cri-containerd-a5c9379aa7215f9d6aa885f32c3f92c517c08e5a2d6c914aa76053dda2876616.scope - libcontainer container a5c9379aa7215f9d6aa885f32c3f92c517c08e5a2d6c914aa76053dda2876616. 
Jan 28 01:15:38.216575 kubelet[2563]: I0128 01:15:38.215820 2563 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:15:38.218661 kubelet[2563]: E0128 01:15:38.216854 2563 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Jan 28 01:15:38.433305 kubelet[2563]: W0128 01:15:38.431878 2563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jan 28 01:15:38.433305 kubelet[2563]: E0128 01:15:38.432736 2563 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:15:38.533369 systemd[1]: Started cri-containerd-1466b50bb71bc5946bdaaa849f9728b688bb8693cadf72e187ea51d526eeb53b.scope - libcontainer container 1466b50bb71bc5946bdaaa849f9728b688bb8693cadf72e187ea51d526eeb53b. Jan 28 01:15:38.605885 systemd[1]: Started cri-containerd-3bdff992803e9885f07dd8ed9bf24d683da697f65135c187713adedcbc78af64.scope - libcontainer container 3bdff992803e9885f07dd8ed9bf24d683da697f65135c187713adedcbc78af64. Jan 28 01:15:39.061689 kubelet[2563]: E0128 01:15:39.056648 2563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="3.2s" Jan 28 01:15:39.185440 kubelet[2563]: W0128 01:15:39.184663 2563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jan 28 01:15:39.185440 kubelet[2563]: E0128 01:15:39.184872 2563 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:15:39.223754 kubelet[2563]: W0128 01:15:39.223312 2563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jan 28 01:15:39.223754 kubelet[2563]: E0128 01:15:39.223538 2563 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:15:39.605527 containerd[1594]: time="2026-01-28T01:15:39.604896170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"a5c9379aa7215f9d6aa885f32c3f92c517c08e5a2d6c914aa76053dda2876616\"" Jan 28 01:15:39.623180 kubelet[2563]: E0128 01:15:39.616655 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:15:39.641230 containerd[1594]: time="2026-01-28T01:15:39.639211298Z" level=info msg="CreateContainer within sandbox \"a5c9379aa7215f9d6aa885f32c3f92c517c08e5a2d6c914aa76053dda2876616\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 01:15:39.818805 containerd[1594]: time="2026-01-28T01:15:39.816520304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a819234f16ac055e589bf043a7ab3b5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1466b50bb71bc5946bdaaa849f9728b688bb8693cadf72e187ea51d526eeb53b\"" Jan 28 01:15:39.830613 containerd[1594]: time="2026-01-28T01:15:39.829671385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bdff992803e9885f07dd8ed9bf24d683da697f65135c187713adedcbc78af64\"" Jan 28 01:15:39.844486 kubelet[2563]: I0128 01:15:39.844207 2563 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:15:39.851153 kubelet[2563]: E0128 01:15:39.848730 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:15:39.869412 kubelet[2563]: E0128 01:15:39.863544 2563 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Jan 28 01:15:39.869626 containerd[1594]: time="2026-01-28T01:15:39.867177880Z" level=info msg="CreateContainer within sandbox \"3bdff992803e9885f07dd8ed9bf24d683da697f65135c187713adedcbc78af64\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 01:15:39.875604 kubelet[2563]: E0128 01:15:39.875485 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:15:39.876176 containerd[1594]: time="2026-01-28T01:15:39.876142346Z" level=info msg="Container 182b20feb46c83eaecaf68abec0e4fc6ba9f826446dceb3a8e7a6771e9dce0a2: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:15:39.921153 containerd[1594]: time="2026-01-28T01:15:39.920735316Z" level=info msg="CreateContainer within sandbox \"1466b50bb71bc5946bdaaa849f9728b688bb8693cadf72e187ea51d526eeb53b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 01:15:39.928152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3629183607.mount: Deactivated successfully. 
Jan 28 01:15:39.955512 containerd[1594]: time="2026-01-28T01:15:39.953379461Z" level=info msg="Container 82280d1ff5cf05863c45a9df931e0889e25f3871ae213bf651917555cc8da9f4: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:15:39.955512 containerd[1594]: time="2026-01-28T01:15:39.954539232Z" level=info msg="CreateContainer within sandbox \"a5c9379aa7215f9d6aa885f32c3f92c517c08e5a2d6c914aa76053dda2876616\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"182b20feb46c83eaecaf68abec0e4fc6ba9f826446dceb3a8e7a6771e9dce0a2\"" Jan 28 01:15:39.957538 containerd[1594]: time="2026-01-28T01:15:39.956789967Z" level=info msg="StartContainer for \"182b20feb46c83eaecaf68abec0e4fc6ba9f826446dceb3a8e7a6771e9dce0a2\"" Jan 28 01:15:39.962287 containerd[1594]: time="2026-01-28T01:15:39.960739125Z" level=info msg="connecting to shim 182b20feb46c83eaecaf68abec0e4fc6ba9f826446dceb3a8e7a6771e9dce0a2" address="unix:///run/containerd/s/9259012898278871edc98fd0acacf5cf7cdcefb9ba4a3e9d177a1ce3cc3aa1ca" protocol=ttrpc version=3 Jan 28 01:15:40.016866 containerd[1594]: time="2026-01-28T01:15:40.016665217Z" level=info msg="CreateContainer within sandbox \"3bdff992803e9885f07dd8ed9bf24d683da697f65135c187713adedcbc78af64\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"82280d1ff5cf05863c45a9df931e0889e25f3871ae213bf651917555cc8da9f4\"" Jan 28 01:15:40.021777 containerd[1594]: time="2026-01-28T01:15:40.021735995Z" level=info msg="StartContainer for \"82280d1ff5cf05863c45a9df931e0889e25f3871ae213bf651917555cc8da9f4\"" Jan 28 01:15:40.034477 containerd[1594]: time="2026-01-28T01:15:40.034448294Z" level=info msg="connecting to shim 82280d1ff5cf05863c45a9df931e0889e25f3871ae213bf651917555cc8da9f4" address="unix:///run/containerd/s/1f864374848d76d6bf0f27d8aecd4c2c56ebda5690fd53a4bc73c57ac12c6146" protocol=ttrpc version=3 Jan 28 01:15:40.047440 containerd[1594]: time="2026-01-28T01:15:40.047295140Z" level=info msg="Container 6fd2da706ac3bdc6d5eec6d21feb672e450d6f6043e5ff2a054a77f73b31c516: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:15:40.086872 containerd[1594]: time="2026-01-28T01:15:40.082377683Z" level=info msg="CreateContainer within sandbox \"1466b50bb71bc5946bdaaa849f9728b688bb8693cadf72e187ea51d526eeb53b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6fd2da706ac3bdc6d5eec6d21feb672e450d6f6043e5ff2a054a77f73b31c516\"" Jan 28 01:15:40.086872 containerd[1594]: time="2026-01-28T01:15:40.086571928Z" level=info msg="StartContainer for \"6fd2da706ac3bdc6d5eec6d21feb672e450d6f6043e5ff2a054a77f73b31c516\"" Jan 28 01:15:40.091735 containerd[1594]: time="2026-01-28T01:15:40.090444676Z" level=info msg="connecting to shim 6fd2da706ac3bdc6d5eec6d21feb672e450d6f6043e5ff2a054a77f73b31c516" address="unix:///run/containerd/s/e73726ceb302ce1da33faddc904a44e1a64f76dabfaa90021479d36282d0910f" protocol=ttrpc version=3 Jan 28 01:15:40.161513 systemd[1]: Started cri-containerd-182b20feb46c83eaecaf68abec0e4fc6ba9f826446dceb3a8e7a6771e9dce0a2.scope - libcontainer container 182b20feb46c83eaecaf68abec0e4fc6ba9f826446dceb3a8e7a6771e9dce0a2. Jan 28 01:15:40.282646 systemd[1]: Started cri-containerd-6fd2da706ac3bdc6d5eec6d21feb672e450d6f6043e5ff2a054a77f73b31c516.scope - libcontainer container 6fd2da706ac3bdc6d5eec6d21feb672e450d6f6043e5ff2a054a77f73b31c516. 
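In the containerd lines above, a container's shim address matches the address of the sandbox it runs in (for example 182b20f… reuses the same /run/containerd/s/9259012… socket as sandbox a5c9379…). A small, illustrative parser that groups shim IDs by socket to make those pairings visible; the regex is an assumption about the message format, not containerd code:

    import re
    from collections import defaultdict

    SHIM_RE = re.compile(r'connecting to shim (\w+)" address="(unix://[^"]+)"')

    def shims_by_socket(journal_text: str) -> dict[str, list[str]]:
        groups: dict[str, list[str]] = defaultdict(list)
        for shim_id, address in SHIM_RE.findall(journal_text):
            groups[address].append(shim_id[:12])   # short IDs: one sandbox plus its containers
        return dict(groups)

    sample = 'msg="connecting to shim 182b20feb46c" address="unix:///run/containerd/s/9259012898278871"'
    print(shims_by_socket(sample))   # {'unix:///run/containerd/s/9259012898278871': ['182b20feb46c']}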
Jan 28 01:15:40.352427 systemd[1]: Started cri-containerd-82280d1ff5cf05863c45a9df931e0889e25f3871ae213bf651917555cc8da9f4.scope - libcontainer container 82280d1ff5cf05863c45a9df931e0889e25f3871ae213bf651917555cc8da9f4. Jan 28 01:15:40.690283 kubelet[2563]: W0128 01:15:40.677906 2563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jan 28 01:15:40.690283 kubelet[2563]: E0128 01:15:40.689466 2563 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:15:41.158783 kubelet[2563]: E0128 01:15:41.117251 2563 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:15:41.464799 containerd[1594]: time="2026-01-28T01:15:41.464302112Z" level=info msg="StartContainer for \"6fd2da706ac3bdc6d5eec6d21feb672e450d6f6043e5ff2a054a77f73b31c516\" returns successfully" Jan 28 01:15:41.525719 containerd[1594]: time="2026-01-28T01:15:41.525615379Z" level=info msg="StartContainer for \"182b20feb46c83eaecaf68abec0e4fc6ba9f826446dceb3a8e7a6771e9dce0a2\" returns successfully" Jan 28 01:15:41.732608 containerd[1594]: time="2026-01-28T01:15:41.724718037Z" level=info msg="StartContainer for \"82280d1ff5cf05863c45a9df931e0889e25f3871ae213bf651917555cc8da9f4\" returns successfully" Jan 28 01:15:42.140627 kubelet[2563]: E0128 01:15:42.130889 2563 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:15:42.140627 kubelet[2563]: E0128 01:15:42.131338 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:15:42.152025 kubelet[2563]: E0128 01:15:42.151492 2563 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:15:42.152025 kubelet[2563]: E0128 01:15:42.151872 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:15:42.155291 kubelet[2563]: E0128 01:15:42.154828 2563 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:15:42.155578 kubelet[2563]: E0128 01:15:42.155397 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:15:42.267238 kubelet[2563]: E0128 01:15:42.266866 2563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="6.4s" Jan 28 01:15:43.068340 kubelet[2563]: I0128 01:15:43.068191 2563 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:15:43.161880 kubelet[2563]: E0128 01:15:43.161618 2563 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:15:43.164230 kubelet[2563]: E0128 01:15:43.163808 2563 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:15:43.168228 kubelet[2563]: E0128 01:15:43.164186 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:15:43.168526 kubelet[2563]: E0128 01:15:43.168208 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:15:43.173904 kubelet[2563]: E0128 01:15:43.173773 2563 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:15:43.174905 kubelet[2563]: E0128 01:15:43.174784 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:15:44.348547 kubelet[2563]: E0128 01:15:44.348261 2563 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:15:44.354716 kubelet[2563]: E0128 01:15:44.354691 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:15:44.357069 kubelet[2563]: E0128 01:15:44.356267 2563 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:15:44.357271 kubelet[2563]: E0128 01:15:44.357253 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:15:46.151413 kubelet[2563]: E0128 01:15:46.146357 2563 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:15:46.156765 kubelet[2563]: E0128 01:15:46.146902 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:15:46.512363 kubelet[2563]: E0128 01:15:46.509656 2563 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:15:48.369498 kubelet[2563]: E0128 01:15:48.365897 2563 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:15:48.369498 kubelet[2563]: E0128 01:15:48.366876 2563 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:15:52.772848 kubelet[2563]: W0128 01:15:52.761476 2563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 28 01:15:52.772848 kubelet[2563]: E0128 01:15:52.770459 2563 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 28 01:15:52.970334 kubelet[2563]: E0128 01:15:52.967780 2563 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.18:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188ec01d2a13c5f6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:15:35.095707126 +0000 UTC m=+1.842443625,LastTimestamp:2026-01-28 01:15:35.095707126 +0000 UTC m=+1.842443625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:15:53.141362 kubelet[2563]: E0128 01:15:53.131139 2563 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 28 01:15:53.163633 kubelet[2563]: W0128 01:15:53.126701 2563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 28 01:15:53.186753 kubelet[2563]: E0128 01:15:53.181257 2563 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 28 01:15:54.432635 kubelet[2563]: W0128 01:15:54.431373 2563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 28 01:15:54.439812 kubelet[2563]: E0128 01:15:54.434861 2563 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 28 01:15:54.674265 kubelet[2563]: W0128 01:15:54.672859 2563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 28 01:15:54.677822 kubelet[2563]: E0128 01:15:54.676601 2563 
reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 28 01:15:56.647080 kubelet[2563]: E0128 01:15:56.644623 2563 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:15:57.754657 kubelet[2563]: E0128 01:15:57.753513 2563 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 28 01:15:58.447353 kubelet[2563]: E0128 01:15:58.446733 2563 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:15:58.449227 kubelet[2563]: E0128 01:15:58.448497 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:15:58.457803 kubelet[2563]: E0128 01:15:58.455711 2563 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:15:58.460862 kubelet[2563]: E0128 01:15:58.460122 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:15:59.006125 kubelet[2563]: E0128 01:15:58.997652 2563 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 28 01:15:59.553109 kubelet[2563]: I0128 01:15:59.552235 2563 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:15:59.817226 kubelet[2563]: I0128 01:15:59.796569 2563 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 01:15:59.819815 kubelet[2563]: E0128 01:15:59.819783 2563 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 28 01:16:00.194133 kubelet[2563]: E0128 01:16:00.191566 2563 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:16:00.306266 kubelet[2563]: E0128 01:16:00.305648 2563 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:16:00.431355 kubelet[2563]: E0128 01:16:00.423126 2563 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:16:00.579492 kubelet[2563]: I0128 01:16:00.570273 2563 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 01:16:01.136694 kubelet[2563]: I0128 01:16:01.135759 2563 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 01:16:01.261883 kubelet[2563]: I0128 01:16:01.259737 2563 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 01:16:01.483647 kubelet[2563]: I0128 01:16:01.433677 2563 apiserver.go:52] "Watching apiserver" Jan 28 01:16:01.850174 kubelet[2563]: E0128 01:16:01.843834 2563 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:01.850174 kubelet[2563]: E0128 01:16:01.844345 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:01.850174 kubelet[2563]: E0128 01:16:01.847871 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:01.874894 kubelet[2563]: I0128 01:16:01.874860 2563 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 01:16:08.332237 kubelet[2563]: I0128 01:16:08.331800 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.331416834 podStartE2EDuration="7.331416834s" podCreationTimestamp="2026-01-28 01:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:16:07.946273398 +0000 UTC m=+34.693009916" watchObservedRunningTime="2026-01-28 01:16:08.331416834 +0000 UTC m=+35.078153333" Jan 28 01:16:08.348468 kubelet[2563]: I0128 01:16:08.340886 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=8.340865258000001 podStartE2EDuration="8.340865258s" podCreationTimestamp="2026-01-28 01:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:16:08.290398761 +0000 UTC m=+35.037135279" watchObservedRunningTime="2026-01-28 01:16:08.340865258 +0000 UTC m=+35.087601776" Jan 28 01:16:16.782397 kubelet[2563]: E0128 01:16:16.778452 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:17.234377 kubelet[2563]: E0128 01:16:17.226191 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:17.568612 kubelet[2563]: I0128 01:16:17.558187 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=16.558163063 podStartE2EDuration="16.558163063s" podCreationTimestamp="2026-01-28 01:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:16:08.416432326 +0000 UTC m=+35.163168823" watchObservedRunningTime="2026-01-28 01:16:17.558163063 +0000 UTC m=+44.304899581" Jan 28 01:16:19.912469 kubelet[2563]: E0128 01:16:19.912270 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:20.355766 systemd[1]: Reload requested from client PID 2852 ('systemctl') (unit session-8.scope)... Jan 28 01:16:20.356847 systemd[1]: Reloading... Jan 28 01:16:21.592479 zram_generator::config[2901]: No configuration found. Jan 28 01:16:23.230285 systemd[1]: Reloading finished in 2871 ms. 
Jan 28 01:16:23.450334 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:16:23.534373 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 01:16:23.538277 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:16:23.538530 systemd[1]: kubelet.service: Consumed 13.894s CPU time, 138.6M memory peak. Jan 28 01:16:23.578734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:16:24.844258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:16:25.054592 (kubelet)[2942]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:16:26.016369 kubelet[2942]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:16:26.016369 kubelet[2942]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 01:16:26.016369 kubelet[2942]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:16:26.016369 kubelet[2942]: I0128 01:16:26.013721 2942 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:16:26.110754 kubelet[2942]: I0128 01:16:26.106747 2942 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 01:16:26.110754 kubelet[2942]: I0128 01:16:26.106793 2942 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:16:26.111746 kubelet[2942]: I0128 01:16:26.110816 2942 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 01:16:26.120836 kubelet[2942]: I0128 01:16:26.120549 2942 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 28 01:16:26.167838 kubelet[2942]: I0128 01:16:26.167659 2942 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:16:26.313788 kubelet[2942]: I0128 01:16:26.311371 2942 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 01:16:26.418394 kubelet[2942]: I0128 01:16:26.417525 2942 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 01:16:26.418528 kubelet[2942]: I0128 01:16:26.418433 2942 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:16:26.425313 kubelet[2942]: I0128 01:16:26.418484 2942 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 01:16:26.425313 kubelet[2942]: I0128 01:16:26.422368 2942 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 01:16:26.425313 kubelet[2942]: I0128 01:16:26.422391 2942 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 01:16:26.425313 kubelet[2942]: I0128 01:16:26.422462 2942 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:16:26.425313 kubelet[2942]: I0128 01:16:26.422643 2942 kubelet.go:446] "Attempting to sync node with API server" Jan 28 01:16:26.434351 kubelet[2942]: I0128 01:16:26.422671 2942 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:16:26.434351 kubelet[2942]: I0128 01:16:26.422705 2942 kubelet.go:352] "Adding apiserver pod source" Jan 28 01:16:26.434351 kubelet[2942]: I0128 01:16:26.422720 2942 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:16:26.493780 kubelet[2942]: I0128 01:16:26.493718 2942 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 28 01:16:26.521140 kubelet[2942]: I0128 01:16:26.520651 2942 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 01:16:26.578429 kubelet[2942]: I0128 01:16:26.560541 2942 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 01:16:26.578429 kubelet[2942]: I0128 01:16:26.571405 2942 server.go:1287] "Started kubelet" Jan 28 01:16:26.578429 kubelet[2942]: I0128 01:16:26.572295 2942 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:16:26.578429 kubelet[2942]: I0128 01:16:26.574263 2942 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:16:26.578429 kubelet[2942]: I0128 01:16:26.574552 2942 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:16:26.599609 kubelet[2942]: I0128 01:16:26.599580 2942 server.go:479] "Adding debug handlers to kubelet server" Jan 28 01:16:26.616323 kubelet[2942]: I0128 01:16:26.616296 2942 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:16:26.636179 kubelet[2942]: I0128 01:16:26.628883 2942 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:16:26.660445 kubelet[2942]: I0128 01:16:26.660409 2942 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 01:16:26.660900 kubelet[2942]: E0128 01:16:26.660879 2942 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:16:26.661585 kubelet[2942]: I0128 01:16:26.661566 2942 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 01:16:26.665560 kubelet[2942]: I0128 01:16:26.665541 2942 reconciler.go:26] "Reconciler: start to sync state" Jan 28 01:16:26.671323 kubelet[2942]: I0128 01:16:26.671298 2942 factory.go:221] Registration of the systemd container factory successfully Jan 28 01:16:26.673284 kubelet[2942]: I0128 01:16:26.673258 2942 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:16:27.296743 kubelet[2942]: E0128 01:16:27.296698 2942 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:16:27.453733 kubelet[2942]: I0128 01:16:27.453577 2942 apiserver.go:52] "Watching apiserver" Jan 28 01:16:27.458568 kubelet[2942]: I0128 01:16:27.458204 2942 factory.go:221] Registration of the containerd container factory successfully Jan 28 01:16:27.905312 kubelet[2942]: I0128 01:16:27.904650 2942 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 01:16:27.954344 kubelet[2942]: I0128 01:16:27.954308 2942 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 01:16:27.956629 kubelet[2942]: I0128 01:16:27.956607 2942 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 01:16:27.956750 kubelet[2942]: I0128 01:16:27.956732 2942 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
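Editor's note on the container_manager_linux.go NodeConfig dump a few entries above: the HardEvictionThresholds it carries are easier to read as the evictionHard map of a kubelet configuration. The Go sketch below just re-expresses the logged values (imagefs.available 15%, imagefs.inodesFree 5%, memory.available 100Mi, nodefs.available 10%, nodefs.inodesFree 5%); the rendering is illustrative and not taken from kubelet source.

package main

import "fmt"

func main() {
	// Same thresholds as the HardEvictionThresholds in the logged NodeConfig,
	// spelled the way an evictionHard stanza would write them.
	evictionHard := map[string]string{
		"imagefs.available":  "15%",
		"imagefs.inodesFree": "5%",
		"memory.available":   "100Mi",
		"nodefs.available":   "10%",
		"nodefs.inodesFree":  "5%",
	}
	for signal, threshold := range evictionHard {
		fmt.Printf("evict when %s < %s\n", signal, threshold)
	}
}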
Jan 28 01:16:27.956841 kubelet[2942]: I0128 01:16:27.956825 2942 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 01:16:27.957353 kubelet[2942]: E0128 01:16:27.957323 2942 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:16:28.064650 kubelet[2942]: E0128 01:16:28.064418 2942 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 01:16:28.269325 kubelet[2942]: E0128 01:16:28.265908 2942 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 01:16:28.338356 kubelet[2942]: I0128 01:16:28.333530 2942 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:16:28.338356 kubelet[2942]: I0128 01:16:28.333657 2942 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:16:28.338356 kubelet[2942]: I0128 01:16:28.333826 2942 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:16:28.338356 kubelet[2942]: I0128 01:16:28.335303 2942 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 01:16:28.338356 kubelet[2942]: I0128 01:16:28.335324 2942 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 01:16:28.338356 kubelet[2942]: I0128 01:16:28.335351 2942 policy_none.go:49] "None policy: Start" Jan 28 01:16:28.338356 kubelet[2942]: I0128 01:16:28.335365 2942 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 01:16:28.338356 kubelet[2942]: I0128 01:16:28.335381 2942 state_mem.go:35] "Initializing new in-memory state store" Jan 28 01:16:28.338356 kubelet[2942]: I0128 01:16:28.335602 2942 state_mem.go:75] "Updated machine memory state" Jan 28 01:16:28.411205 kubelet[2942]: I0128 01:16:28.407236 2942 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 01:16:28.411205 kubelet[2942]: I0128 01:16:28.407595 2942 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:16:28.411205 kubelet[2942]: I0128 01:16:28.407611 2942 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:16:28.415339 kubelet[2942]: I0128 01:16:28.414251 2942 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:16:28.427181 kubelet[2942]: E0128 01:16:28.426430 2942 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 28 01:16:28.664484 kubelet[2942]: I0128 01:16:28.616857 2942 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 01:16:28.664484 kubelet[2942]: I0128 01:16:28.658286 2942 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 01:16:28.665266 containerd[1594]: time="2026-01-28T01:16:28.655904947Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
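Editor's note on the last two entries above: the kubelet pushes the node's pod CIDR (192.168.0.0/24) to the runtime, and containerd then waits for "other system components to drop the config", i.e. for a CNI config file to appear under /etc/cni/net.d. A hedged sketch of what that drop typically amounts to for a flannel deployment follows; the file name and JSON are the conventional flannel conflist and are assumptions, not something recorded in this log.

package main

import (
	"fmt"
	"os"
)

// Illustrative only: conventional kube-flannel conflist, consistent with the
// cbr0 / hairpinMode / isDefaultGateway values seen later in this log.
const cniConf = `{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/10-flannel.conflist", []byte(cniConf), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CNI config dropped; containerd picks it up when it next reloads its CNI config")
}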
Jan 28 01:16:28.833397 kubelet[2942]: I0128 01:16:28.828180 2942 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 01:16:29.051418 kubelet[2942]: I0128 01:16:29.028822 2942 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:16:29.051418 kubelet[2942]: I0128 01:16:29.045519 2942 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:16:29.051418 kubelet[2942]: I0128 01:16:29.045605 2942 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:16:29.051418 kubelet[2942]: I0128 01:16:29.045711 2942 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/98a96929-0f0c-4080-9491-504cbb5df26c-kube-proxy\") pod \"kube-proxy-72bwx\" (UID: \"98a96929-0f0c-4080-9491-504cbb5df26c\") " pod="kube-system/kube-proxy-72bwx" Jan 28 01:16:29.051418 kubelet[2942]: I0128 01:16:29.045741 2942 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98a96929-0f0c-4080-9491-504cbb5df26c-lib-modules\") pod \"kube-proxy-72bwx\" (UID: \"98a96929-0f0c-4080-9491-504cbb5df26c\") " pod="kube-system/kube-proxy-72bwx" Jan 28 01:16:29.055866 kubelet[2942]: I0128 01:16:29.045769 2942 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xlzg\" (UniqueName: \"kubernetes.io/projected/98a96929-0f0c-4080-9491-504cbb5df26c-kube-api-access-5xlzg\") pod \"kube-proxy-72bwx\" (UID: \"98a96929-0f0c-4080-9491-504cbb5df26c\") " pod="kube-system/kube-proxy-72bwx" Jan 28 01:16:29.055866 kubelet[2942]: I0128 01:16:29.045836 2942 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a819234f16ac055e589bf043a7ab3b5a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a819234f16ac055e589bf043a7ab3b5a\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:16:29.055866 kubelet[2942]: I0128 01:16:29.045867 2942 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a819234f16ac055e589bf043a7ab3b5a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a819234f16ac055e589bf043a7ab3b5a\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:16:29.083660 systemd[1]: Created slice kubepods-besteffort-pod98a96929_0f0c_4080_9491_504cbb5df26c.slice - libcontainer container kubepods-besteffort-pod98a96929_0f0c_4080_9491_504cbb5df26c.slice. 
Jan 28 01:16:29.119677 kubelet[2942]: I0128 01:16:29.113059 2942 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:16:29.139797 kubelet[2942]: I0128 01:16:29.139578 2942 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 28 01:16:29.139797 kubelet[2942]: I0128 01:16:29.139633 2942 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98a96929-0f0c-4080-9491-504cbb5df26c-xtables-lock\") pod \"kube-proxy-72bwx\" (UID: \"98a96929-0f0c-4080-9491-504cbb5df26c\") " pod="kube-system/kube-proxy-72bwx" Jan 28 01:16:29.139797 kubelet[2942]: I0128 01:16:29.139665 2942 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a819234f16ac055e589bf043a7ab3b5a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a819234f16ac055e589bf043a7ab3b5a\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:16:29.139797 kubelet[2942]: I0128 01:16:29.139701 2942 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:16:29.171865 kubelet[2942]: I0128 01:16:29.169491 2942 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:16:29.601572 kubelet[2942]: E0128 01:16:29.601523 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:29.611758 kubelet[2942]: E0128 01:16:29.611725 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:29.637593 kubelet[2942]: E0128 01:16:29.633493 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:29.694262 kubelet[2942]: I0128 01:16:29.689445 2942 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 28 01:16:29.694262 kubelet[2942]: I0128 01:16:29.689643 2942 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 01:16:29.824513 kubelet[2942]: E0128 01:16:29.789258 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:29.827644 containerd[1594]: time="2026-01-28T01:16:29.824752106Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-72bwx,Uid:98a96929-0f0c-4080-9491-504cbb5df26c,Namespace:kube-system,Attempt:0,}" Jan 28 01:16:30.453812 kubelet[2942]: E0128 01:16:30.446621 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:30.453812 kubelet[2942]: E0128 01:16:30.447437 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:30.453812 kubelet[2942]: E0128 01:16:30.452891 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:30.471391 containerd[1594]: time="2026-01-28T01:16:30.470331601Z" level=info msg="connecting to shim cd48b5d4b5ef8d74df65e7fefb3f33b15fce11af512e5db16337a251f208528f" address="unix:///run/containerd/s/59e4712f83b81476e3ee53deac5a005f46fc993a448d1b54bcb6c526d42c4aa1" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:16:31.215548 systemd[1]: Started cri-containerd-cd48b5d4b5ef8d74df65e7fefb3f33b15fce11af512e5db16337a251f208528f.scope - libcontainer container cd48b5d4b5ef8d74df65e7fefb3f33b15fce11af512e5db16337a251f208528f. Jan 28 01:16:31.489578 kubelet[2942]: E0128 01:16:31.486468 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:31.489578 kubelet[2942]: E0128 01:16:31.488370 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:32.147155 containerd[1594]: time="2026-01-28T01:16:32.144230599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-72bwx,Uid:98a96929-0f0c-4080-9491-504cbb5df26c,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd48b5d4b5ef8d74df65e7fefb3f33b15fce11af512e5db16337a251f208528f\"" Jan 28 01:16:32.155335 kubelet[2942]: E0128 01:16:32.153748 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:32.182607 containerd[1594]: time="2026-01-28T01:16:32.182422675Z" level=info msg="CreateContainer within sandbox \"cd48b5d4b5ef8d74df65e7fefb3f33b15fce11af512e5db16337a251f208528f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 01:16:32.355770 containerd[1594]: time="2026-01-28T01:16:32.352880119Z" level=info msg="Container 28d57802c0757d78c8d6d565ea2417507cbac3ec8d13ca18dbb99a70e13f578c: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:16:32.555724 containerd[1594]: time="2026-01-28T01:16:32.553754859Z" level=info msg="CreateContainer within sandbox \"cd48b5d4b5ef8d74df65e7fefb3f33b15fce11af512e5db16337a251f208528f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"28d57802c0757d78c8d6d565ea2417507cbac3ec8d13ca18dbb99a70e13f578c\"" Jan 28 01:16:32.658159 containerd[1594]: time="2026-01-28T01:16:32.652140249Z" level=info msg="StartContainer for \"28d57802c0757d78c8d6d565ea2417507cbac3ec8d13ca18dbb99a70e13f578c\"" Jan 28 01:16:32.829629 containerd[1594]: time="2026-01-28T01:16:32.810884013Z" level=info msg="connecting to shim 
28d57802c0757d78c8d6d565ea2417507cbac3ec8d13ca18dbb99a70e13f578c" address="unix:///run/containerd/s/59e4712f83b81476e3ee53deac5a005f46fc993a448d1b54bcb6c526d42c4aa1" protocol=ttrpc version=3 Jan 28 01:16:32.893253 kubelet[2942]: E0128 01:16:32.891909 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:33.102615 systemd[1]: Started cri-containerd-28d57802c0757d78c8d6d565ea2417507cbac3ec8d13ca18dbb99a70e13f578c.scope - libcontainer container 28d57802c0757d78c8d6d565ea2417507cbac3ec8d13ca18dbb99a70e13f578c. Jan 28 01:16:34.031553 containerd[1594]: time="2026-01-28T01:16:34.028826980Z" level=info msg="StartContainer for \"28d57802c0757d78c8d6d565ea2417507cbac3ec8d13ca18dbb99a70e13f578c\" returns successfully" Jan 28 01:16:34.747532 systemd[1]: Created slice kubepods-burstable-podbbd09740_6e2a_4501_a65f_76030bcaeb07.slice - libcontainer container kubepods-burstable-podbbd09740_6e2a_4501_a65f_76030bcaeb07.slice. Jan 28 01:16:34.782559 kubelet[2942]: I0128 01:16:34.780411 2942 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bbd09740-6e2a-4501-a65f-76030bcaeb07-run\") pod \"kube-flannel-ds-58xmj\" (UID: \"bbd09740-6e2a-4501-a65f-76030bcaeb07\") " pod="kube-flannel/kube-flannel-ds-58xmj" Jan 28 01:16:34.815460 kubelet[2942]: I0128 01:16:34.798593 2942 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/bbd09740-6e2a-4501-a65f-76030bcaeb07-flannel-cfg\") pod \"kube-flannel-ds-58xmj\" (UID: \"bbd09740-6e2a-4501-a65f-76030bcaeb07\") " pod="kube-flannel/kube-flannel-ds-58xmj" Jan 28 01:16:34.815460 kubelet[2942]: I0128 01:16:34.815429 2942 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbd09740-6e2a-4501-a65f-76030bcaeb07-xtables-lock\") pod \"kube-flannel-ds-58xmj\" (UID: \"bbd09740-6e2a-4501-a65f-76030bcaeb07\") " pod="kube-flannel/kube-flannel-ds-58xmj" Jan 28 01:16:34.815702 kubelet[2942]: I0128 01:16:34.815483 2942 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2669\" (UniqueName: \"kubernetes.io/projected/bbd09740-6e2a-4501-a65f-76030bcaeb07-kube-api-access-j2669\") pod \"kube-flannel-ds-58xmj\" (UID: \"bbd09740-6e2a-4501-a65f-76030bcaeb07\") " pod="kube-flannel/kube-flannel-ds-58xmj" Jan 28 01:16:34.815702 kubelet[2942]: I0128 01:16:34.815601 2942 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/bbd09740-6e2a-4501-a65f-76030bcaeb07-cni-plugin\") pod \"kube-flannel-ds-58xmj\" (UID: \"bbd09740-6e2a-4501-a65f-76030bcaeb07\") " pod="kube-flannel/kube-flannel-ds-58xmj" Jan 28 01:16:34.815702 kubelet[2942]: I0128 01:16:34.815625 2942 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/bbd09740-6e2a-4501-a65f-76030bcaeb07-cni\") pod \"kube-flannel-ds-58xmj\" (UID: \"bbd09740-6e2a-4501-a65f-76030bcaeb07\") " pod="kube-flannel/kube-flannel-ds-58xmj" Jan 28 01:16:35.201182 kubelet[2942]: E0128 01:16:35.198622 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:35.453448 kubelet[2942]: E0128 01:16:35.446494 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:35.459898 containerd[1594]: time="2026-01-28T01:16:35.459851654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-58xmj,Uid:bbd09740-6e2a-4501-a65f-76030bcaeb07,Namespace:kube-flannel,Attempt:0,}" Jan 28 01:16:35.722700 sudo[1779]: pam_unix(sudo:session): session closed for user root Jan 28 01:16:35.793277 sshd[1778]: Connection closed by 10.0.0.1 port 58798 Jan 28 01:16:35.810677 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Jan 28 01:16:35.865795 systemd[1]: sshd@6-10.0.0.18:22-10.0.0.1:58798.service: Deactivated successfully. Jan 28 01:16:35.866874 systemd-logind[1566]: Session 8 logged out. Waiting for processes to exit. Jan 28 01:16:35.898738 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 01:16:35.903281 systemd[1]: session-8.scope: Consumed 25.305s CPU time, 218.1M memory peak. Jan 28 01:16:35.911276 containerd[1594]: time="2026-01-28T01:16:35.910824326Z" level=info msg="connecting to shim 254e15744c98ede0c4e0648b1a9a9b558f83c2391f9d7a7453d3fdf687ec0b30" address="unix:///run/containerd/s/15bb6a0626bbdf01d5f274489dff40bd08ce47710bd429fddd8d04a7c1a4e685" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:16:35.914227 systemd-logind[1566]: Removed session 8. Jan 28 01:16:36.309839 kubelet[2942]: E0128 01:16:36.307441 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:36.403854 systemd[1]: Started cri-containerd-254e15744c98ede0c4e0648b1a9a9b558f83c2391f9d7a7453d3fdf687ec0b30.scope - libcontainer container 254e15744c98ede0c4e0648b1a9a9b558f83c2391f9d7a7453d3fdf687ec0b30. 
Jan 28 01:16:36.484557 kubelet[2942]: E0128 01:16:36.484391 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:36.654733 kubelet[2942]: I0128 01:16:36.649854 2942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-72bwx" podStartSLOduration=9.649829391 podStartE2EDuration="9.649829391s" podCreationTimestamp="2026-01-28 01:16:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:16:35.44251379 +0000 UTC m=+10.181107528" watchObservedRunningTime="2026-01-28 01:16:36.649829391 +0000 UTC m=+11.388423119" Jan 28 01:16:36.924262 containerd[1594]: time="2026-01-28T01:16:36.922814739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-58xmj,Uid:bbd09740-6e2a-4501-a65f-76030bcaeb07,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"254e15744c98ede0c4e0648b1a9a9b558f83c2391f9d7a7453d3fdf687ec0b30\"" Jan 28 01:16:36.934247 kubelet[2942]: E0128 01:16:36.932634 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:36.946499 containerd[1594]: time="2026-01-28T01:16:36.943662044Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 28 01:16:37.326462 kubelet[2942]: E0128 01:16:37.326343 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:38.380907 kubelet[2942]: E0128 01:16:38.380861 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:39.458441 kubelet[2942]: E0128 01:16:39.458408 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:44.138421 kubelet[2942]: E0128 01:16:44.138210 2942 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.134s" Jan 28 01:16:44.994852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2638263974.mount: Deactivated successfully. 
Jan 28 01:16:45.487430 containerd[1594]: time="2026-01-28T01:16:45.486547537Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:16:45.500603 containerd[1594]: time="2026-01-28T01:16:45.498844785Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=2829605" Jan 28 01:16:45.507838 containerd[1594]: time="2026-01-28T01:16:45.507807902Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:16:45.528595 containerd[1594]: time="2026-01-28T01:16:45.528550855Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:16:45.535512 containerd[1594]: time="2026-01-28T01:16:45.535382518Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 8.591557701s" Jan 28 01:16:45.535512 containerd[1594]: time="2026-01-28T01:16:45.535419559Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 28 01:16:45.558438 containerd[1594]: time="2026-01-28T01:16:45.557471233Z" level=info msg="CreateContainer within sandbox \"254e15744c98ede0c4e0648b1a9a9b558f83c2391f9d7a7453d3fdf687ec0b30\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 28 01:16:45.642758 containerd[1594]: time="2026-01-28T01:16:45.640783246Z" level=info msg="Container 1caa626b13cac2ad496a9fb11a71097032906f872af870141f7b67d3032fb110: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:16:45.644766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1859982702.mount: Deactivated successfully. Jan 28 01:16:45.752391 containerd[1594]: time="2026-01-28T01:16:45.749768578Z" level=info msg="CreateContainer within sandbox \"254e15744c98ede0c4e0648b1a9a9b558f83c2391f9d7a7453d3fdf687ec0b30\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"1caa626b13cac2ad496a9fb11a71097032906f872af870141f7b67d3032fb110\"" Jan 28 01:16:45.779388 containerd[1594]: time="2026-01-28T01:16:45.778760421Z" level=info msg="StartContainer for \"1caa626b13cac2ad496a9fb11a71097032906f872af870141f7b67d3032fb110\"" Jan 28 01:16:45.807766 containerd[1594]: time="2026-01-28T01:16:45.788655225Z" level=info msg="connecting to shim 1caa626b13cac2ad496a9fb11a71097032906f872af870141f7b67d3032fb110" address="unix:///run/containerd/s/15bb6a0626bbdf01d5f274489dff40bd08ce47710bd429fddd8d04a7c1a4e685" protocol=ttrpc version=3 Jan 28 01:16:46.000670 systemd[1]: Started cri-containerd-1caa626b13cac2ad496a9fb11a71097032906f872af870141f7b67d3032fb110.scope - libcontainer container 1caa626b13cac2ad496a9fb11a71097032906f872af870141f7b67d3032fb110. Jan 28 01:16:46.521395 systemd[1]: cri-containerd-1caa626b13cac2ad496a9fb11a71097032906f872af870141f7b67d3032fb110.scope: Deactivated successfully. 
Jan 28 01:16:46.543374 containerd[1594]: time="2026-01-28T01:16:46.542504411Z" level=info msg="StartContainer for \"1caa626b13cac2ad496a9fb11a71097032906f872af870141f7b67d3032fb110\" returns successfully" Jan 28 01:16:46.857774 containerd[1594]: time="2026-01-28T01:16:46.856277518Z" level=info msg="received container exit event container_id:\"1caa626b13cac2ad496a9fb11a71097032906f872af870141f7b67d3032fb110\" id:\"1caa626b13cac2ad496a9fb11a71097032906f872af870141f7b67d3032fb110\" pid:3272 exited_at:{seconds:1769563006 nanos:684772023}" Jan 28 01:16:47.612756 kubelet[2942]: E0128 01:16:47.610802 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:47.728612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1caa626b13cac2ad496a9fb11a71097032906f872af870141f7b67d3032fb110-rootfs.mount: Deactivated successfully. Jan 28 01:16:48.693235 kubelet[2942]: E0128 01:16:48.692910 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:16:48.710230 containerd[1594]: time="2026-01-28T01:16:48.709897187Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 28 01:16:55.927849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3745506312.mount: Deactivated successfully. Jan 28 01:17:18.737748 systemd[1671]: Created slice background.slice - User Background Tasks Slice. Jan 28 01:17:18.795244 systemd[1671]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Jan 28 01:17:19.246910 systemd[1671]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. 
Jan 28 01:17:28.626758 containerd[1594]: time="2026-01-28T01:17:28.620247581Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:17:28.680616 containerd[1594]: time="2026-01-28T01:17:28.643353727Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=25890682" Jan 28 01:17:28.724442 containerd[1594]: time="2026-01-28T01:17:28.720295237Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:17:28.785740 containerd[1594]: time="2026-01-28T01:17:28.785683299Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:17:28.818762 containerd[1594]: time="2026-01-28T01:17:28.818703920Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 40.107796227s" Jan 28 01:17:28.819568 containerd[1594]: time="2026-01-28T01:17:28.819531901Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 28 01:17:28.884478 containerd[1594]: time="2026-01-28T01:17:28.873145730Z" level=info msg="CreateContainer within sandbox \"254e15744c98ede0c4e0648b1a9a9b558f83c2391f9d7a7453d3fdf687ec0b30\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 28 01:17:29.257171 containerd[1594]: time="2026-01-28T01:17:29.253739550Z" level=info msg="Container e51e7e543f6f38e81676e20f29b0d0b091a5a6ed49502bfa0e466acd2f3a3994: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:17:29.344363 containerd[1594]: time="2026-01-28T01:17:29.342282052Z" level=info msg="CreateContainer within sandbox \"254e15744c98ede0c4e0648b1a9a9b558f83c2391f9d7a7453d3fdf687ec0b30\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e51e7e543f6f38e81676e20f29b0d0b091a5a6ed49502bfa0e466acd2f3a3994\"" Jan 28 01:17:29.346621 containerd[1594]: time="2026-01-28T01:17:29.346483885Z" level=info msg="StartContainer for \"e51e7e543f6f38e81676e20f29b0d0b091a5a6ed49502bfa0e466acd2f3a3994\"" Jan 28 01:17:29.362846 containerd[1594]: time="2026-01-28T01:17:29.360674248Z" level=info msg="connecting to shim e51e7e543f6f38e81676e20f29b0d0b091a5a6ed49502bfa0e466acd2f3a3994" address="unix:///run/containerd/s/15bb6a0626bbdf01d5f274489dff40bd08ce47710bd429fddd8d04a7c1a4e685" protocol=ttrpc version=3 Jan 28 01:17:29.706455 systemd[1]: Started cri-containerd-e51e7e543f6f38e81676e20f29b0d0b091a5a6ed49502bfa0e466acd2f3a3994.scope - libcontainer container e51e7e543f6f38e81676e20f29b0d0b091a5a6ed49502bfa0e466acd2f3a3994. Jan 28 01:17:30.511594 systemd[1]: cri-containerd-e51e7e543f6f38e81676e20f29b0d0b091a5a6ed49502bfa0e466acd2f3a3994.scope: Deactivated successfully. 
Jan 28 01:17:30.532541 containerd[1594]: time="2026-01-28T01:17:30.531840859Z" level=info msg="received container exit event container_id:\"e51e7e543f6f38e81676e20f29b0d0b091a5a6ed49502bfa0e466acd2f3a3994\" id:\"e51e7e543f6f38e81676e20f29b0d0b091a5a6ed49502bfa0e466acd2f3a3994\" pid:3376 exited_at:{seconds:1769563050 nanos:515698875}" Jan 28 01:17:30.549850 containerd[1594]: time="2026-01-28T01:17:30.549811671Z" level=info msg="StartContainer for \"e51e7e543f6f38e81676e20f29b0d0b091a5a6ed49502bfa0e466acd2f3a3994\" returns successfully" Jan 28 01:17:30.565904 kubelet[2942]: I0128 01:17:30.554408 2942 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 28 01:17:31.247870 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e51e7e543f6f38e81676e20f29b0d0b091a5a6ed49502bfa0e466acd2f3a3994-rootfs.mount: Deactivated successfully. Jan 28 01:17:31.279749 systemd[1]: Created slice kubepods-burstable-podb6ec76db_6158_4fb2_b646_481e788ea9ed.slice - libcontainer container kubepods-burstable-podb6ec76db_6158_4fb2_b646_481e788ea9ed.slice. Jan 28 01:17:31.334693 kubelet[2942]: I0128 01:17:31.322258 2942 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qhb5\" (UniqueName: \"kubernetes.io/projected/cd342cfc-7215-4ace-9397-25260f45fe50-kube-api-access-6qhb5\") pod \"coredns-668d6bf9bc-mtr7g\" (UID: \"cd342cfc-7215-4ace-9397-25260f45fe50\") " pod="kube-system/coredns-668d6bf9bc-mtr7g" Jan 28 01:17:31.335657 kubelet[2942]: I0128 01:17:31.335628 2942 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6ec76db-6158-4fb2-b646-481e788ea9ed-config-volume\") pod \"coredns-668d6bf9bc-7kb6t\" (UID: \"b6ec76db-6158-4fb2-b646-481e788ea9ed\") " pod="kube-system/coredns-668d6bf9bc-7kb6t" Jan 28 01:17:31.337130 kubelet[2942]: I0128 01:17:31.337095 2942 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqsw5\" (UniqueName: \"kubernetes.io/projected/b6ec76db-6158-4fb2-b646-481e788ea9ed-kube-api-access-tqsw5\") pod \"coredns-668d6bf9bc-7kb6t\" (UID: \"b6ec76db-6158-4fb2-b646-481e788ea9ed\") " pod="kube-system/coredns-668d6bf9bc-7kb6t" Jan 28 01:17:31.337283 kubelet[2942]: I0128 01:17:31.337259 2942 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd342cfc-7215-4ace-9397-25260f45fe50-config-volume\") pod \"coredns-668d6bf9bc-mtr7g\" (UID: \"cd342cfc-7215-4ace-9397-25260f45fe50\") " pod="kube-system/coredns-668d6bf9bc-mtr7g" Jan 28 01:17:31.341872 systemd[1]: Created slice kubepods-burstable-podcd342cfc_7215_4ace_9397_25260f45fe50.slice - libcontainer container kubepods-burstable-podcd342cfc_7215_4ace_9397_25260f45fe50.slice. 
Jan 28 01:17:31.463708 kubelet[2942]: E0128 01:17:31.462212 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:17:31.478254 containerd[1594]: time="2026-01-28T01:17:31.477885267Z" level=info msg="CreateContainer within sandbox \"254e15744c98ede0c4e0648b1a9a9b558f83c2391f9d7a7453d3fdf687ec0b30\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 28 01:17:31.613445 containerd[1594]: time="2026-01-28T01:17:31.609257501Z" level=info msg="Container c3f069cd5776baa79caedb4da8cddd872b4e8316435601273adeef056f6b7736: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:17:31.639352 kubelet[2942]: E0128 01:17:31.636726 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:17:31.642727 containerd[1594]: time="2026-01-28T01:17:31.642299601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7kb6t,Uid:b6ec76db-6158-4fb2-b646-481e788ea9ed,Namespace:kube-system,Attempt:0,}" Jan 28 01:17:31.688365 kubelet[2942]: E0128 01:17:31.688293 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:17:31.723886 containerd[1594]: time="2026-01-28T01:17:31.723816948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mtr7g,Uid:cd342cfc-7215-4ace-9397-25260f45fe50,Namespace:kube-system,Attempt:0,}" Jan 28 01:17:31.734352 containerd[1594]: time="2026-01-28T01:17:31.725329015Z" level=info msg="CreateContainer within sandbox \"254e15744c98ede0c4e0648b1a9a9b558f83c2391f9d7a7453d3fdf687ec0b30\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"c3f069cd5776baa79caedb4da8cddd872b4e8316435601273adeef056f6b7736\"" Jan 28 01:17:31.756137 containerd[1594]: time="2026-01-28T01:17:31.751882825Z" level=info msg="StartContainer for \"c3f069cd5776baa79caedb4da8cddd872b4e8316435601273adeef056f6b7736\"" Jan 28 01:17:31.756137 containerd[1594]: time="2026-01-28T01:17:31.755788772Z" level=info msg="connecting to shim c3f069cd5776baa79caedb4da8cddd872b4e8316435601273adeef056f6b7736" address="unix:///run/containerd/s/15bb6a0626bbdf01d5f274489dff40bd08ce47710bd429fddd8d04a7c1a4e685" protocol=ttrpc version=3 Jan 28 01:17:32.073667 systemd[1]: Started cri-containerd-c3f069cd5776baa79caedb4da8cddd872b4e8316435601273adeef056f6b7736.scope - libcontainer container c3f069cd5776baa79caedb4da8cddd872b4e8316435601273adeef056f6b7736. 
Jan 28 01:17:32.084900 containerd[1594]: time="2026-01-28T01:17:32.083170937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mtr7g,Uid:cd342cfc-7215-4ace-9397-25260f45fe50,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f44aa646d387f9653952a46f3b3390f521f3884b1ad54b0e22e7476ce15e35fa\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 28 01:17:32.107167 kubelet[2942]: E0128 01:17:32.103252 2942 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f44aa646d387f9653952a46f3b3390f521f3884b1ad54b0e22e7476ce15e35fa\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 28 01:17:32.107167 kubelet[2942]: E0128 01:17:32.103361 2942 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f44aa646d387f9653952a46f3b3390f521f3884b1ad54b0e22e7476ce15e35fa\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-mtr7g" Jan 28 01:17:32.107167 kubelet[2942]: E0128 01:17:32.103395 2942 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f44aa646d387f9653952a46f3b3390f521f3884b1ad54b0e22e7476ce15e35fa\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-mtr7g" Jan 28 01:17:32.107167 kubelet[2942]: E0128 01:17:32.103448 2942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mtr7g_kube-system(cd342cfc-7215-4ace-9397-25260f45fe50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mtr7g_kube-system(cd342cfc-7215-4ace-9397-25260f45fe50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f44aa646d387f9653952a46f3b3390f521f3884b1ad54b0e22e7476ce15e35fa\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-mtr7g" podUID="cd342cfc-7215-4ace-9397-25260f45fe50" Jan 28 01:17:32.152227 containerd[1594]: time="2026-01-28T01:17:32.151555395Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7kb6t,Uid:b6ec76db-6158-4fb2-b646-481e788ea9ed,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cf55999f44336a8ea2b5d1a2bca8213cca0625a4ef179a8723537e258a64667\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 28 01:17:32.152565 kubelet[2942]: E0128 01:17:32.152502 2942 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cf55999f44336a8ea2b5d1a2bca8213cca0625a4ef179a8723537e258a64667\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 28 01:17:32.152565 kubelet[2942]: E0128 01:17:32.152685 2942 kuberuntime_sandbox.go:72] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cf55999f44336a8ea2b5d1a2bca8213cca0625a4ef179a8723537e258a64667\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-7kb6t" Jan 28 01:17:32.152565 kubelet[2942]: E0128 01:17:32.152719 2942 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cf55999f44336a8ea2b5d1a2bca8213cca0625a4ef179a8723537e258a64667\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-7kb6t" Jan 28 01:17:32.155406 kubelet[2942]: E0128 01:17:32.152771 2942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7kb6t_kube-system(b6ec76db-6158-4fb2-b646-481e788ea9ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7kb6t_kube-system(b6ec76db-6158-4fb2-b646-481e788ea9ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9cf55999f44336a8ea2b5d1a2bca8213cca0625a4ef179a8723537e258a64667\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-7kb6t" podUID="b6ec76db-6158-4fb2-b646-481e788ea9ed" Jan 28 01:17:33.042850 containerd[1594]: time="2026-01-28T01:17:33.037596993Z" level=info msg="StartContainer for \"c3f069cd5776baa79caedb4da8cddd872b4e8316435601273adeef056f6b7736\" returns successfully" Jan 28 01:17:33.774367 kubelet[2942]: E0128 01:17:33.774329 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:17:34.799865 kubelet[2942]: E0128 01:17:34.791206 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:17:35.100551 systemd-networkd[1487]: flannel.1: Link UP Jan 28 01:17:35.100566 systemd-networkd[1487]: flannel.1: Gained carrier Jan 28 01:17:36.162737 systemd-networkd[1487]: flannel.1: Gained IPv6LL Jan 28 01:17:40.971416 kubelet[2942]: E0128 01:17:40.963623 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:17:40.993688 kubelet[2942]: E0128 01:17:40.993530 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:17:41.990763 kubelet[2942]: E0128 01:17:41.970194 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:17:41.995609 kubelet[2942]: E0128 01:17:41.995472 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:17:42.966283 kubelet[2942]: E0128 01:17:42.962329 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:17:42.971100 containerd[1594]: time="2026-01-28T01:17:42.967087854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7kb6t,Uid:b6ec76db-6158-4fb2-b646-481e788ea9ed,Namespace:kube-system,Attempt:0,}" Jan 28 01:17:43.291668 systemd-networkd[1487]: cni0: Link UP Jan 28 01:17:43.291683 systemd-networkd[1487]: cni0: Gained carrier Jan 28 01:17:43.311873 systemd-networkd[1487]: cni0: Lost carrier Jan 28 01:17:43.800270 systemd-networkd[1487]: veth115dd5fe: Link UP Jan 28 01:17:43.853870 kernel: cni0: port 1(veth115dd5fe) entered blocking state Jan 28 01:17:43.857487 kernel: cni0: port 1(veth115dd5fe) entered disabled state Jan 28 01:17:43.887658 kernel: veth115dd5fe: entered allmulticast mode Jan 28 01:17:43.901559 kernel: veth115dd5fe: entered promiscuous mode Jan 28 01:17:43.966250 kernel: cni0: port 1(veth115dd5fe) entered blocking state Jan 28 01:17:43.966596 kernel: cni0: port 1(veth115dd5fe) entered forwarding state Jan 28 01:17:43.966364 systemd-networkd[1487]: veth115dd5fe: Gained carrier Jan 28 01:17:43.970533 systemd-networkd[1487]: cni0: Gained carrier Jan 28 01:17:44.013287 containerd[1594]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Jan 28 01:17:44.013287 containerd[1594]: delegateAdd: netconf sent to delegate plugin: Jan 28 01:17:44.432384 containerd[1594]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-28T01:17:44.432083258Z" level=info msg="connecting to shim 152256252b64ae6a6987beb213a48793f7e33d795900b57efea13ccc0d937bc9" address="unix:///run/containerd/s/22b6e700f6ea9b8cae079b22a4b9634a2b5450efce62d2ee348a58531db738a9" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:17:44.783211 systemd[1]: Started cri-containerd-152256252b64ae6a6987beb213a48793f7e33d795900b57efea13ccc0d937bc9.scope - libcontainer container 152256252b64ae6a6987beb213a48793f7e33d795900b57efea13ccc0d937bc9. 
Jan 28 01:17:44.803271 systemd-networkd[1487]: cni0: Gained IPv6LL Jan 28 01:17:44.999531 kubelet[2942]: E0128 01:17:44.981648 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:17:45.132629 containerd[1594]: time="2026-01-28T01:17:45.131416836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mtr7g,Uid:cd342cfc-7215-4ace-9397-25260f45fe50,Namespace:kube-system,Attempt:0,}" Jan 28 01:17:45.358384 systemd-resolved[1276]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:17:45.800692 systemd-networkd[1487]: veth115dd5fe: Gained IPv6LL Jan 28 01:17:45.852198 systemd-networkd[1487]: vetha81d6cb1: Link UP Jan 28 01:17:46.284344 kernel: cni0: port 2(vetha81d6cb1) entered blocking state Jan 28 01:17:46.289597 kernel: cni0: port 2(vetha81d6cb1) entered disabled state Jan 28 01:17:46.289662 kernel: vetha81d6cb1: entered allmulticast mode Jan 28 01:17:46.291245 kernel: vetha81d6cb1: entered promiscuous mode Jan 28 01:17:47.111538 kernel: cni0: port 2(vetha81d6cb1) entered blocking state Jan 28 01:17:47.112341 kernel: cni0: port 2(vetha81d6cb1) entered forwarding state Jan 28 01:17:47.117618 systemd-networkd[1487]: vetha81d6cb1: Gained carrier Jan 28 01:17:47.163157 containerd[1594]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000020938), "name":"cbr0", "type":"bridge"} Jan 28 01:17:47.163157 containerd[1594]: delegateAdd: netconf sent to delegate plugin: Jan 28 01:17:47.251260 containerd[1594]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-28T01:17:47.250207060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7kb6t,Uid:b6ec76db-6158-4fb2-b646-481e788ea9ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"152256252b64ae6a6987beb213a48793f7e33d795900b57efea13ccc0d937bc9\"" Jan 28 01:17:47.270054 kubelet[2942]: E0128 01:17:47.269590 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:17:47.310066 containerd[1594]: time="2026-01-28T01:17:47.309479610Z" level=info msg="CreateContainer within sandbox \"152256252b64ae6a6987beb213a48793f7e33d795900b57efea13ccc0d937bc9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:17:47.647875 containerd[1594]: time="2026-01-28T01:17:47.645376326Z" level=info msg="Container 798059a41471679b0ff0dbc20dc375458522af2aff6ef0d7f83674198efe53d5: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:17:47.656850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1159318985.mount: Deactivated successfully. 
Jan 28 01:17:48.053897 containerd[1594]: time="2026-01-28T01:17:48.053733100Z" level=info msg="CreateContainer within sandbox \"152256252b64ae6a6987beb213a48793f7e33d795900b57efea13ccc0d937bc9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"798059a41471679b0ff0dbc20dc375458522af2aff6ef0d7f83674198efe53d5\"" Jan 28 01:17:48.077289 containerd[1594]: time="2026-01-28T01:17:48.077234118Z" level=info msg="StartContainer for \"798059a41471679b0ff0dbc20dc375458522af2aff6ef0d7f83674198efe53d5\"" Jan 28 01:17:48.104529 containerd[1594]: time="2026-01-28T01:17:48.104095933Z" level=info msg="connecting to shim 798059a41471679b0ff0dbc20dc375458522af2aff6ef0d7f83674198efe53d5" address="unix:///run/containerd/s/22b6e700f6ea9b8cae079b22a4b9634a2b5450efce62d2ee348a58531db738a9" protocol=ttrpc version=3 Jan 28 01:17:48.262575 containerd[1594]: time="2026-01-28T01:17:48.262519398Z" level=info msg="connecting to shim de9c318180be2e3e8de3d385e634f52806294612d2987837fd691f16b2d4c17a" address="unix:///run/containerd/s/8db406f0996cef0dd7cf285852ad7ac4c5c3e3345d0760d46b35688c18ada118" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:17:48.436120 systemd[1]: Started cri-containerd-798059a41471679b0ff0dbc20dc375458522af2aff6ef0d7f83674198efe53d5.scope - libcontainer container 798059a41471679b0ff0dbc20dc375458522af2aff6ef0d7f83674198efe53d5. Jan 28 01:17:48.648707 systemd[1]: Started cri-containerd-de9c318180be2e3e8de3d385e634f52806294612d2987837fd691f16b2d4c17a.scope - libcontainer container de9c318180be2e3e8de3d385e634f52806294612d2987837fd691f16b2d4c17a. Jan 28 01:17:48.868869 systemd-resolved[1276]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:17:48.900098 systemd-networkd[1487]: vetha81d6cb1: Gained IPv6LL Jan 28 01:17:49.328101 containerd[1594]: time="2026-01-28T01:17:49.320510989Z" level=info msg="StartContainer for \"798059a41471679b0ff0dbc20dc375458522af2aff6ef0d7f83674198efe53d5\" returns successfully" Jan 28 01:17:49.834783 containerd[1594]: time="2026-01-28T01:17:49.833719450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mtr7g,Uid:cd342cfc-7215-4ace-9397-25260f45fe50,Namespace:kube-system,Attempt:0,} returns sandbox id \"de9c318180be2e3e8de3d385e634f52806294612d2987837fd691f16b2d4c17a\"" Jan 28 01:17:49.875819 kubelet[2942]: E0128 01:17:49.875781 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:17:50.181683 kubelet[2942]: E0128 01:17:50.170653 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:17:50.271175 containerd[1594]: time="2026-01-28T01:17:50.267731113Z" level=info msg="CreateContainer within sandbox \"de9c318180be2e3e8de3d385e634f52806294612d2987837fd691f16b2d4c17a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:17:50.816341 kubelet[2942]: I0128 01:17:50.751575 2942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-58xmj" podStartSLOduration=24.848710367 podStartE2EDuration="1m16.751551556s" podCreationTimestamp="2026-01-28 01:16:34 +0000 UTC" firstStartedPulling="2026-01-28 01:16:36.939778769 +0000 UTC m=+11.678372485" lastFinishedPulling="2026-01-28 01:17:28.842619957 +0000 UTC m=+63.581213674" observedRunningTime="2026-01-28 
01:17:34.423576488 +0000 UTC m=+69.162170296" watchObservedRunningTime="2026-01-28 01:17:50.751551556 +0000 UTC m=+85.490145273" Jan 28 01:17:50.816341 kubelet[2942]: I0128 01:17:50.751728 2942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7kb6t" podStartSLOduration=83.751718733 podStartE2EDuration="1m23.751718733s" podCreationTimestamp="2026-01-28 01:16:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:17:50.631558422 +0000 UTC m=+85.370152139" watchObservedRunningTime="2026-01-28 01:17:50.751718733 +0000 UTC m=+85.490312490" Jan 28 01:17:50.817156 containerd[1594]: time="2026-01-28T01:17:50.805336156Z" level=info msg="Container d1e994e823d1da6df24c44d34a93760869e9babbb13ac909afe02ac26e5d8d95: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:17:50.969869 containerd[1594]: time="2026-01-28T01:17:50.969812418Z" level=info msg="CreateContainer within sandbox \"de9c318180be2e3e8de3d385e634f52806294612d2987837fd691f16b2d4c17a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d1e994e823d1da6df24c44d34a93760869e9babbb13ac909afe02ac26e5d8d95\"" Jan 28 01:17:51.123800 containerd[1594]: time="2026-01-28T01:17:51.104265850Z" level=info msg="StartContainer for \"d1e994e823d1da6df24c44d34a93760869e9babbb13ac909afe02ac26e5d8d95\"" Jan 28 01:17:51.175579 containerd[1594]: time="2026-01-28T01:17:51.170210667Z" level=info msg="connecting to shim d1e994e823d1da6df24c44d34a93760869e9babbb13ac909afe02ac26e5d8d95" address="unix:///run/containerd/s/8db406f0996cef0dd7cf285852ad7ac4c5c3e3345d0760d46b35688c18ada118" protocol=ttrpc version=3 Jan 28 01:17:51.425253 kubelet[2942]: E0128 01:17:51.418574 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:17:51.880422 systemd[1]: Started cri-containerd-d1e994e823d1da6df24c44d34a93760869e9babbb13ac909afe02ac26e5d8d95.scope - libcontainer container d1e994e823d1da6df24c44d34a93760869e9babbb13ac909afe02ac26e5d8d95. 
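The pod_startup_latency_tracker entry for kube-flannel/kube-flannel-ds-58xmj above reports podStartE2EDuration="1m16.751551556s" but podStartSLOduration=24.848710367; the difference matches the image-pull window between firstStartedPulling (m=+11.678372485) and lastFinishedPulling (m=+63.581213674). A short sketch of that arithmetic, assuming the SLO figure is simply the end-to-end duration minus the pull window (which is what these numbers work out to; the kubelet's exact accounting is not shown in the log):

    # Monotonic offsets (seconds) copied from the kube-flannel-ds-58xmj entry above.
    first_started_pulling = 11.678372485
    last_finished_pulling = 63.581213674
    e2e_duration = 76.751551556          # podStartE2EDuration = 1m16.751551556s

    pull_window = last_finished_pulling - first_started_pulling  # ~51.903 s spent pulling images
    slo_duration = e2e_duration - pull_window                    # ~24.849 s, matching podStartSLOduration
    print(f"pull window: {pull_window:.9f} s, SLO duration: {slo_duration:.9f} s")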
Jan 28 01:17:52.580049 containerd[1594]: time="2026-01-28T01:17:52.576743105Z" level=info msg="StartContainer for \"d1e994e823d1da6df24c44d34a93760869e9babbb13ac909afe02ac26e5d8d95\" returns successfully" Jan 28 01:17:53.518298 kubelet[2942]: E0128 01:17:53.513486 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:17:53.907440 kubelet[2942]: I0128 01:17:53.895115 2942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mtr7g" podStartSLOduration=86.894898686 podStartE2EDuration="1m26.894898686s" podCreationTimestamp="2026-01-28 01:16:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:17:53.835138415 +0000 UTC m=+88.573732131" watchObservedRunningTime="2026-01-28 01:17:53.894898686 +0000 UTC m=+88.633492402" Jan 28 01:17:54.589276 kubelet[2942]: E0128 01:17:54.588515 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:17:55.609394 kubelet[2942]: E0128 01:17:55.608642 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:18:01.432106 kubelet[2942]: E0128 01:18:01.431838 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:18:30.690412 systemd[1]: Started sshd@7-10.0.0.18:22-10.0.0.1:39894.service - OpenSSH per-connection server daemon (10.0.0.1:39894). Jan 28 01:18:31.455567 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 39894 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:18:31.465171 sshd-session[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:18:31.541330 systemd-logind[1566]: New session 9 of user core. Jan 28 01:18:31.560216 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 28 01:18:33.083791 sshd[4052]: Connection closed by 10.0.0.1 port 39894 Jan 28 01:18:33.088678 sshd-session[4048]: pam_unix(sshd:session): session closed for user core Jan 28 01:18:33.162327 systemd[1]: sshd@7-10.0.0.18:22-10.0.0.1:39894.service: Deactivated successfully. Jan 28 01:18:33.196659 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 01:18:33.212516 systemd-logind[1566]: Session 9 logged out. Waiting for processes to exit. Jan 28 01:18:33.242433 systemd-logind[1566]: Removed session 9. Jan 28 01:18:38.179232 systemd[1]: Started sshd@8-10.0.0.18:22-10.0.0.1:50682.service - OpenSSH per-connection server daemon (10.0.0.1:50682). Jan 28 01:18:38.762298 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 50682 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:18:38.778169 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:18:38.836757 systemd-logind[1566]: New session 10 of user core. Jan 28 01:18:38.876450 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 28 01:18:40.686241 sshd[4096]: Connection closed by 10.0.0.1 port 50682 Jan 28 01:18:40.689831 sshd-session[4090]: pam_unix(sshd:session): session closed for user core Jan 28 01:18:40.705764 systemd[1]: sshd@8-10.0.0.18:22-10.0.0.1:50682.service: Deactivated successfully. Jan 28 01:18:40.741635 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 01:18:40.777872 systemd-logind[1566]: Session 10 logged out. Waiting for processes to exit. Jan 28 01:18:40.818277 systemd-logind[1566]: Removed session 10. Jan 28 01:18:45.928713 systemd[1]: Started sshd@9-10.0.0.18:22-10.0.0.1:45458.service - OpenSSH per-connection server daemon (10.0.0.1:45458). Jan 28 01:18:46.843279 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 45458 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:18:46.880498 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:18:46.958306 systemd-logind[1566]: New session 11 of user core. Jan 28 01:18:46.996294 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 28 01:18:48.429134 sshd[4155]: Connection closed by 10.0.0.1 port 45458 Jan 28 01:18:48.430477 sshd-session[4150]: pam_unix(sshd:session): session closed for user core Jan 28 01:18:48.474596 systemd[1]: sshd@9-10.0.0.18:22-10.0.0.1:45458.service: Deactivated successfully. Jan 28 01:18:48.474662 systemd-logind[1566]: Session 11 logged out. Waiting for processes to exit. Jan 28 01:18:48.505690 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 01:18:48.525602 systemd-logind[1566]: Removed session 11. Jan 28 01:18:51.991613 kubelet[2942]: E0128 01:18:51.987418 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:18:52.029758 kubelet[2942]: E0128 01:18:52.028791 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:18:53.474471 systemd[1]: Started sshd@10-10.0.0.18:22-10.0.0.1:34440.service - OpenSSH per-connection server daemon (10.0.0.1:34440). Jan 28 01:18:53.885326 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 34440 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:18:53.911670 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:18:53.977683 kubelet[2942]: E0128 01:18:53.970190 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:18:54.046638 systemd-logind[1566]: New session 12 of user core. Jan 28 01:18:54.066475 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 28 01:18:55.585481 sshd[4195]: Connection closed by 10.0.0.1 port 34440 Jan 28 01:18:55.585450 sshd-session[4191]: pam_unix(sshd:session): session closed for user core Jan 28 01:18:55.677587 systemd[1]: sshd@10-10.0.0.18:22-10.0.0.1:34440.service: Deactivated successfully. Jan 28 01:18:55.699688 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 01:18:55.723549 systemd-logind[1566]: Session 12 logged out. Waiting for processes to exit. Jan 28 01:18:55.750868 systemd-logind[1566]: Removed session 12. 
Jan 28 01:19:00.737545 systemd[1]: Started sshd@11-10.0.0.18:22-10.0.0.1:34442.service - OpenSSH per-connection server daemon (10.0.0.1:34442). Jan 28 01:19:01.230208 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 34442 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:19:01.233861 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:01.276699 systemd-logind[1566]: New session 13 of user core. Jan 28 01:19:01.315558 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 28 01:19:02.982395 kubelet[2942]: E0128 01:19:02.979901 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:03.210772 sshd[4241]: Connection closed by 10.0.0.1 port 34442 Jan 28 01:19:03.209430 sshd-session[4231]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:03.278821 systemd[1]: sshd@11-10.0.0.18:22-10.0.0.1:34442.service: Deactivated successfully. Jan 28 01:19:03.303523 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 01:19:03.334327 systemd-logind[1566]: Session 13 logged out. Waiting for processes to exit. Jan 28 01:19:03.348159 systemd-logind[1566]: Removed session 13. Jan 28 01:19:03.971410 kubelet[2942]: E0128 01:19:03.970719 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:06.980503 kubelet[2942]: E0128 01:19:06.977609 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:08.266489 systemd[1]: Started sshd@12-10.0.0.18:22-10.0.0.1:33950.service - OpenSSH per-connection server daemon (10.0.0.1:33950). Jan 28 01:19:09.334228 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 33950 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:19:09.368395 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:09.507490 systemd-logind[1566]: New session 14 of user core. Jan 28 01:19:09.525472 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 28 01:19:10.998606 sshd[4300]: Connection closed by 10.0.0.1 port 33950 Jan 28 01:19:11.002439 sshd-session[4296]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:11.049692 systemd[1]: sshd@12-10.0.0.18:22-10.0.0.1:33950.service: Deactivated successfully. Jan 28 01:19:11.070415 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 01:19:11.078674 systemd-logind[1566]: Session 14 logged out. Waiting for processes to exit. Jan 28 01:19:11.095659 systemd-logind[1566]: Removed session 14. Jan 28 01:19:11.968454 kubelet[2942]: E0128 01:19:11.961260 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:16.281784 systemd[1]: Started sshd@13-10.0.0.18:22-10.0.0.1:59116.service - OpenSSH per-connection server daemon (10.0.0.1:59116). 
Jan 28 01:19:16.722886 sshd[4337]: Accepted publickey for core from 10.0.0.1 port 59116 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:19:16.733483 sshd-session[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:16.863432 systemd-logind[1566]: New session 15 of user core. Jan 28 01:19:16.908525 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 28 01:19:18.487436 sshd[4343]: Connection closed by 10.0.0.1 port 59116 Jan 28 01:19:18.489498 sshd-session[4337]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:18.528779 systemd[1]: sshd@13-10.0.0.18:22-10.0.0.1:59116.service: Deactivated successfully. Jan 28 01:19:18.534518 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 01:19:18.543469 systemd-logind[1566]: Session 15 logged out. Waiting for processes to exit. Jan 28 01:19:18.574545 systemd[1]: Started sshd@14-10.0.0.18:22-10.0.0.1:59124.service - OpenSSH per-connection server daemon (10.0.0.1:59124). Jan 28 01:19:18.582741 systemd-logind[1566]: Removed session 15. Jan 28 01:19:19.227613 sshd[4376]: Accepted publickey for core from 10.0.0.1 port 59124 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:19:19.248620 sshd-session[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:19.504789 systemd-logind[1566]: New session 16 of user core. Jan 28 01:19:19.598505 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 28 01:19:21.800608 sshd[4381]: Connection closed by 10.0.0.1 port 59124 Jan 28 01:19:21.797215 sshd-session[4376]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:21.920856 systemd[1]: sshd@14-10.0.0.18:22-10.0.0.1:59124.service: Deactivated successfully. Jan 28 01:19:22.014555 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 01:19:22.045758 systemd-logind[1566]: Session 16 logged out. Waiting for processes to exit. Jan 28 01:19:22.077812 systemd[1]: Started sshd@15-10.0.0.18:22-10.0.0.1:59140.service - OpenSSH per-connection server daemon (10.0.0.1:59140). Jan 28 01:19:22.088310 systemd-logind[1566]: Removed session 16. Jan 28 01:19:22.611329 sshd[4392]: Accepted publickey for core from 10.0.0.1 port 59140 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:19:22.637776 sshd-session[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:22.763686 systemd-logind[1566]: New session 17 of user core. Jan 28 01:19:22.779144 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 28 01:19:24.353404 sshd[4399]: Connection closed by 10.0.0.1 port 59140 Jan 28 01:19:24.356593 sshd-session[4392]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:24.421454 systemd[1]: sshd@15-10.0.0.18:22-10.0.0.1:59140.service: Deactivated successfully. Jan 28 01:19:24.661817 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 01:19:24.697406 systemd-logind[1566]: Session 17 logged out. Waiting for processes to exit. Jan 28 01:19:24.703666 systemd-logind[1566]: Removed session 17. Jan 28 01:19:29.508531 systemd[1]: Started sshd@16-10.0.0.18:22-10.0.0.1:34288.service - OpenSSH per-connection server daemon (10.0.0.1:34288). 
Jan 28 01:19:30.009442 sshd[4446]: Accepted publickey for core from 10.0.0.1 port 34288 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:19:30.015141 sshd-session[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:30.065384 systemd-logind[1566]: New session 18 of user core. Jan 28 01:19:30.092303 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 28 01:19:30.679789 sshd[4457]: Connection closed by 10.0.0.1 port 34288 Jan 28 01:19:30.682160 sshd-session[4446]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:30.736876 systemd[1]: sshd@16-10.0.0.18:22-10.0.0.1:34288.service: Deactivated successfully. Jan 28 01:19:30.747256 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 01:19:30.753430 systemd-logind[1566]: Session 18 logged out. Waiting for processes to exit. Jan 28 01:19:30.765627 systemd-logind[1566]: Removed session 18. Jan 28 01:19:35.750386 systemd[1]: Started sshd@17-10.0.0.18:22-10.0.0.1:60650.service - OpenSSH per-connection server daemon (10.0.0.1:60650). Jan 28 01:19:36.171275 sshd[4491]: Accepted publickey for core from 10.0.0.1 port 60650 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:19:36.189657 sshd-session[4491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:36.220095 systemd-logind[1566]: New session 19 of user core. Jan 28 01:19:36.245722 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 28 01:19:36.832050 sshd[4495]: Connection closed by 10.0.0.1 port 60650 Jan 28 01:19:36.834325 sshd-session[4491]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:36.856794 systemd[1]: sshd@17-10.0.0.18:22-10.0.0.1:60650.service: Deactivated successfully. Jan 28 01:19:36.870569 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 01:19:36.888897 systemd-logind[1566]: Session 19 logged out. Waiting for processes to exit. Jan 28 01:19:36.895235 systemd-logind[1566]: Removed session 19. Jan 28 01:19:41.894575 systemd[1]: Started sshd@18-10.0.0.18:22-10.0.0.1:60664.service - OpenSSH per-connection server daemon (10.0.0.1:60664). Jan 28 01:19:42.200822 sshd[4532]: Accepted publickey for core from 10.0.0.1 port 60664 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:19:42.207332 sshd-session[4532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:42.244462 systemd-logind[1566]: New session 20 of user core. Jan 28 01:19:42.294651 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 28 01:19:42.954315 sshd[4536]: Connection closed by 10.0.0.1 port 60664 Jan 28 01:19:42.948673 sshd-session[4532]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:42.961655 systemd[1]: sshd@18-10.0.0.18:22-10.0.0.1:60664.service: Deactivated successfully. Jan 28 01:19:42.966382 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 01:19:42.973836 systemd-logind[1566]: Session 20 logged out. Waiting for processes to exit. Jan 28 01:19:42.980768 systemd-logind[1566]: Removed session 20. Jan 28 01:19:48.069729 systemd[1]: Started sshd@19-10.0.0.18:22-10.0.0.1:60426.service - OpenSSH per-connection server daemon (10.0.0.1:60426). 
Jan 28 01:19:48.532715 sshd[4570]: Accepted publickey for core from 10.0.0.1 port 60426 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:19:48.546862 sshd-session[4570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:48.602480 systemd-logind[1566]: New session 21 of user core. Jan 28 01:19:48.623477 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 28 01:19:49.254183 sshd[4574]: Connection closed by 10.0.0.1 port 60426 Jan 28 01:19:49.255230 sshd-session[4570]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:49.299412 systemd-logind[1566]: Session 21 logged out. Waiting for processes to exit. Jan 28 01:19:49.311038 systemd[1]: sshd@19-10.0.0.18:22-10.0.0.1:60426.service: Deactivated successfully. Jan 28 01:19:49.349078 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 01:19:49.404824 systemd-logind[1566]: Removed session 21. Jan 28 01:19:54.332277 systemd[1]: Started sshd@20-10.0.0.18:22-10.0.0.1:49002.service - OpenSSH per-connection server daemon (10.0.0.1:49002). Jan 28 01:19:54.657820 sshd[4609]: Accepted publickey for core from 10.0.0.1 port 49002 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:19:54.663271 sshd-session[4609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:19:54.714787 systemd-logind[1566]: New session 22 of user core. Jan 28 01:19:54.755119 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 28 01:19:55.237454 sshd[4619]: Connection closed by 10.0.0.1 port 49002 Jan 28 01:19:55.235144 sshd-session[4609]: pam_unix(sshd:session): session closed for user core Jan 28 01:19:55.254733 systemd[1]: sshd@20-10.0.0.18:22-10.0.0.1:49002.service: Deactivated successfully. Jan 28 01:19:55.259689 systemd[1]: session-22.scope: Deactivated successfully. Jan 28 01:19:55.272185 systemd-logind[1566]: Session 22 logged out. Waiting for processes to exit. Jan 28 01:19:55.278770 systemd-logind[1566]: Removed session 22. Jan 28 01:20:00.301428 systemd[1]: Started sshd@21-10.0.0.18:22-10.0.0.1:49014.service - OpenSSH per-connection server daemon (10.0.0.1:49014). Jan 28 01:20:00.589300 sshd[4654]: Accepted publickey for core from 10.0.0.1 port 49014 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:20:00.590402 sshd-session[4654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:00.634026 systemd-logind[1566]: New session 23 of user core. Jan 28 01:20:00.646262 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 28 01:20:01.022796 sshd[4658]: Connection closed by 10.0.0.1 port 49014 Jan 28 01:20:01.023312 sshd-session[4654]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:01.057900 systemd[1]: sshd@21-10.0.0.18:22-10.0.0.1:49014.service: Deactivated successfully. Jan 28 01:20:01.065501 systemd[1]: session-23.scope: Deactivated successfully. Jan 28 01:20:01.070602 systemd-logind[1566]: Session 23 logged out. Waiting for processes to exit. Jan 28 01:20:01.081372 systemd-logind[1566]: Removed session 23. Jan 28 01:20:06.111409 systemd[1]: Started sshd@22-10.0.0.18:22-10.0.0.1:47328.service - OpenSSH per-connection server daemon (10.0.0.1:47328). 
Jan 28 01:20:06.373833 sshd[4705]: Accepted publickey for core from 10.0.0.1 port 47328 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:20:06.390280 sshd-session[4705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:06.438114 systemd-logind[1566]: New session 24 of user core. Jan 28 01:20:06.462801 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 28 01:20:06.960046 kubelet[2942]: E0128 01:20:06.959161 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:20:06.961338 kubelet[2942]: E0128 01:20:06.961314 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:20:06.979037 sshd[4712]: Connection closed by 10.0.0.1 port 47328 Jan 28 01:20:06.980257 sshd-session[4705]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:07.020810 systemd[1]: sshd@22-10.0.0.18:22-10.0.0.1:47328.service: Deactivated successfully. Jan 28 01:20:07.026595 systemd[1]: session-24.scope: Deactivated successfully. Jan 28 01:20:07.031260 systemd-logind[1566]: Session 24 logged out. Waiting for processes to exit. Jan 28 01:20:07.038485 systemd[1]: Started sshd@23-10.0.0.18:22-10.0.0.1:47340.service - OpenSSH per-connection server daemon (10.0.0.1:47340). Jan 28 01:20:07.040295 systemd-logind[1566]: Removed session 24. Jan 28 01:20:07.347066 sshd[4725]: Accepted publickey for core from 10.0.0.1 port 47340 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:20:07.360067 sshd-session[4725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:07.427876 systemd-logind[1566]: New session 25 of user core. Jan 28 01:20:07.444708 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 28 01:20:09.229217 sshd[4729]: Connection closed by 10.0.0.1 port 47340 Jan 28 01:20:09.227856 sshd-session[4725]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:09.288757 systemd[1]: sshd@23-10.0.0.18:22-10.0.0.1:47340.service: Deactivated successfully. Jan 28 01:20:09.313554 systemd[1]: session-25.scope: Deactivated successfully. Jan 28 01:20:09.325743 systemd-logind[1566]: Session 25 logged out. Waiting for processes to exit. Jan 28 01:20:09.343867 systemd[1]: Started sshd@24-10.0.0.18:22-10.0.0.1:47346.service - OpenSSH per-connection server daemon (10.0.0.1:47346). Jan 28 01:20:09.355518 systemd-logind[1566]: Removed session 25. Jan 28 01:20:09.569573 sshd[4743]: Accepted publickey for core from 10.0.0.1 port 47346 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:20:09.583811 sshd-session[4743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:09.640245 systemd-logind[1566]: New session 26 of user core. Jan 28 01:20:09.649435 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 28 01:20:14.064028 kubelet[2942]: E0128 01:20:14.062389 2942 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.695s" Jan 28 01:20:14.077706 kubelet[2942]: E0128 01:20:14.077314 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:20:15.730077 sshd[4747]: Connection closed by 10.0.0.1 port 47346 Jan 28 01:20:15.732585 sshd-session[4743]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:15.787611 systemd[1]: sshd@24-10.0.0.18:22-10.0.0.1:47346.service: Deactivated successfully. Jan 28 01:20:15.798832 systemd[1]: session-26.scope: Deactivated successfully. Jan 28 01:20:15.799692 systemd[1]: session-26.scope: Consumed 1.584s CPU time, 33.7M memory peak. Jan 28 01:20:15.834458 systemd-logind[1566]: Session 26 logged out. Waiting for processes to exit. Jan 28 01:20:15.849535 systemd-logind[1566]: Removed session 26. Jan 28 01:20:15.870677 systemd[1]: Started sshd@25-10.0.0.18:22-10.0.0.1:43676.service - OpenSSH per-connection server daemon (10.0.0.1:43676). Jan 28 01:20:16.234780 sshd[4795]: Accepted publickey for core from 10.0.0.1 port 43676 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:20:16.245804 sshd-session[4795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:16.295474 systemd-logind[1566]: New session 27 of user core. Jan 28 01:20:16.326023 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 28 01:20:17.654653 sshd[4799]: Connection closed by 10.0.0.1 port 43676 Jan 28 01:20:17.667824 sshd-session[4795]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:17.723795 systemd[1]: sshd@25-10.0.0.18:22-10.0.0.1:43676.service: Deactivated successfully. Jan 28 01:20:17.743388 systemd[1]: session-27.scope: Deactivated successfully. Jan 28 01:20:17.755146 systemd-logind[1566]: Session 27 logged out. Waiting for processes to exit. Jan 28 01:20:17.761811 systemd[1]: Started sshd@26-10.0.0.18:22-10.0.0.1:43678.service - OpenSSH per-connection server daemon (10.0.0.1:43678). Jan 28 01:20:17.775792 systemd-logind[1566]: Removed session 27. Jan 28 01:20:18.050293 sshd[4810]: Accepted publickey for core from 10.0.0.1 port 43678 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:20:18.062736 sshd-session[4810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:18.112542 systemd-logind[1566]: New session 28 of user core. Jan 28 01:20:18.132906 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 28 01:20:18.818017 sshd[4814]: Connection closed by 10.0.0.1 port 43678 Jan 28 01:20:18.820524 sshd-session[4810]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:18.862250 systemd[1]: sshd@26-10.0.0.18:22-10.0.0.1:43678.service: Deactivated successfully. Jan 28 01:20:18.885763 systemd[1]: session-28.scope: Deactivated successfully. Jan 28 01:20:18.902630 systemd-logind[1566]: Session 28 logged out. Waiting for processes to exit. Jan 28 01:20:18.916757 systemd-logind[1566]: Removed session 28. 
Jan 28 01:20:20.959175 kubelet[2942]: E0128 01:20:20.958816 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:20:21.965855 kubelet[2942]: E0128 01:20:21.961699 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:20:23.848270 systemd[1]: Started sshd@27-10.0.0.18:22-10.0.0.1:59788.service - OpenSSH per-connection server daemon (10.0.0.1:59788). Jan 28 01:20:24.060873 sshd[4849]: Accepted publickey for core from 10.0.0.1 port 59788 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:20:24.092396 sshd-session[4849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:24.155210 systemd-logind[1566]: New session 29 of user core. Jan 28 01:20:24.187876 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 28 01:20:24.693883 sshd[4853]: Connection closed by 10.0.0.1 port 59788 Jan 28 01:20:24.696289 sshd-session[4849]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:24.742389 systemd[1]: sshd@27-10.0.0.18:22-10.0.0.1:59788.service: Deactivated successfully. Jan 28 01:20:24.794547 systemd[1]: session-29.scope: Deactivated successfully. Jan 28 01:20:24.817762 systemd-logind[1566]: Session 29 logged out. Waiting for processes to exit. Jan 28 01:20:24.824327 systemd-logind[1566]: Removed session 29. Jan 28 01:20:29.889252 systemd[1]: Started sshd@28-10.0.0.18:22-10.0.0.1:59796.service - OpenSSH per-connection server daemon (10.0.0.1:59796). Jan 28 01:20:30.022110 kubelet[2942]: E0128 01:20:30.020094 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:20:30.476649 sshd[4901]: Accepted publickey for core from 10.0.0.1 port 59796 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:20:30.496222 sshd-session[4901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:30.576216 systemd-logind[1566]: New session 30 of user core. Jan 28 01:20:30.594242 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 28 01:20:32.310126 sshd[4915]: Connection closed by 10.0.0.1 port 59796 Jan 28 01:20:32.316675 sshd-session[4901]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:32.340550 systemd-logind[1566]: Session 30 logged out. Waiting for processes to exit. Jan 28 01:20:32.345156 systemd[1]: sshd@28-10.0.0.18:22-10.0.0.1:59796.service: Deactivated successfully. Jan 28 01:20:32.351300 systemd[1]: session-30.scope: Deactivated successfully. Jan 28 01:20:32.407820 systemd-logind[1566]: Removed session 30. Jan 28 01:20:37.484227 systemd[1]: Started sshd@29-10.0.0.18:22-10.0.0.1:36180.service - OpenSSH per-connection server daemon (10.0.0.1:36180). Jan 28 01:20:37.897805 sshd[4950]: Accepted publickey for core from 10.0.0.1 port 36180 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:20:37.993904 sshd-session[4950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:38.201192 systemd-logind[1566]: New session 31 of user core. Jan 28 01:20:38.218444 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 28 01:20:39.526457 sshd[4956]: Connection closed by 10.0.0.1 port 36180 Jan 28 01:20:39.539530 sshd-session[4950]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:39.582901 systemd[1]: sshd@29-10.0.0.18:22-10.0.0.1:36180.service: Deactivated successfully. Jan 28 01:20:39.608586 containerd[1594]: time="2026-01-28T01:20:39.605445024Z" level=info msg="container event discarded" container=a5c9379aa7215f9d6aa885f32c3f92c517c08e5a2d6c914aa76053dda2876616 type=CONTAINER_CREATED_EVENT Jan 28 01:20:39.609849 containerd[1594]: time="2026-01-28T01:20:39.609395250Z" level=info msg="container event discarded" container=a5c9379aa7215f9d6aa885f32c3f92c517c08e5a2d6c914aa76053dda2876616 type=CONTAINER_STARTED_EVENT Jan 28 01:20:39.611115 systemd[1]: session-31.scope: Deactivated successfully. Jan 28 01:20:39.625258 systemd-logind[1566]: Session 31 logged out. Waiting for processes to exit. Jan 28 01:20:39.654438 systemd-logind[1566]: Removed session 31. Jan 28 01:20:39.830625 containerd[1594]: time="2026-01-28T01:20:39.828438099Z" level=info msg="container event discarded" container=1466b50bb71bc5946bdaaa849f9728b688bb8693cadf72e187ea51d526eeb53b type=CONTAINER_CREATED_EVENT Jan 28 01:20:39.834123 containerd[1594]: time="2026-01-28T01:20:39.833688710Z" level=info msg="container event discarded" container=1466b50bb71bc5946bdaaa849f9728b688bb8693cadf72e187ea51d526eeb53b type=CONTAINER_STARTED_EVENT Jan 28 01:20:39.851219 containerd[1594]: time="2026-01-28T01:20:39.850217221Z" level=info msg="container event discarded" container=3bdff992803e9885f07dd8ed9bf24d683da697f65135c187713adedcbc78af64 type=CONTAINER_CREATED_EVENT Jan 28 01:20:39.851219 containerd[1594]: time="2026-01-28T01:20:39.850362028Z" level=info msg="container event discarded" container=3bdff992803e9885f07dd8ed9bf24d683da697f65135c187713adedcbc78af64 type=CONTAINER_STARTED_EVENT Jan 28 01:20:39.987182 containerd[1594]: time="2026-01-28T01:20:39.978680523Z" level=info msg="container event discarded" container=182b20feb46c83eaecaf68abec0e4fc6ba9f826446dceb3a8e7a6771e9dce0a2 type=CONTAINER_CREATED_EVENT Jan 28 01:20:39.987363 kubelet[2942]: E0128 01:20:39.983793 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:20:40.029356 containerd[1594]: time="2026-01-28T01:20:40.029284544Z" level=info msg="container event discarded" container=82280d1ff5cf05863c45a9df931e0889e25f3871ae213bf651917555cc8da9f4 type=CONTAINER_CREATED_EVENT Jan 28 01:20:40.104319 containerd[1594]: time="2026-01-28T01:20:40.101229428Z" level=info msg="container event discarded" container=6fd2da706ac3bdc6d5eec6d21feb672e450d6f6043e5ff2a054a77f73b31c516 type=CONTAINER_CREATED_EVENT Jan 28 01:20:41.472630 containerd[1594]: time="2026-01-28T01:20:41.472502868Z" level=info msg="container event discarded" container=182b20feb46c83eaecaf68abec0e4fc6ba9f826446dceb3a8e7a6771e9dce0a2 type=CONTAINER_STARTED_EVENT Jan 28 01:20:41.472630 containerd[1594]: time="2026-01-28T01:20:41.472591719Z" level=info msg="container event discarded" container=6fd2da706ac3bdc6d5eec6d21feb672e450d6f6043e5ff2a054a77f73b31c516 type=CONTAINER_STARTED_EVENT Jan 28 01:20:41.700436 containerd[1594]: time="2026-01-28T01:20:41.700355960Z" level=info msg="container event discarded" container=82280d1ff5cf05863c45a9df931e0889e25f3871ae213bf651917555cc8da9f4 type=CONTAINER_STARTED_EVENT Jan 28 01:20:44.604686 systemd[1]: Started 
sshd@30-10.0.0.18:22-10.0.0.1:33946.service - OpenSSH per-connection server daemon (10.0.0.1:33946). Jan 28 01:20:45.206719 sshd[4990]: Accepted publickey for core from 10.0.0.1 port 33946 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:20:45.217149 sshd-session[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:45.313784 systemd-logind[1566]: New session 32 of user core. Jan 28 01:20:45.354332 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 28 01:20:46.909245 sshd[4994]: Connection closed by 10.0.0.1 port 33946 Jan 28 01:20:46.912469 sshd-session[4990]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:46.938761 systemd[1]: sshd@30-10.0.0.18:22-10.0.0.1:33946.service: Deactivated successfully. Jan 28 01:20:46.944865 systemd[1]: session-32.scope: Deactivated successfully. Jan 28 01:20:46.949391 systemd-logind[1566]: Session 32 logged out. Waiting for processes to exit. Jan 28 01:20:46.968552 systemd-logind[1566]: Removed session 32. Jan 28 01:20:54.213769 systemd[1]: Started sshd@31-10.0.0.18:22-10.0.0.1:33960.service - OpenSSH per-connection server daemon (10.0.0.1:33960). Jan 28 01:20:55.644191 kubelet[2942]: E0128 01:20:55.640614 2942 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.234s" Jan 28 01:20:57.831454 sshd[5036]: Accepted publickey for core from 10.0.0.1 port 33960 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:20:57.848475 sshd-session[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:20:57.947280 systemd-logind[1566]: New session 33 of user core. Jan 28 01:20:58.012681 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 28 01:20:59.078866 sshd[5059]: Connection closed by 10.0.0.1 port 33960 Jan 28 01:20:59.081307 sshd-session[5036]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:59.156379 systemd[1]: sshd@31-10.0.0.18:22-10.0.0.1:33960.service: Deactivated successfully. Jan 28 01:20:59.180300 systemd[1]: session-33.scope: Deactivated successfully. Jan 28 01:20:59.222734 systemd-logind[1566]: Session 33 logged out. Waiting for processes to exit. Jan 28 01:20:59.246151 systemd-logind[1566]: Removed session 33. Jan 28 01:21:01.339831 kubelet[2942]: E0128 01:21:01.330558 2942 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.289s" Jan 28 01:21:04.218230 systemd[1]: Started sshd@32-10.0.0.18:22-10.0.0.1:43294.service - OpenSSH per-connection server daemon (10.0.0.1:43294). Jan 28 01:21:04.763232 sshd[5094]: Accepted publickey for core from 10.0.0.1 port 43294 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:21:04.776165 sshd-session[5094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:21:04.849394 systemd-logind[1566]: New session 34 of user core. Jan 28 01:21:04.894192 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 28 01:21:06.034680 sshd[5098]: Connection closed by 10.0.0.1 port 43294 Jan 28 01:21:06.032292 sshd-session[5094]: pam_unix(sshd:session): session closed for user core Jan 28 01:21:06.081205 systemd[1]: sshd@32-10.0.0.18:22-10.0.0.1:43294.service: Deactivated successfully. Jan 28 01:21:06.100644 systemd[1]: session-34.scope: Deactivated successfully. Jan 28 01:21:06.121339 systemd-logind[1566]: Session 34 logged out. Waiting for processes to exit. 
Jan 28 01:21:06.130899 systemd-logind[1566]: Removed session 34. Jan 28 01:21:11.271347 systemd[1]: Started sshd@33-10.0.0.18:22-10.0.0.1:43302.service - OpenSSH per-connection server daemon (10.0.0.1:43302). Jan 28 01:21:11.919083 sshd[5135]: Accepted publickey for core from 10.0.0.1 port 43302 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:21:11.932148 sshd-session[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:21:12.024407 systemd-logind[1566]: New session 35 of user core. Jan 28 01:21:12.079177 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 28 01:21:13.100297 sshd[5139]: Connection closed by 10.0.0.1 port 43302 Jan 28 01:21:13.108268 sshd-session[5135]: pam_unix(sshd:session): session closed for user core Jan 28 01:21:13.193739 systemd[1]: sshd@33-10.0.0.18:22-10.0.0.1:43302.service: Deactivated successfully. Jan 28 01:21:13.250345 systemd[1]: session-35.scope: Deactivated successfully. Jan 28 01:21:13.290677 systemd-logind[1566]: Session 35 logged out. Waiting for processes to exit. Jan 28 01:21:13.299265 systemd-logind[1566]: Removed session 35. Jan 28 01:21:18.398444 systemd[1]: Started sshd@34-10.0.0.18:22-10.0.0.1:50256.service - OpenSSH per-connection server daemon (10.0.0.1:50256). Jan 28 01:21:21.637289 kubelet[2942]: E0128 01:21:21.634252 2942 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.047s" Jan 28 01:21:21.784400 kubelet[2942]: E0128 01:21:21.784271 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:21.967326 sshd[5191]: Accepted publickey for core from 10.0.0.1 port 50256 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:21:22.007859 kubelet[2942]: E0128 01:21:22.007243 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:22.488475 sshd-session[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:21:23.398845 systemd-logind[1566]: New session 36 of user core. Jan 28 01:21:23.449419 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 28 01:21:27.211280 sshd[5202]: Connection closed by 10.0.0.1 port 50256 Jan 28 01:21:27.220467 sshd-session[5191]: pam_unix(sshd:session): session closed for user core Jan 28 01:21:27.299242 systemd[1]: sshd@34-10.0.0.18:22-10.0.0.1:50256.service: Deactivated successfully. Jan 28 01:21:27.300406 systemd-logind[1566]: Session 36 logged out. Waiting for processes to exit. Jan 28 01:21:27.327496 systemd[1]: session-36.scope: Deactivated successfully. Jan 28 01:21:27.360603 systemd-logind[1566]: Removed session 36. 
Jan 28 01:21:29.962516 kubelet[2942]: E0128 01:21:29.961544 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:32.159584 containerd[1594]: time="2026-01-28T01:21:32.159470318Z" level=info msg="container event discarded" container=cd48b5d4b5ef8d74df65e7fefb3f33b15fce11af512e5db16337a251f208528f type=CONTAINER_CREATED_EVENT Jan 28 01:21:32.159584 containerd[1594]: time="2026-01-28T01:21:32.159541728Z" level=info msg="container event discarded" container=cd48b5d4b5ef8d74df65e7fefb3f33b15fce11af512e5db16337a251f208528f type=CONTAINER_STARTED_EVENT Jan 28 01:21:32.292750 systemd[1]: Started sshd@35-10.0.0.18:22-10.0.0.1:57122.service - OpenSSH per-connection server daemon (10.0.0.1:57122). Jan 28 01:21:32.506656 containerd[1594]: time="2026-01-28T01:21:32.506584309Z" level=info msg="container event discarded" container=28d57802c0757d78c8d6d565ea2417507cbac3ec8d13ca18dbb99a70e13f578c type=CONTAINER_CREATED_EVENT Jan 28 01:21:32.778167 sshd[5238]: Accepted publickey for core from 10.0.0.1 port 57122 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:21:32.791575 sshd-session[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:21:32.871536 systemd-logind[1566]: New session 37 of user core. Jan 28 01:21:32.907851 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 28 01:21:32.968561 kubelet[2942]: E0128 01:21:32.966562 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:33.852306 sshd[5255]: Connection closed by 10.0.0.1 port 57122 Jan 28 01:21:33.855468 sshd-session[5238]: pam_unix(sshd:session): session closed for user core Jan 28 01:21:33.911341 systemd[1]: sshd@35-10.0.0.18:22-10.0.0.1:57122.service: Deactivated successfully. Jan 28 01:21:33.913458 systemd-logind[1566]: Session 37 logged out. Waiting for processes to exit. Jan 28 01:21:33.944397 systemd[1]: session-37.scope: Deactivated successfully. Jan 28 01:21:33.978873 systemd-logind[1566]: Removed session 37. Jan 28 01:21:34.010416 containerd[1594]: time="2026-01-28T01:21:34.010135003Z" level=info msg="container event discarded" container=28d57802c0757d78c8d6d565ea2417507cbac3ec8d13ca18dbb99a70e13f578c type=CONTAINER_STARTED_EVENT Jan 28 01:21:36.939442 containerd[1594]: time="2026-01-28T01:21:36.936637120Z" level=info msg="container event discarded" container=254e15744c98ede0c4e0648b1a9a9b558f83c2391f9d7a7453d3fdf687ec0b30 type=CONTAINER_CREATED_EVENT Jan 28 01:21:36.939442 containerd[1594]: time="2026-01-28T01:21:36.939208759Z" level=info msg="container event discarded" container=254e15744c98ede0c4e0648b1a9a9b558f83c2391f9d7a7453d3fdf687ec0b30 type=CONTAINER_STARTED_EVENT Jan 28 01:21:38.982182 systemd[1]: Started sshd@36-10.0.0.18:22-10.0.0.1:50520.service - OpenSSH per-connection server daemon (10.0.0.1:50520). Jan 28 01:21:39.386441 sshd[5294]: Accepted publickey for core from 10.0.0.1 port 50520 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:21:39.395615 sshd-session[5294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:21:39.423869 systemd-logind[1566]: New session 38 of user core. Jan 28 01:21:39.439876 systemd[1]: Started session-38.scope - Session 38 of User core. 
Jan 28 01:21:39.899067 sshd[5313]: Connection closed by 10.0.0.1 port 50520 Jan 28 01:21:39.908755 sshd-session[5294]: pam_unix(sshd:session): session closed for user core Jan 28 01:21:39.940431 systemd[1]: sshd@36-10.0.0.18:22-10.0.0.1:50520.service: Deactivated successfully. Jan 28 01:21:39.964853 systemd[1]: session-38.scope: Deactivated successfully. Jan 28 01:21:39.985459 systemd-logind[1566]: Session 38 logged out. Waiting for processes to exit. Jan 28 01:21:40.007053 systemd-logind[1566]: Removed session 38. Jan 28 01:21:40.981611 kubelet[2942]: E0128 01:21:40.977456 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:21:41.971570 kubelet[2942]: E0128 01:21:41.970752 2942 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
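The kubelet dns.go:153 warning that recurs throughout this log states that the applied nameserver line was truncated to "1.1.1.1 1.0.0.1 8.8.8.8", i.e. the host resolv.conf lists more nameservers than the three the kubelet will propagate into pod resolv.conf files. A minimal sketch of the check the warning implies, assuming the conventional three-nameserver limit; the path and parsing below are illustrative and not taken from this node:

    # Count nameserver entries in a resolv.conf and report the excess that would
    # be dropped, mirroring the "Nameserver limits exceeded" warnings above.
    MAX_NAMESERVERS = 3

    def check_resolv_conf(path="/etc/resolv.conf"):
        with open(path) as f:
            servers = [line.split()[1] for line in f
                       if line.strip().startswith("nameserver") and len(line.split()) > 1]
        kept, dropped = servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]
        if dropped:
            print(f"nameserver limit exceeded: keeping {kept}, omitting {dropped}")
        return kept

    if __name__ == "__main__":
        check_resolv_conf()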