Jan 14 00:54:53.382999 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 13 22:15:29 -00 2026
Jan 14 00:54:53.383030 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6d34ab71a3dc5a0ab37eb2c851228af18a1e24f648223df9a1099dbd7db2cfcf
Jan 14 00:54:53.383046 kernel: BIOS-provided physical RAM map:
Jan 14 00:54:53.383055 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 14 00:54:53.383066 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 14 00:54:53.383075 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 14 00:54:53.383085 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 14 00:54:53.383094 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 14 00:54:53.383135 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 14 00:54:53.383147 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 14 00:54:53.383161 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 14 00:54:53.383170 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 14 00:54:53.383179 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 14 00:54:53.383188 kernel: NX (Execute Disable) protection: active
Jan 14 00:54:53.383198 kernel: APIC: Static calls initialized
Jan 14 00:54:53.383211 kernel: SMBIOS 2.8 present.
Jan 14 00:54:53.383249 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 14 00:54:53.383260 kernel: DMI: Memory slots populated: 1/1
Jan 14 00:54:53.383269 kernel: Hypervisor detected: KVM
Jan 14 00:54:53.383278 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 14 00:54:53.383287 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 14 00:54:53.383296 kernel: kvm-clock: using sched offset of 32227028819 cycles
Jan 14 00:54:53.383308 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 14 00:54:53.383320 kernel: tsc: Detected 2445.426 MHz processor
Jan 14 00:54:53.383334 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 00:54:53.383344 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 00:54:53.383354 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 14 00:54:53.385532 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 14 00:54:53.385544 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 00:54:53.385556 kernel: Using GB pages for direct mapping
Jan 14 00:54:53.385569 kernel: ACPI: Early table checksum verification disabled
Jan 14 00:54:53.385585 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 14 00:54:53.385595 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:53.385605 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:53.385615 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:53.385624 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 14 00:54:53.385664 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:53.385677 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:53.385693 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:53.385707 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:53.385724 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 14 00:54:53.385734 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 14 00:54:53.385745 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 14 00:54:53.385759 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 14 00:54:53.385769 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 14 00:54:53.385779 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 14 00:54:53.385791 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 14 00:54:53.385801 kernel: No NUMA configuration found
Jan 14 00:54:53.385840 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 14 00:54:53.385854 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 14 00:54:53.385873 kernel: Zone ranges:
Jan 14 00:54:53.385885 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 00:54:53.385897 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 14 00:54:53.385911 kernel: Normal empty
Jan 14 00:54:53.385922 kernel: Device empty
Jan 14 00:54:53.385932 kernel: Movable zone start for each node
Jan 14 00:54:53.385942 kernel: Early memory node ranges
Jan 14 00:54:53.385957 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 14 00:54:53.385967 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 14 00:54:53.385977 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 14 00:54:53.385989 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 00:54:53.386001 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 14 00:54:53.386048 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 14 00:54:53.386062 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 14 00:54:53.386082 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 14 00:54:53.386093 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 14 00:54:53.386104 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 14 00:54:53.386143 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 14 00:54:53.386154 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 00:54:53.388517 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 14 00:54:53.388532 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 14 00:54:53.388550 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 00:54:53.388561 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 14 00:54:53.388572 kernel: TSC deadline timer available
Jan 14 00:54:53.388614 kernel: CPU topo: Max. logical packages: 1
Jan 14 00:54:53.388626 kernel: CPU topo: Max. logical dies: 1
Jan 14 00:54:53.388636 kernel: CPU topo: Max. dies per package: 1
Jan 14 00:54:53.388646 kernel: CPU topo: Max. threads per core: 1
Jan 14 00:54:53.388656 kernel: CPU topo: Num. cores per package: 4
Jan 14 00:54:53.388672 kernel: CPU topo: Num. threads per package: 4
Jan 14 00:54:53.388685 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 14 00:54:53.388697 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 14 00:54:53.388708 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 14 00:54:53.388718 kernel: kvm-guest: setup PV sched yield
Jan 14 00:54:53.388729 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 14 00:54:53.388739 kernel: Booting paravirtualized kernel on KVM
Jan 14 00:54:53.388754 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 00:54:53.388765 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 14 00:54:53.388779 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 14 00:54:53.388790 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 14 00:54:53.388800 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 14 00:54:53.388810 kernel: kvm-guest: PV spinlocks enabled
Jan 14 00:54:53.388821 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 00:54:53.388837 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6d34ab71a3dc5a0ab37eb2c851228af18a1e24f648223df9a1099dbd7db2cfcf
Jan 14 00:54:53.388848 kernel: random: crng init done
Jan 14 00:54:53.388860 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 00:54:53.388871 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 14 00:54:53.388884 kernel: Fallback order for Node 0: 0
Jan 14 00:54:53.388895 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 14 00:54:53.388905 kernel: Policy zone: DMA32
Jan 14 00:54:53.388920 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 00:54:53.388931 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 14 00:54:53.388941 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 14 00:54:53.388951 kernel: ftrace: allocated 157 pages with 5 groups
Jan 14 00:54:53.388965 kernel: Dynamic Preempt: voluntary
Jan 14 00:54:53.388975 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 00:54:53.388987 kernel: rcu: RCU event tracing is enabled.
Jan 14 00:54:53.389002 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 14 00:54:53.389013 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 00:54:53.389056 kernel: Rude variant of Tasks RCU enabled.
Jan 14 00:54:53.389068 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 00:54:53.389078 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 00:54:53.389089 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 14 00:54:53.389099 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 00:54:53.389114 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 00:54:53.389129 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 00:54:53.389142 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 14 00:54:53.389153 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 00:54:53.389174 kernel: Console: colour VGA+ 80x25
Jan 14 00:54:53.389188 kernel: printk: legacy console [ttyS0] enabled
Jan 14 00:54:53.389198 kernel: ACPI: Core revision 20240827
Jan 14 00:54:53.389210 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 14 00:54:53.389223 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 00:54:53.389238 kernel: x2apic enabled
Jan 14 00:54:53.389249 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 14 00:54:53.389291 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 14 00:54:53.389326 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 14 00:54:53.389341 kernel: kvm-guest: setup PV IPIs
Jan 14 00:54:53.389352 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 14 00:54:53.389402 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 14 00:54:53.389414 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 14 00:54:53.389424 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 14 00:54:53.389491 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 14 00:54:53.389503 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 14 00:54:53.389519 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 00:54:53.389530 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 00:54:53.389540 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 14 00:54:53.389552 kernel: Speculative Store Bypass: Vulnerable
Jan 14 00:54:53.389565 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 14 00:54:53.389577 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 14 00:54:53.389587 kernel: active return thunk: srso_alias_return_thunk
Jan 14 00:54:53.389602 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 14 00:54:53.389613 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 14 00:54:53.389652 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 00:54:53.389664 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 00:54:53.389674 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 00:54:53.389685 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 00:54:53.389696 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 14 00:54:53.389713 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 14 00:54:53.389727 kernel: Freeing SMP alternatives memory: 32K
Jan 14 00:54:53.389738 kernel: pid_max: default: 32768 minimum: 301
Jan 14 00:54:53.389749 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 14 00:54:53.389759 kernel: landlock: Up and running.
Jan 14 00:54:53.389770 kernel: SELinux: Initializing.
Jan 14 00:54:53.389781 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 00:54:53.389798 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 00:54:53.389857 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 14 00:54:53.389869 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 14 00:54:53.389880 kernel: signal: max sigframe size: 1776
Jan 14 00:54:53.389892 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 00:54:53.389905 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 00:54:53.389916 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 14 00:54:53.389932 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 00:54:53.389942 kernel: smp: Bringing up secondary CPUs ...
Jan 14 00:54:53.389953 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 00:54:53.389963 kernel: .... node #0, CPUs: #1 #2 #3
Jan 14 00:54:53.389976 kernel: smp: Brought up 1 node, 4 CPUs
Jan 14 00:54:53.389988 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 14 00:54:53.389999 kernel: Memory: 2445296K/2571752K available (14336K kernel code, 2445K rwdata, 31636K rodata, 15536K init, 2504K bss, 120520K reserved, 0K cma-reserved)
Jan 14 00:54:53.390015 kernel: devtmpfs: initialized
Jan 14 00:54:53.390025 kernel: x86/mm: Memory block size: 128MB
Jan 14 00:54:53.390036 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 00:54:53.390047 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 14 00:54:53.390061 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 00:54:53.390071 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 00:54:53.390082 kernel: audit: initializing netlink subsys (disabled)
Jan 14 00:54:53.390097 kernel: audit: type=2000 audit(1768352074.550:1): state=initialized audit_enabled=0 res=1
Jan 14 00:54:53.390108 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 00:54:53.390118 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 00:54:53.390132 kernel: cpuidle: using governor menu
Jan 14 00:54:53.390143 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 00:54:53.390180 kernel: dca service started, version 1.12.1
Jan 14 00:54:53.390192 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 14 00:54:53.390210 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 14 00:54:53.390221 kernel: PCI: Using configuration type 1 for base access
Jan 14 00:54:53.390232 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 00:54:53.390243 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 00:54:53.390254 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 00:54:53.390265 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 00:54:53.390277 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 00:54:53.390294 kernel: ACPI: Added _OSI(Module Device)
Jan 14 00:54:53.390304 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 00:54:53.390315 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 00:54:53.390326 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 00:54:53.390336 kernel: ACPI: Interpreter enabled
Jan 14 00:54:53.390348 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 14 00:54:53.390396 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 00:54:53.390408 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 00:54:53.390423 kernel: PCI: Using E820 reservations for host bridge windows
Jan 14 00:54:53.390488 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 14 00:54:53.390501 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 14 00:54:53.391533 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 14 00:54:53.391828 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 14 00:54:53.392125 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 14 00:54:53.392148 kernel: PCI host bridge to bus 0000:00
Jan 14 00:54:53.392529 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 14 00:54:53.392793 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 14 00:54:53.393049 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 14 00:54:53.393303 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 14 00:54:53.396247 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 14 00:54:53.396610 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 14 00:54:53.399081 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 14 00:54:53.399535 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 14 00:54:53.399837 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 14 00:54:53.400165 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 14 00:54:53.402621 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 14 00:54:53.402910 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 14 00:54:53.403185 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 14 00:54:53.403585 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 14 00:54:53.403869 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 14 00:54:53.404305 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 14 00:54:53.404685 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 14 00:54:53.405293 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 14 00:54:53.405745 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 14 00:54:53.406033 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 14 00:54:53.406320 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 14 00:54:53.406790 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 14 00:54:53.407068 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 14 00:54:53.407343 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 14 00:54:53.407787 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 14 00:54:53.408067 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 14 00:54:53.408352 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 14 00:54:53.408918 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 14 00:54:53.409223 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 14 00:54:53.409686 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 14 00:54:53.409971 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 14 00:54:53.410262 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 14 00:54:53.410730 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 14 00:54:53.410749 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 14 00:54:53.410762 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 14 00:54:53.410777 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 14 00:54:53.410789 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 14 00:54:53.410799 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 14 00:54:53.410810 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 14 00:54:53.410827 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 14 00:54:53.410838 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 14 00:54:53.410849 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 14 00:54:53.410862 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 14 00:54:53.410875 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 14 00:54:53.410888 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 14 00:54:53.410900 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 14 00:54:53.410918 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 14 00:54:53.410931 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 14 00:54:53.410944 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 14 00:54:53.410956 kernel: iommu: Default domain type: Translated
Jan 14 00:54:53.410969 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 00:54:53.410981 kernel: PCI: Using ACPI for IRQ routing
Jan 14 00:54:53.410995 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 14 00:54:53.411011 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 14 00:54:53.411024 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 14 00:54:53.411303 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 14 00:54:53.411749 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 14 00:54:53.412035 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 14 00:54:53.412054 kernel: vgaarb: loaded
Jan 14 00:54:53.412068 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 14 00:54:53.412088 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 14 00:54:53.412101 kernel: clocksource: Switched to clocksource kvm-clock
Jan 14 00:54:53.412114 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 00:54:53.412127 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 00:54:53.412139 kernel: pnp: PnP ACPI init
Jan 14 00:54:53.412653 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 14 00:54:53.412684 kernel: pnp: PnP ACPI: found 6 devices
Jan 14 00:54:53.412697 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 00:54:53.412708 kernel: NET: Registered PF_INET protocol family
Jan 14 00:54:53.412719 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 00:54:53.412730 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 14 00:54:53.412741 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 00:54:53.412752 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 14 00:54:53.412772 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 14 00:54:53.412783 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 14 00:54:53.412794 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 00:54:53.412805 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 00:54:53.412816 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 00:54:53.412827 kernel: NET: Registered PF_XDP protocol family
Jan 14 00:54:53.413094 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 14 00:54:53.413408 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 14 00:54:53.413816 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 14 00:54:53.414076 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 14 00:54:53.414332 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 14 00:54:53.414754 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 14 00:54:53.414773 kernel: PCI: CLS 0 bytes, default 64
Jan 14 00:54:53.414789 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 14 00:54:53.414807 kernel: Initialise system trusted keyrings
Jan 14 00:54:53.414819 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 14 00:54:53.414834 kernel: Key type asymmetric registered
Jan 14 00:54:53.414845 kernel: Asymmetric key parser 'x509' registered
Jan 14 00:54:53.414857 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 14 00:54:53.414872 kernel: io scheduler mq-deadline registered
Jan 14 00:54:53.414884 kernel: io scheduler kyber registered
Jan 14 00:54:53.414899 kernel: io scheduler bfq registered
Jan 14 00:54:53.414910 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 00:54:53.414922 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 14 00:54:53.414933 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 14 00:54:53.414945 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 14 00:54:53.414960 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 00:54:53.414972 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 00:54:53.414988 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 14 00:54:53.414999 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 14 00:54:53.415010 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 14 00:54:53.415295 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 14 00:54:53.415314 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 14 00:54:53.415809 kernel: rtc_cmos 00:04: registered as rtc0
Jan 14 00:54:53.416088 kernel: rtc_cmos 00:04: setting system clock to 2026-01-14T00:54:44 UTC (1768352084)
Jan 14 00:54:53.416352 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 14 00:54:53.416412 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 14 00:54:53.416426 kernel: NET: Registered PF_INET6 protocol family
Jan 14 00:54:53.416558 kernel: Segment Routing with IPv6
Jan 14 00:54:53.416571 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 00:54:53.416582 kernel: NET: Registered PF_PACKET protocol family
Jan 14 00:54:53.416598 kernel: Key type dns_resolver registered
Jan 14 00:54:53.416613 kernel: IPI shorthand broadcast: enabled
Jan 14 00:54:53.416625 kernel: sched_clock: Marking stable (6803092869, 2836010915)->(11094263270, -1455159486)
Jan 14 00:54:53.416636 kernel: registered taskstats version 1
Jan 14 00:54:53.416647 kernel: Loading compiled-in X.509 certificates
Jan 14 00:54:53.416658 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: 58a78462583b088d099087e6f2d97e37d80e06bb'
Jan 14 00:54:53.416670 kernel: Demotion targets for Node 0: null
Jan 14 00:54:53.416681 kernel: Key type .fscrypt registered
Jan 14 00:54:53.416699 kernel: Key type fscrypt-provisioning registered
Jan 14 00:54:53.416714 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 14 00:54:53.416725 kernel: ima: Allocated hash algorithm: sha1
Jan 14 00:54:53.416736 kernel: ima: No architecture policies found
Jan 14 00:54:53.416746 kernel: clk: Disabling unused clocks
Jan 14 00:54:53.416757 kernel: Freeing unused kernel image (initmem) memory: 15536K
Jan 14 00:54:53.416773 kernel: Write protecting the kernel read-only data: 47104k
Jan 14 00:54:53.416786 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K
Jan 14 00:54:53.416800 kernel: Run /init as init process
Jan 14 00:54:53.416811 kernel: with arguments:
Jan 14 00:54:53.416822 kernel: /init
Jan 14 00:54:53.416833 kernel: with environment:
Jan 14 00:54:53.416844 kernel: HOME=/
Jan 14 00:54:53.416855 kernel: TERM=linux
Jan 14 00:54:53.416870 kernel: SCSI subsystem initialized
Jan 14 00:54:53.416885 kernel: libata version 3.00 loaded.
Jan 14 00:54:53.417166 kernel: ahci 0000:00:1f.2: version 3.0
Jan 14 00:54:53.417186 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 14 00:54:53.417640 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 14 00:54:53.417921 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 14 00:54:53.418255 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 14 00:54:53.418783 kernel: scsi host0: ahci
Jan 14 00:54:53.419087 kernel: scsi host1: ahci
Jan 14 00:54:53.419426 kernel: scsi host2: ahci
Jan 14 00:54:53.419844 kernel: scsi host3: ahci
Jan 14 00:54:53.420228 kernel: scsi host4: ahci
Jan 14 00:54:53.420730 kernel: scsi host5: ahci
Jan 14 00:54:53.420751 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Jan 14 00:54:53.420767 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Jan 14 00:54:53.420781 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Jan 14 00:54:53.420793 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Jan 14 00:54:53.420804 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Jan 14 00:54:53.420822 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Jan 14 00:54:53.420834 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 14 00:54:53.420845 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 14 00:54:53.420858 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 14 00:54:53.420871 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 14 00:54:53.420884 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 14 00:54:53.420898 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 14 00:54:53.420917 kernel: ata3.00: LPM support broken, forcing max_power
Jan 14 00:54:53.420928 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 14 00:54:53.420939 kernel: ata3.00: applying bridge limits
Jan 14 00:54:53.420951 kernel: ata3.00: LPM support broken, forcing max_power
Jan 14 00:54:53.420962 kernel: ata3.00: configured for UDMA/100
Jan 14 00:54:53.421324 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 14 00:54:53.421936 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 14 00:54:53.422231 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Jan 14 00:54:53.422707 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 14 00:54:53.422728 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 14 00:54:53.422740 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 14 00:54:53.422751 kernel: GPT:16515071 != 27000831
Jan 14 00:54:53.422773 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 14 00:54:53.422784 kernel: GPT:16515071 != 27000831
Jan 14 00:54:53.422795 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 14 00:54:53.422806 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 14 00:54:53.423112 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 14 00:54:53.423131 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 14 00:54:53.423146 kernel: device-mapper: uevent: version 1.0.3
Jan 14 00:54:53.423164 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 14 00:54:53.423175 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 14 00:54:53.423186 kernel: raid6: avx2x4 gen() 12347 MB/s
Jan 14 00:54:53.423197 kernel: raid6: avx2x2 gen() 12016 MB/s
Jan 14 00:54:53.423208 kernel: raid6: avx2x1 gen() 10557 MB/s
Jan 14 00:54:53.423220 kernel: raid6: using algorithm avx2x4 gen() 12347 MB/s
Jan 14 00:54:53.423234 kernel: raid6: .... xor() 5220 MB/s, rmw enabled
Jan 14 00:54:53.423246 kernel: raid6: using avx2x2 recovery algorithm
Jan 14 00:54:53.423261 kernel: xor: automatically using best checksumming function avx
Jan 14 00:54:53.423272 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 14 00:54:53.423284 kernel: BTRFS: device fsid 315c4ba2-2b68-4ff5-9a58-ddeab520c9ac devid 1 transid 33 /dev/mapper/usr (253:0) scanned by mount (182)
Jan 14 00:54:53.423299 kernel: BTRFS info (device dm-0): first mount of filesystem 315c4ba2-2b68-4ff5-9a58-ddeab520c9ac
Jan 14 00:54:53.423315 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 14 00:54:53.423328 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 14 00:54:53.423340 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 14 00:54:53.423352 kernel: loop: module loaded
Jan 14 00:54:53.423408 kernel: loop0: detected capacity change from 0 to 100552
Jan 14 00:54:53.423420 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 14 00:54:53.423531 systemd[1]: Successfully made /usr/ read-only.
Jan 14 00:54:53.423556 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 14 00:54:53.423571 systemd[1]: Detected virtualization kvm.
Jan 14 00:54:53.423583 systemd[1]: Detected architecture x86-64.
Jan 14 00:54:53.423596 systemd[1]: Running in initrd.
Jan 14 00:54:53.423609 systemd[1]: No hostname configured, using default hostname.
Jan 14 00:54:53.423623 systemd[1]: Hostname set to .
Jan 14 00:54:53.423639 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 14 00:54:53.423654 systemd[1]: Queued start job for default target initrd.target.
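The kernel's GPT warnings above ("GPT:16515071 != 27000831") are plain arithmetic: a valid GPT keeps its backup header in the disk's last addressable sector, and virtio_blk reports 27000832 logical sectors. A minimal sketch re-checking the numbers taken straight from the log (the helper function name is ours, for illustration only, not part of any kernel or tool API):

```python
def expected_backup_lba(total_sectors: int) -> int:
    """A conformant GPT stores its backup (alternate) header in the
    disk's last sector, i.e. at LBA total_sectors - 1."""
    return total_sectors - 1

# Values reported by the boot log above:
total_sectors = 27000832   # virtio_blk: [vda] 27000832 512-byte logical blocks
on_disk_backup = 16515071  # kernel: GPT:16515071 != 27000831

# Where the backup header should be on this disk:
assert expected_backup_lba(total_sectors) == 27000831

# The on-disk value instead matches a 16515072-sector disk, which is the
# usual symptom of an image built for a smaller disk and later grown:
assert on_disk_backup == expected_backup_lba(16515072)
```

This is consistent with the later `disk-uuid` messages in this log, where the GPT headers are rewritten on first boot.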
Jan 14 00:54:53.423671 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 00:54:53.423683 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 00:54:53.423695 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 00:54:53.423709 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 00:54:53.423728 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 00:54:53.423741 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 00:54:53.423757 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 00:54:53.423770 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 00:54:53.423782 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 00:54:53.423794 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 14 00:54:53.423810 systemd[1]: Reached target paths.target - Path Units.
Jan 14 00:54:53.423822 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 00:54:53.423838 systemd[1]: Reached target swap.target - Swaps.
Jan 14 00:54:53.423853 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 00:54:53.423865 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 00:54:53.423877 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 00:54:53.423890 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 00:54:53.423907 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 00:54:53.423921 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 14 00:54:53.423936 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 00:54:53.423948 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 00:54:53.423960 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 00:54:53.423973 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 00:54:53.423989 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 14 00:54:53.424003 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 00:54:53.424018 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 00:54:53.424034 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 00:54:53.424047 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 14 00:54:53.424059 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 00:54:53.424071 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 00:54:53.424088 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 00:54:53.424101 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 00:54:53.424117 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 00:54:53.424132 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 00:54:53.424149 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 00:54:53.424161 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 00:54:53.424174 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 00:54:53.424187 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 00:54:53.424247 systemd-journald[321]: Collecting audit messages is enabled.
Jan 14 00:54:53.424279 systemd-journald[321]: Journal started
Jan 14 00:54:53.424309 systemd-journald[321]: Runtime Journal (/run/log/journal/8bfba81026364833b40ec19f7cd05cfb) is 6M, max 48.2M, 42.1M free.
Jan 14 00:54:53.518046 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 00:54:53.523965 systemd-modules-load[322]: Inserted module 'br_netfilter'
Jan 14 00:54:53.746525 kernel: Bridge firewalling registered
Jan 14 00:54:53.746565 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 00:54:53.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:53.769751 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 00:54:53.784724 kernel: audit: type=1130 audit(1768352093.746:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:53.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:53.824427 kernel: audit: type=1130 audit(1768352093.813:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:53.825722 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 00:54:53.860816 kernel: audit: type=1130 audit(1768352093.833:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:53.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:53.861249 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 00:54:53.889027 kernel: audit: type=1130 audit(1768352093.861:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:53.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:53.901788 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 00:54:53.916876 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 00:54:53.942810 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 00:54:53.972084 systemd-tmpfiles[341]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 14 00:54:53.993527 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 00:54:54.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:54.019032 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 00:54:54.046084 kernel: audit: type=1130 audit(1768352094.004:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:54.046140 kernel: audit: type=1130 audit(1768352094.018:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:54.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:54.046191 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 00:54:54.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:54.064644 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 00:54:54.088119 kernel: audit: type=1130 audit(1768352094.052:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:54.088228 kernel: audit: type=1334 audit(1768352094.077:9): prog-id=6 op=LOAD
Jan 14 00:54:54.077000 audit: BPF prog-id=6 op=LOAD
Jan 14 00:54:54.094757 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 00:54:54.143396 dracut-cmdline[356]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6d34ab71a3dc5a0ab37eb2c851228af18a1e24f648223df9a1099dbd7db2cfcf
Jan 14 00:54:54.226098 systemd-resolved[358]: Positive Trust Anchors:
Jan 14 00:54:54.226150 systemd-resolved[358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 00:54:54.226159 systemd-resolved[358]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 14 00:54:54.226200 systemd-resolved[358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 00:54:54.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:54.276226 systemd-resolved[358]: Defaulting to hostname 'linux'.
Jan 14 00:54:54.304675 kernel: audit: type=1130 audit(1768352094.287:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:54.283207 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 00:54:54.288346 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 00:54:54.459408 kernel: Loading iSCSI transport class v2.0-870.
Jan 14 00:54:54.494614 kernel: iscsi: registered transport (tcp)
Jan 14 00:54:54.548225 kernel: iscsi: registered transport (qla4xxx)
Jan 14 00:54:54.548351 kernel: QLogic iSCSI HBA Driver
Jan 14 00:54:54.654503 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 00:54:54.710190 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 00:54:54.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:54.715186 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 00:54:54.747934 kernel: audit: type=1130 audit(1768352094.709:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:54.882579 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 14 00:54:54.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:54.893069 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 14 00:54:54.905872 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 14 00:54:54.995059 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 00:54:55.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.016000 audit: BPF prog-id=7 op=LOAD
Jan 14 00:54:55.016000 audit: BPF prog-id=8 op=LOAD
Jan 14 00:54:55.022962 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 00:54:55.103431 systemd-udevd[607]: Using default interface naming scheme 'v257'.
Jan 14 00:54:55.137596 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 00:54:55.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.157853 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 14 00:54:55.219790 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 00:54:55.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.232000 audit: BPF prog-id=9 op=LOAD
Jan 14 00:54:55.238891 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 00:54:55.255934 dracut-pre-trigger[671]: rd.md=0: removing MD RAID activation
Jan 14 00:54:55.336141 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 00:54:55.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.358632 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 00:54:55.391815 systemd-networkd[704]: lo: Link UP
Jan 14 00:54:55.391857 systemd-networkd[704]: lo: Gained carrier
Jan 14 00:54:55.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.394819 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 00:54:55.399983 systemd[1]: Reached target network.target - Network.
Jan 14 00:54:55.567593 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 00:54:55.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.581884 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 14 00:54:55.717431 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 14 00:54:55.745218 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 14 00:54:55.774272 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 14 00:54:55.801243 kernel: cryptd: max_cpu_qlen set to 1000
Jan 14 00:54:55.809708 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 14 00:54:55.822871 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 14 00:54:55.825806 systemd-networkd[704]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 00:54:55.825814 systemd-networkd[704]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 00:54:55.830748 systemd-networkd[704]: eth0: Link UP
Jan 14 00:54:55.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.832530 systemd-networkd[704]: eth0: Gained carrier
Jan 14 00:54:55.832547 systemd-networkd[704]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 00:54:55.857599 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 00:54:55.857771 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 00:54:55.861858 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 00:54:55.889425 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 00:54:55.914613 systemd-networkd[704]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 14 00:54:55.938684 kernel: AES CTR mode by8 optimization enabled
Jan 14 00:54:55.948300 disk-uuid[778]: Primary Header is updated.
Jan 14 00:54:55.948300 disk-uuid[778]: Secondary Entries is updated.
Jan 14 00:54:55.948300 disk-uuid[778]: Secondary Header is updated.
Jan 14 00:54:55.997535 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 14 00:54:56.141158 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 14 00:54:56.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.237135 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 00:54:56.259145 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 00:54:56.264699 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 00:54:56.279099 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 14 00:54:56.290871 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 00:54:56.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.353591 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 00:54:56.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.059102 disk-uuid[796]: Warning: The kernel is still using the old partition table.
Jan 14 00:54:57.059102 disk-uuid[796]: The new table will be used at the next reboot or after you
Jan 14 00:54:57.059102 disk-uuid[796]: run partprobe(8) or kpartx(8)
Jan 14 00:54:57.059102 disk-uuid[796]: The operation has completed successfully.
Jan 14 00:54:57.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.077776 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 00:54:57.077974 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 00:54:57.087272 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 00:54:57.192885 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (868)
Jan 14 00:54:57.204415 kernel: BTRFS info (device vda6): first mount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282
Jan 14 00:54:57.204534 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 00:54:57.226354 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 00:54:57.226531 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 00:54:57.254709 kernel: BTRFS info (device vda6): last unmount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282
Jan 14 00:54:57.276947 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 00:54:57.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.284760 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 00:54:57.533097 ignition[887]: Ignition 2.24.0
Jan 14 00:54:57.533147 ignition[887]: Stage: fetch-offline
Jan 14 00:54:57.533206 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Jan 14 00:54:57.533222 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 00:54:57.533362 ignition[887]: parsed url from cmdline: ""
Jan 14 00:54:57.533368 ignition[887]: no config URL provided
Jan 14 00:54:57.533423 ignition[887]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 00:54:57.533531 ignition[887]: no config at "/usr/lib/ignition/user.ign"
Jan 14 00:54:57.533598 ignition[887]: op(1): [started] loading QEMU firmware config module
Jan 14 00:54:57.533607 ignition[887]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 14 00:54:57.560657 ignition[887]: op(1): [finished] loading QEMU firmware config module
Jan 14 00:54:57.560703 ignition[887]: QEMU firmware config was not found. Ignoring...
Jan 14 00:54:57.665905 systemd-networkd[704]: eth0: Gained IPv6LL
Jan 14 00:54:57.673347 ignition[887]: parsing config with SHA512: c61ba3cbf1b17f96545b878766f4455922924ed805d03c52a65df29044751d7c896d557b66f3e606aa5ab60fe152d496a8d7361147125c390b027c23218e7cde
Jan 14 00:54:57.700138 unknown[887]: fetched base config from "system"
Jan 14 00:54:57.700531 unknown[887]: fetched user config from "qemu"
Jan 14 00:54:57.700994 ignition[887]: fetch-offline: fetch-offline passed
Jan 14 00:54:57.701090 ignition[887]: Ignition finished successfully
Jan 14 00:54:57.724898 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 00:54:57.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.725534 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 14 00:54:57.728652 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 14 00:54:57.829067 ignition[896]: Ignition 2.24.0
Jan 14 00:54:57.830420 ignition[896]: Stage: kargs
Jan 14 00:54:57.843061 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Jan 14 00:54:57.843141 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 00:54:57.844957 ignition[896]: kargs: kargs passed
Jan 14 00:54:57.855188 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 14 00:54:57.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.845045 ignition[896]: Ignition finished successfully
Jan 14 00:54:57.864769 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 14 00:54:57.926783 ignition[904]: Ignition 2.24.0
Jan 14 00:54:57.926832 ignition[904]: Stage: disks
Jan 14 00:54:57.927065 ignition[904]: no configs at "/usr/lib/ignition/base.d"
Jan 14 00:54:57.927081 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 00:54:57.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.939135 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 14 00:54:57.933965 ignition[904]: disks: disks passed
Jan 14 00:54:57.946174 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 14 00:54:57.934044 ignition[904]: Ignition finished successfully
Jan 14 00:54:57.954706 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 14 00:54:57.960519 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 00:54:57.970863 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 00:54:57.981204 systemd[1]: Reached target basic.target - Basic System.
Jan 14 00:54:57.995552 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 14 00:54:58.147105 systemd-fsck[913]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Jan 14 00:54:58.159424 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 14 00:54:58.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:58.183908 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 14 00:54:58.634509 kernel: EXT4-fs (vda9): mounted filesystem 6efdc615-0e3c-4caf-8d0b-1f38e5c59ef0 r/w with ordered data mode. Quota mode: none.
Jan 14 00:54:58.639769 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 00:54:58.647826 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 00:54:58.669674 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 00:54:58.680767 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 00:54:58.693250 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 14 00:54:58.693312 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 00:54:58.693352 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 00:54:58.716743 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 00:54:58.752930 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (921)
Jan 14 00:54:58.752963 kernel: BTRFS info (device vda6): first mount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282
Jan 14 00:54:58.752982 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 00:54:58.746188 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 00:54:58.782713 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 00:54:58.782792 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 00:54:58.786141 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 00:54:59.237866 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 00:54:59.266587 kernel: kauditd_printk_skb: 21 callbacks suppressed
Jan 14 00:54:59.266634 kernel: audit: type=1130 audit(1768352099.245:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:59.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:59.250289 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 00:54:59.305847 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 00:54:59.334066 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 00:54:59.344604 kernel: BTRFS info (device vda6): last unmount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282
Jan 14 00:54:59.395593 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 00:54:59.403308 ignition[1019]: INFO : Ignition 2.24.0
Jan 14 00:54:59.403308 ignition[1019]: INFO : Stage: mount
Jan 14 00:54:59.403308 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 00:54:59.403308 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 00:54:59.403308 ignition[1019]: INFO : mount: mount passed
Jan 14 00:54:59.403308 ignition[1019]: INFO : Ignition finished successfully
Jan 14 00:54:59.422356 kernel: audit: type=1130 audit(1768352099.399:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:59.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:59.412703 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 00:54:59.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:59.443551 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 00:54:59.464516 kernel: audit: type=1130 audit(1768352099.440:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:59.642354 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 00:54:59.707144 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1030)
Jan 14 00:54:59.719877 kernel: BTRFS info (device vda6): first mount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282
Jan 14 00:54:59.719950 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 00:54:59.743105 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 00:54:59.743189 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 00:54:59.746833 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 00:54:59.819654 ignition[1047]: INFO : Ignition 2.24.0
Jan 14 00:54:59.819654 ignition[1047]: INFO : Stage: files
Jan 14 00:54:59.827968 ignition[1047]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 00:54:59.838042 ignition[1047]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 00:54:59.848645 ignition[1047]: DEBUG : files: compiled without relabeling support, skipping
Jan 14 00:54:59.855996 ignition[1047]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 14 00:54:59.855996 ignition[1047]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 14 00:54:59.878906 ignition[1047]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 14 00:54:59.888521 ignition[1047]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 14 00:54:59.888521 ignition[1047]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 14 00:54:59.880840 unknown[1047]: wrote ssh authorized keys file for user: core
Jan 14 00:54:59.908378 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 14 00:54:59.908378 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 14 00:54:59.981869 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 14 00:55:00.138841 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 14 00:55:00.138841 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 14 00:55:00.165238 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 14 00:55:00.165238 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 00:55:00.165238 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 00:55:00.165238 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 00:55:00.165238 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 00:55:00.165238 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 00:55:00.165238 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 00:55:00.165238 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 00:55:00.165238 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 00:55:00.165238 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 14 00:55:00.165238 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 14 00:55:00.165238 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 14 00:55:00.165238 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Jan 14 00:55:01.184130 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 14 00:55:01.902838 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 14 00:55:01.914599 ignition[1047]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 14 00:55:01.914599 ignition[1047]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 00:55:01.938232 ignition[1047]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 00:55:01.938232 ignition[1047]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 14 00:55:01.938232 ignition[1047]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 14 00:55:01.938232 ignition[1047]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 14 00:55:01.938232 ignition[1047]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 14 00:55:01.938232 ignition[1047]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 14 00:55:01.938232 ignition[1047]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 14 00:55:02.046714 ignition[1047]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 14 00:55:02.061635 ignition[1047]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 14 00:55:02.061635 ignition[1047]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 14 00:55:02.061635 ignition[1047]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 14 00:55:02.061635 ignition[1047]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 14 00:55:02.061635 ignition[1047]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 00:55:02.061635 ignition[1047]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 00:55:02.061635 ignition[1047]: INFO : files: files passed
Jan 14 00:55:02.061635 ignition[1047]: INFO : Ignition finished successfully
Jan 14 00:55:02.155104 kernel: audit: type=1130 audit(1768352102.098:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:02.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:02.088041 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 14 00:55:02.105805 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 14 00:55:02.163363 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 14 00:55:02.176951 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 14 00:55:02.209896 kernel: audit: type=1130 audit(1768352102.181:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:02.209945 kernel: audit: type=1131 audit(1768352102.181:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:02.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:02.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:02.177146 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 14 00:55:02.224295 initrd-setup-root-after-ignition[1078]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 14 00:55:02.234670 initrd-setup-root-after-ignition[1080]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 00:55:02.234670 initrd-setup-root-after-ignition[1080]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 00:55:02.249828 initrd-setup-root-after-ignition[1084]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 00:55:02.271559 kernel: audit: type=1130 audit(1768352102.255:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:02.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:02.242943 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 00:55:02.257304 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 14 00:55:02.286777 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 14 00:55:02.412853 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 14 00:55:02.413097 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 14 00:55:02.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:02.435323 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 14 00:55:02.471310 kernel: audit: type=1130 audit(1768352102.433:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:02.471348 kernel: audit: type=1131 audit(1768352102.433:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:02.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:02.467368 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 14 00:55:02.485050 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 14 00:55:02.493504 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 14 00:55:02.586669 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 00:55:02.608909 kernel: audit: type=1130 audit(1768352102.591:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:02.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:02.598753 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 14 00:55:02.648557 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 00:55:02.650117 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 14 00:55:02.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:02.658728 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 00:55:02.667862 systemd[1]: Stopped target timers.target - Timer Units.
Jan 14 00:55:02.684728 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 14 00:55:02.684995 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 00:55:02.722369 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 14 00:55:02.733241 systemd[1]: Stopped target basic.target - Basic System.
Jan 14 00:55:02.745622 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 14 00:55:02.753568 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 00:55:02.779104 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 14 00:55:02.786950 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 14 00:55:02.806702 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 14 00:55:02.813536 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 00:55:02.822083 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 14 00:55:02.846935 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 14 00:55:02.867081 systemd[1]: Stopped target swap.target - Swaps.
Jan 14 00:55:02.872711 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 14 00:55:02.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:02.874841 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 00:55:02.905603 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 14 00:55:02.918010 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 00:55:02.936329 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 14 00:55:02.945684 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 00:55:02.961716 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 14 00:55:02.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:02.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:02.961973 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 14 00:55:02.972045 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 14 00:55:02.972240 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 00:55:02.976681 systemd[1]: Stopped target paths.target - Path Units.
Jan 14 00:55:02.980658 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 14 00:55:02.983883 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 00:55:02.992857 systemd[1]: Stopped target slices.target - Slice Units.
Jan 14 00:55:03.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:03.004848 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 14 00:55:03.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:03.011352 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 14 00:55:03.011645 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 00:55:03.024282 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 14 00:55:03.024583 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 00:55:03.033625 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Jan 14 00:55:03.033757 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 00:55:03.046007 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 14 00:55:03.049589 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 00:55:03.061681 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 14 00:55:03.061856 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 14 00:55:03.084171 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 14 00:55:03.143250 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 14 00:55:03.171359 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 14 00:55:03.171925 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 00:55:03.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:03.217494 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 14 00:55:03.226256 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 00:55:03.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:03.232522 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 14 00:55:03.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:03.232751 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 00:55:07.815157 ignition[1104]: INFO : Ignition 2.24.0
Jan 14 00:55:07.905023 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 4772790848 wd_nsec: 4772790558
Jan 14 00:55:07.905158 ignition[1104]: INFO : Stage: umount
Jan 14 00:55:07.905158 ignition[1104]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 00:55:07.905158 ignition[1104]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 00:55:07.905158 ignition[1104]: INFO : umount: umount passed
Jan 14 00:55:07.905158 ignition[1104]: INFO : Ignition finished successfully
Jan 14 00:55:08.106603 kernel: kauditd_printk_skb: 9 callbacks suppressed
Jan 14 00:55:08.106681 kernel: audit: type=1130 audit(1768352107.979:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.106701 kernel: audit: type=1131 audit(1768352107.979:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.106718 kernel: audit: type=1131 audit(1768352108.066:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:07.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:07.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:07.932355 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 14 00:55:07.957758 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 14 00:55:08.179765 kernel: audit: type=1131 audit(1768352108.161:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:07.958080 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 14 00:55:08.224840 kernel: audit: type=1131 audit(1768352108.196:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:07.981414 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 14 00:55:07.981692 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 14 00:55:08.073624 systemd[1]: Stopped target network.target - Network.
Jan 14 00:55:08.129809 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 14 00:55:08.130057 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 14 00:55:08.162254 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 14 00:55:08.162382 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 14 00:55:08.198285 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 14 00:55:08.213259 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 14 00:55:08.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.303383 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 14 00:55:08.324600 kernel: audit: type=1131 audit(1768352108.302:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.307606 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 14 00:55:08.385569 kernel: audit: type=1131 audit(1768352108.333:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.339294 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 14 00:55:08.406668 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 14 00:55:08.417244 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 14 00:55:08.479258 kernel: audit: type=1131 audit(1768352108.443:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.419567 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 14 00:55:08.519911 kernel: audit: type=1131 audit(1768352108.489:60): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.461320 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 14 00:55:08.461673 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 14 00:55:08.516886 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 14 00:55:08.517151 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 14 00:55:08.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.604287 kernel: audit: type=1131 audit(1768352108.587:61): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.610000 audit: BPF prog-id=6 op=UNLOAD
Jan 14 00:55:08.611567 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 14 00:55:08.633000 audit: BPF prog-id=9 op=UNLOAD
Jan 14 00:55:08.611851 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 14 00:55:08.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.611951 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 00:55:08.667770 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 14 00:55:08.667909 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 14 00:55:08.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.683686 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 14 00:55:08.706158 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 14 00:55:08.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.706305 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 00:55:08.713013 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 14 00:55:08.713110 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 14 00:55:08.769194 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 14 00:55:08.769327 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 14 00:55:08.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.812123 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 00:55:08.840000 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 14 00:55:08.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.840736 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 00:55:08.855872 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 14 00:55:08.856000 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 14 00:55:08.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.866873 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 14 00:55:08.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.866944 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 00:55:08.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.874253 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 14 00:55:08.874351 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 00:55:08.891500 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 14 00:55:08.891612 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 14 00:55:08.907624 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 00:55:08.907732 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 00:55:08.966069 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 14 00:55:08.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:09.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.985517 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 14 00:55:08.985648 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 00:55:08.992300 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 14 00:55:09.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:09.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:08.992415 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 00:55:09.005168 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 14 00:55:09.005270 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 00:55:09.028787 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 14 00:55:09.028943 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 00:55:09.044751 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 00:55:09.044855 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 00:55:09.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:09.095205 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 14 00:55:09.095495 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 14 00:55:09.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:09.099600 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 14 00:55:09.099771 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 14 00:55:09.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:09.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:09.146911 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 14 00:55:09.151694 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 14 00:55:09.247032 systemd[1]: Switching root.
Jan 14 00:55:09.332216 systemd-journald[321]: Journal stopped
Jan 14 00:55:13.824017 systemd-journald[321]: Received SIGTERM from PID 1 (systemd).
Jan 14 00:55:13.824149 kernel: SELinux: policy capability network_peer_controls=1
Jan 14 00:55:13.824186 kernel: SELinux: policy capability open_perms=1
Jan 14 00:55:13.824204 kernel: SELinux: policy capability extended_socket_class=1
Jan 14 00:55:13.824227 kernel: SELinux: policy capability always_check_network=0
Jan 14 00:55:13.824249 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 14 00:55:13.824266 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 14 00:55:13.824288 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 14 00:55:13.824308 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 14 00:55:13.824328 kernel: SELinux: policy capability userspace_initial_context=0
Jan 14 00:55:13.824349 systemd[1]: Successfully loaded SELinux policy in 179.142ms.
Jan 14 00:55:13.824378 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 25.183ms.
Jan 14 00:55:13.824398 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 14 00:55:13.824415 systemd[1]: Detected virtualization kvm.
Jan 14 00:55:13.825722 systemd[1]: Detected architecture x86-64.
Jan 14 00:55:13.825760 systemd[1]: Detected first boot.
Jan 14 00:55:13.825788 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 14 00:55:13.825810 zram_generator::config[1147]: No configuration found.
Jan 14 00:55:13.825835 kernel: Guest personality initialized and is inactive
Jan 14 00:55:13.825852 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 14 00:55:13.825869 kernel: Initialized host personality
Jan 14 00:55:13.825884 kernel: NET: Registered PF_VSOCK protocol family
Jan 14 00:55:13.825909 systemd[1]: Populated /etc with preset unit settings.
Jan 14 00:55:13.825927 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 14 00:55:13.825944 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 14 00:55:13.825961 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 14 00:55:13.825984 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 14 00:55:13.826004 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 14 00:55:13.826026 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 14 00:55:13.826051 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 14 00:55:13.826073 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 14 00:55:13.826091 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 14 00:55:13.826108 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 14 00:55:13.826126 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 14 00:55:13.826145 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 00:55:13.826162 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 00:55:13.826185 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 14 00:55:13.826204 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 14 00:55:13.826222 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 14 00:55:13.826243 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 00:55:13.826263 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 14 00:55:13.826280 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 00:55:13.826303 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 00:55:13.826320 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 14 00:55:13.826337 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 14 00:55:13.826356 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 14 00:55:13.826376 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 14 00:55:13.826393 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 00:55:13.826410 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 00:55:13.826512 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Jan 14 00:55:13.826564 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 00:55:13.826588 systemd[1]: Reached target swap.target - Swaps.
Jan 14 00:55:13.826606 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 14 00:55:13.826623 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 14 00:55:13.826640 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 14 00:55:13.826659 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 00:55:13.826682 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Jan 14 00:55:13.826700 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 00:55:13.826718 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Jan 14 00:55:13.826736 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Jan 14 00:55:13.826755 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 00:55:13.826774 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 00:55:13.826792 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 14 00:55:13.826810 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 14 00:55:13.826832 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 14 00:55:13.826849 systemd[1]: Mounting media.mount - External Media Directory...
Jan 14 00:55:13.826867 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 00:55:13.826889 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 14 00:55:13.826908 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 14 00:55:13.826925 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 14 00:55:13.826948 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 14 00:55:13.826965 systemd[1]: Reached target machines.target - Containers.
Jan 14 00:55:13.826985 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 14 00:55:13.827007 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 00:55:13.827026 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 00:55:13.827043 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 14 00:55:13.827061 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 00:55:13.827083 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 00:55:13.827105 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 00:55:13.827125 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 14 00:55:13.827142 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 00:55:13.827159 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 14 00:55:13.827177 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 14 00:55:13.827194 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 14 00:55:13.827222 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 14 00:55:13.827240 kernel: kauditd_printk_skb: 35 callbacks suppressed
Jan 14 00:55:13.827258 kernel: audit: type=1131 audit(1768352113.015:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:13.827280 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 14 00:55:13.827299 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 14 00:55:13.827321 kernel: audit: type=1131 audit(1768352113.081:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:13.827338 kernel: audit: type=1334 audit(1768352113.099:99): prog-id=14 op=UNLOAD
Jan 14 00:55:13.827355 kernel: audit: type=1334 audit(1768352113.099:100): prog-id=13 op=UNLOAD
Jan 14 00:55:13.827370 kernel: audit: type=1334 audit(1768352113.104:101): prog-id=15 op=LOAD
Jan 14 00:55:13.827524 kernel: audit: type=1334 audit(1768352113.157:102): prog-id=16 op=LOAD
Jan 14 00:55:13.827561 kernel: audit: type=1334 audit(1768352113.175:103): prog-id=17 op=LOAD
Jan 14 00:55:13.827579 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 00:55:13.827603 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 00:55:13.827621 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 00:55:13.827638 kernel: ACPI: bus type drm_connector registered
Jan 14 00:55:13.827658 kernel: hrtimer: interrupt took 18356470 ns
Jan 14 00:55:13.827678 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 14 00:55:13.827700 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 14 00:55:13.827718 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 00:55:13.827736 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 00:55:13.827753 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 14 00:55:13.827772 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 14 00:55:13.827791 systemd[1]: Mounted media.mount - External Media Directory.
Jan 14 00:55:13.827842 systemd-journald[1234]: Collecting audit messages is enabled.
Jan 14 00:55:13.827880 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 14 00:55:13.827899 systemd-journald[1234]: Journal started
Jan 14 00:55:13.827928 systemd-journald[1234]: Runtime Journal (/run/log/journal/8bfba81026364833b40ec19f7cd05cfb) is 6M, max 48.2M, 42.1M free.
Jan 14 00:55:13.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:13.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:13.099000 audit: BPF prog-id=14 op=UNLOAD
Jan 14 00:55:13.099000 audit: BPF prog-id=13 op=UNLOAD
Jan 14 00:55:13.104000 audit: BPF prog-id=15 op=LOAD
Jan 14 00:55:13.157000 audit: BPF prog-id=16 op=LOAD
Jan 14 00:55:13.175000 audit: BPF prog-id=17 op=LOAD
Jan 14 00:55:13.819000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 14 00:55:11.687702 systemd[1]: Queued start job for default target multi-user.target.
Jan 14 00:55:11.709012 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 14 00:55:11.710986 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 14 00:55:11.711714 systemd[1]: systemd-journald.service: Consumed 1.450s CPU time.
Jan 14 00:55:13.865109 kernel: audit: type=1305 audit(1768352113.819:104): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 14 00:55:13.892056 kernel: audit: type=1300 audit(1768352113.819:104): arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe70164ed0 a2=4000 a3=0 items=0 ppid=1 pid=1234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:55:13.892605 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 00:55:13.819000 audit[1234]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe70164ed0 a2=4000 a3=0 items=0 ppid=1 pid=1234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:55:13.819000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jan 14 00:55:13.897602 kernel: audit: type=1327 audit(1768352113.819:104): proctitle="/usr/lib/systemd/systemd-journald"
Jan 14 00:55:13.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:13.965844 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 14 00:55:13.979673 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 14 00:55:14.005057 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 14 00:55:14.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:14.027736 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 00:55:14.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:14.065042 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 14 00:55:14.065525 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 14 00:55:14.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:14.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:14.079535 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 00:55:14.080050 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 00:55:14.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:14.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:14.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:14.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:14.108178 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 00:55:14.108739 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 00:55:14.120934 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 00:55:14.121300 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 00:55:14.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:14.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:14.168983 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 00:55:14.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:14.189807 kernel: fuse: init (API version 7.41)
Jan 14 00:55:14.188881 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 00:55:14.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:14.218213 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 00:55:14.231291 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 00:55:14.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:14.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:14.278825 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 14 00:55:14.279750 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 14 00:55:14.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:14.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:14.299219 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 14 00:55:14.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:14.334879 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 14 00:55:14.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:14.416138 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 00:55:14.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:14.609238 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 00:55:14.679893 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 14 00:55:14.721111 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 14 00:55:14.769711 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 14 00:55:14.775146 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 14 00:55:14.775216 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 00:55:14.790753 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 14 00:55:14.811935 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 00:55:14.812198 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 14 00:55:14.848621 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 14 00:55:14.861992 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 14 00:55:14.872216 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 00:55:14.887189 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 14 00:55:14.904114 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 00:55:14.927851 systemd-journald[1234]: Time spent on flushing to /var/log/journal/8bfba81026364833b40ec19f7cd05cfb is 122.760ms for 1115 entries.
Jan 14 00:55:14.927851 systemd-journald[1234]: System Journal (/var/log/journal/8bfba81026364833b40ec19f7cd05cfb) is 8M, max 163.5M, 155.5M free.
Jan 14 00:55:15.122044 systemd-journald[1234]: Received client request to flush runtime journal.
Jan 14 00:55:15.122203 kernel: loop1: detected capacity change from 0 to 219144
Jan 14 00:55:14.970016 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 00:55:15.004607 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 14 00:55:15.050135 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 00:55:15.100210 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 14 00:55:15.122632 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 14 00:55:15.154314 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 14 00:55:15.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:15.165539 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 14 00:55:15.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:15.179934 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 14 00:55:15.200965 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 14 00:55:15.266944 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 00:55:15.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:15.375962 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 14 00:55:15.376923 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Jan 14 00:55:15.376944 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Jan 14 00:55:15.379216 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 14 00:55:15.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:15.450811 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 00:55:15.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:15.477699 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 14 00:55:15.502741 kernel: loop2: detected capacity change from 0 to 111560
Jan 14 00:55:17.031672 kernel: loop3: detected capacity change from 0 to 50784
Jan 14 00:55:17.123752 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 14 00:55:17.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:17.193000 audit: BPF prog-id=18 op=LOAD
Jan 14 00:55:17.193000 audit: BPF prog-id=19 op=LOAD
Jan 14 00:55:17.193000 audit: BPF prog-id=20 op=LOAD
Jan 14 00:55:17.217044 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 14 00:55:17.231000 audit: BPF prog-id=21 op=LOAD
Jan 14 00:55:17.298360 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 00:55:17.480989 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 00:55:19.494007 kernel: kauditd_printk_skb: 30 callbacks suppressed
Jan 14 00:55:19.506344 kernel: audit: type=1334 audit(1768352119.465:135): prog-id=22 op=LOAD
Jan 14 00:55:19.465000 audit: BPF prog-id=22 op=LOAD
Jan 14 00:55:22.682623 kernel: audit: type=1334 audit(1768352122.630:136): prog-id=23 op=LOAD
Jan 14 00:55:22.682887 kernel: audit: type=1334 audit(1768352122.630:137): prog-id=24 op=LOAD
Jan 14 00:55:22.630000 audit: BPF prog-id=23 op=LOAD
Jan 14 00:55:22.630000 audit: BPF prog-id=24 op=LOAD
Jan 14 00:55:22.771176 kernel: loop4: detected capacity change from 0 to 219144
Jan 14 00:55:22.826116 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 14 00:55:22.872851 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Jan 14 00:55:22.872902 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Jan 14 00:55:22.898000 audit: BPF prog-id=25 op=LOAD
Jan 14 00:55:22.907546 kernel: audit: type=1334 audit(1768352122.898:138): prog-id=25 op=LOAD
Jan 14 00:55:22.921215 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 14 00:55:22.914000 audit: BPF prog-id=26 op=LOAD
Jan 14 00:55:22.965041 kernel: audit: type=1334 audit(1768352122.914:139): prog-id=26 op=LOAD
Jan 14 00:55:22.965120 kernel: audit: type=1334 audit(1768352122.914:140): prog-id=27 op=LOAD
Jan 14 00:55:22.914000 audit: BPF prog-id=27 op=LOAD
Jan 14 00:55:22.961810 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 00:55:22.973732 kernel: loop5: detected capacity change from 0 to 111560
Jan 14 00:55:22.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:22.997585 kernel: audit: type=1130 audit(1768352122.978:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:23.020593 kernel: loop6: detected capacity change from 0 to 50784
Jan 14 00:55:23.121679 systemd-nsresourced[1299]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 14 00:55:23.132814 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 14 00:55:23.169522 (sd-merge)[1297]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Jan 14 00:55:23.192195 kernel: audit: type=1130 audit(1768352123.158:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:23.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:23.173794 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 14 00:55:23.185607 (sd-merge)[1297]: Merged extensions into '/usr'.
Jan 14 00:55:23.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:23.222197 kernel: audit: type=1130 audit(1768352123.197:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:23.229333 systemd[1]: Reload requested from client PID 1270 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 14 00:55:23.230708 systemd[1]: Reloading...
Jan 14 00:55:23.516787 systemd-oomd[1293]: No swap; memory pressure usage will be degraded
Jan 14 00:55:23.529849 zram_generator::config[1345]: No configuration found.
Jan 14 00:55:23.547963 systemd-resolved[1294]: Positive Trust Anchors:
Jan 14 00:55:23.548014 systemd-resolved[1294]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 00:55:23.548022 systemd-resolved[1294]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 14 00:55:23.548071 systemd-resolved[1294]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 00:55:23.558995 systemd-resolved[1294]: Defaulting to hostname 'linux'.
Jan 14 00:55:24.320581 systemd[1]: Reloading finished in 1052 ms.
Jan 14 00:55:24.394343 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 14 00:55:24.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:24.400123 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 00:55:24.420561 kernel: audit: type=1130 audit(1768352124.399:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:24.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:24.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jan 14 00:55:24.421984 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 00:55:24.428112 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 00:55:24.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:24.454788 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 00:55:24.490924 systemd[1]: Starting ensure-sysext.service... Jan 14 00:55:24.502966 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 00:55:24.506000 audit: BPF prog-id=8 op=UNLOAD Jan 14 00:55:24.511224 kernel: kauditd_printk_skb: 3 callbacks suppressed Jan 14 00:55:24.511294 kernel: audit: type=1334 audit(1768352124.506:148): prog-id=8 op=UNLOAD Jan 14 00:55:24.507000 audit: BPF prog-id=7 op=UNLOAD Jan 14 00:55:24.518211 kernel: audit: type=1334 audit(1768352124.507:149): prog-id=7 op=UNLOAD Jan 14 00:55:24.530000 audit: BPF prog-id=28 op=LOAD Jan 14 00:55:24.534996 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 14 00:55:24.530000 audit: BPF prog-id=29 op=LOAD Jan 14 00:55:24.547556 kernel: audit: type=1334 audit(1768352124.530:150): prog-id=28 op=LOAD Jan 14 00:55:24.547631 kernel: audit: type=1334 audit(1768352124.530:151): prog-id=29 op=LOAD Jan 14 00:55:24.551000 audit: BPF prog-id=30 op=LOAD Jan 14 00:55:24.559985 kernel: audit: type=1334 audit(1768352124.551:152): prog-id=30 op=LOAD Jan 14 00:55:24.560046 kernel: audit: type=1334 audit(1768352124.553:153): prog-id=25 op=UNLOAD Jan 14 00:55:24.560069 kernel: audit: type=1334 audit(1768352124.553:154): prog-id=31 op=LOAD Jan 14 00:55:24.560114 kernel: audit: type=1334 audit(1768352124.553:155): prog-id=32 op=LOAD Jan 14 00:55:24.560178 kernel: audit: type=1334 audit(1768352124.553:156): prog-id=26 op=UNLOAD Jan 14 00:55:24.560202 kernel: audit: type=1334 audit(1768352124.553:157): prog-id=27 op=UNLOAD Jan 14 00:55:24.553000 audit: BPF prog-id=25 op=UNLOAD Jan 14 00:55:24.553000 audit: BPF prog-id=31 op=LOAD Jan 14 00:55:24.553000 audit: BPF prog-id=32 op=LOAD Jan 14 00:55:24.553000 audit: BPF prog-id=26 op=UNLOAD Jan 14 00:55:24.553000 audit: BPF prog-id=27 op=UNLOAD Jan 14 00:55:24.558000 audit: BPF prog-id=33 op=LOAD Jan 14 00:55:24.558000 audit: BPF prog-id=15 op=UNLOAD Jan 14 00:55:24.559000 audit: BPF prog-id=34 op=LOAD Jan 14 00:55:24.559000 audit: BPF prog-id=35 op=LOAD Jan 14 00:55:24.559000 audit: BPF prog-id=16 op=UNLOAD Jan 14 00:55:24.559000 audit: BPF prog-id=17 op=UNLOAD Jan 14 00:55:24.560000 audit: BPF prog-id=36 op=LOAD Jan 14 00:55:24.560000 audit: BPF prog-id=21 op=UNLOAD Jan 14 00:55:24.567000 audit: BPF prog-id=37 op=LOAD Jan 14 00:55:24.567000 audit: BPF prog-id=18 op=UNLOAD Jan 14 00:55:24.567000 audit: BPF prog-id=38 op=LOAD Jan 14 00:55:24.567000 audit: BPF prog-id=39 op=LOAD Jan 14 00:55:24.567000 audit: BPF prog-id=19 op=UNLOAD Jan 14 00:55:24.567000 audit: BPF prog-id=20 op=UNLOAD Jan 14 00:55:24.570000 audit: BPF prog-id=40 op=LOAD Jan 14 00:55:24.572000 audit: BPF prog-id=22 op=UNLOAD 
Jan 14 00:55:24.572000 audit: BPF prog-id=41 op=LOAD Jan 14 00:55:24.572000 audit: BPF prog-id=42 op=LOAD Jan 14 00:55:24.572000 audit: BPF prog-id=23 op=UNLOAD Jan 14 00:55:24.572000 audit: BPF prog-id=24 op=UNLOAD Jan 14 00:55:24.581604 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 14 00:55:24.581846 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 14 00:55:24.582413 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 00:55:24.586188 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Jan 14 00:55:24.586326 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Jan 14 00:55:24.587919 systemd[1]: Reload requested from client PID 1379 ('systemctl') (unit ensure-sysext.service)... Jan 14 00:55:24.588663 systemd[1]: Reloading... Jan 14 00:55:24.662761 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 00:55:24.662784 systemd-tmpfiles[1380]: Skipping /boot Jan 14 00:55:24.755951 systemd-udevd[1381]: Using default interface naming scheme 'v257'. Jan 14 00:55:24.767960 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 00:55:24.767981 systemd-tmpfiles[1380]: Skipping /boot Jan 14 00:55:24.820546 zram_generator::config[1415]: No configuration found. 
Jan 14 00:55:25.119527 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 00:55:25.150727 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 14 00:55:25.160225 kernel: ACPI: button: Power Button [PWRF] Jan 14 00:55:25.191242 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 14 00:55:25.191855 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 14 00:55:25.252132 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 14 00:55:25.253754 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 14 00:55:25.261922 systemd[1]: Reloading finished in 671 ms. Jan 14 00:55:25.288419 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 00:55:25.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:55:25.304000 audit: BPF prog-id=43 op=LOAD Jan 14 00:55:25.304000 audit: BPF prog-id=37 op=UNLOAD Jan 14 00:55:25.304000 audit: BPF prog-id=44 op=LOAD Jan 14 00:55:25.304000 audit: BPF prog-id=45 op=LOAD Jan 14 00:55:25.304000 audit: BPF prog-id=38 op=UNLOAD Jan 14 00:55:25.304000 audit: BPF prog-id=39 op=UNLOAD Jan 14 00:55:25.310000 audit: BPF prog-id=46 op=LOAD Jan 14 00:55:25.311000 audit: BPF prog-id=36 op=UNLOAD Jan 14 00:55:25.311000 audit: BPF prog-id=47 op=LOAD Jan 14 00:55:25.312000 audit: BPF prog-id=48 op=LOAD Jan 14 00:55:25.312000 audit: BPF prog-id=28 op=UNLOAD Jan 14 00:55:25.312000 audit: BPF prog-id=29 op=UNLOAD Jan 14 00:55:25.313000 audit: BPF prog-id=49 op=LOAD Jan 14 00:55:25.313000 audit: BPF prog-id=30 op=UNLOAD Jan 14 00:55:25.313000 audit: BPF prog-id=50 op=LOAD Jan 14 00:55:25.313000 audit: BPF prog-id=51 op=LOAD Jan 14 00:55:25.313000 audit: BPF prog-id=31 op=UNLOAD Jan 14 00:55:25.313000 audit: BPF prog-id=32 op=UNLOAD Jan 14 00:55:25.316000 audit: BPF prog-id=52 op=LOAD Jan 14 00:55:25.316000 audit: BPF prog-id=33 op=UNLOAD Jan 14 00:55:25.316000 audit: BPF prog-id=53 op=LOAD Jan 14 00:55:25.316000 audit: BPF prog-id=54 op=LOAD Jan 14 00:55:25.316000 audit: BPF prog-id=34 op=UNLOAD Jan 14 00:55:25.316000 audit: BPF prog-id=35 op=UNLOAD Jan 14 00:55:25.319000 audit: BPF prog-id=55 op=LOAD Jan 14 00:55:25.319000 audit: BPF prog-id=40 op=UNLOAD Jan 14 00:55:25.319000 audit: BPF prog-id=56 op=LOAD Jan 14 00:55:25.319000 audit: BPF prog-id=57 op=LOAD Jan 14 00:55:25.319000 audit: BPF prog-id=41 op=UNLOAD Jan 14 00:55:25.319000 audit: BPF prog-id=42 op=UNLOAD Jan 14 00:55:25.360365 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 00:55:25.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:55:25.457411 systemd[1]: Finished ensure-sysext.service. Jan 14 00:55:25.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:25.499279 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 00:55:25.506881 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 00:55:25.517764 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 00:55:25.522799 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 00:55:25.526795 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 00:55:25.537339 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 00:55:25.546812 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 00:55:25.560130 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 00:55:25.564524 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 00:55:25.564852 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 00:55:25.571036 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 00:55:25.582822 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 14 00:55:25.591114 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 00:55:25.596529 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 00:55:25.603000 audit: BPF prog-id=58 op=LOAD Jan 14 00:55:25.606611 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 00:55:25.621000 audit: BPF prog-id=59 op=LOAD Jan 14 00:55:25.630958 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 14 00:55:25.659761 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 00:55:25.675110 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 00:55:25.675256 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 00:55:25.680282 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 00:55:25.680851 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 00:55:25.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:25.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:25.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:55:25.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:25.692673 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 00:55:25.693098 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 00:55:25.693994 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 00:55:25.694337 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 00:55:25.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:25.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:25.731276 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 00:55:25.732270 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 00:55:25.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:25.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:25.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:55:25.751855 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 00:55:26.257903 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 00:55:26.258002 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 00:55:26.272000 audit[1514]: SYSTEM_BOOT pid=1514 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 14 00:55:26.286923 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 14 00:55:26.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:26.301000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 00:55:26.301000 audit[1532]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc077da200 a2=420 a3=0 items=0 ppid=1493 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:55:26.301000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 00:55:26.302572 augenrules[1532]: No rules Jan 14 00:55:26.311010 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 00:55:26.311609 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 14 00:55:26.334044 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 00:55:26.590870 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 14 00:55:26.591174 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 00:55:26.868368 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 00:55:26.870973 systemd-networkd[1508]: lo: Link UP Jan 14 00:55:26.871003 systemd-networkd[1508]: lo: Gained carrier Jan 14 00:55:26.882259 systemd-networkd[1508]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:55:26.882300 systemd-networkd[1508]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 00:55:26.885922 systemd-networkd[1508]: eth0: Link UP Jan 14 00:55:26.887682 systemd-networkd[1508]: eth0: Gained carrier Jan 14 00:55:26.887746 systemd-networkd[1508]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:55:26.931750 systemd-networkd[1508]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 14 00:55:26.943336 systemd-timesyncd[1512]: Network configuration changed, trying to establish connection. Jan 14 00:55:26.948937 systemd-timesyncd[1512]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 14 00:55:26.949022 systemd-timesyncd[1512]: Initial clock synchronization to Wed 2026-01-14 00:55:27.125221 UTC. 
Jan 14 00:55:26.972527 kernel: kvm_amd: TSC scaling supported Jan 14 00:55:26.972621 kernel: kvm_amd: Nested Virtualization enabled Jan 14 00:55:26.972674 kernel: kvm_amd: Nested Paging enabled Jan 14 00:55:26.972694 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 14 00:55:26.972713 kernel: kvm_amd: PMU virtualization is disabled Jan 14 00:55:27.531225 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 00:55:27.538668 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:55:27.572716 systemd[1]: Reached target network.target - Network. Jan 14 00:55:27.581656 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 00:55:27.589577 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 14 00:55:27.595327 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 00:55:27.720299 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 00:55:27.771938 kernel: EDAC MC: Ver: 3.0.0 Jan 14 00:55:28.140158 systemd-networkd[1508]: eth0: Gained IPv6LL Jan 14 00:55:28.179282 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 00:55:28.188825 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 00:55:29.302636 ldconfig[1505]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 00:55:29.327530 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 00:55:29.342397 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 00:55:29.465112 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jan 14 00:55:29.474771 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 00:55:29.504678 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 00:55:29.513087 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 00:55:29.521727 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 14 00:55:29.531250 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 00:55:29.540397 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 00:55:29.618286 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 00:55:29.651344 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 00:55:29.660168 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 00:55:29.666929 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 00:55:29.667029 systemd[1]: Reached target paths.target - Path Units. Jan 14 00:55:29.676743 systemd[1]: Reached target timers.target - Timer Units. Jan 14 00:55:29.706161 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 00:55:29.728907 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 00:55:29.741084 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 00:55:29.750536 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 00:55:29.759567 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 00:55:29.783954 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Jan 14 00:55:29.796347 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 14 00:55:29.807378 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 00:55:29.817165 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 00:55:29.823787 systemd[1]: Reached target basic.target - Basic System. Jan 14 00:55:29.832052 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 00:55:29.832101 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 00:55:29.835434 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 00:55:29.848208 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 14 00:55:29.862720 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 00:55:29.884923 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 00:55:29.903900 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 00:55:29.925130 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 00:55:29.934524 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 00:55:29.935753 jq[1565]: false Jan 14 00:55:29.939767 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 14 00:55:29.954242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:55:30.107510 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 00:55:30.377035 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 14 00:55:30.454615 extend-filesystems[1566]: Found /dev/vda6 Jan 14 00:55:30.468806 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Refreshing passwd entry cache Jan 14 00:55:30.468265 oslogin_cache_refresh[1567]: Refreshing passwd entry cache Jan 14 00:55:30.469281 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 00:55:30.478723 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 00:55:30.480988 extend-filesystems[1566]: Found /dev/vda9 Jan 14 00:55:30.519059 extend-filesystems[1566]: Checking size of /dev/vda9 Jan 14 00:55:30.519707 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 00:55:30.537702 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Failure getting users, quitting Jan 14 00:55:30.537822 oslogin_cache_refresh[1567]: Failure getting users, quitting Jan 14 00:55:30.538028 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 00:55:30.538078 oslogin_cache_refresh[1567]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 00:55:30.538330 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Refreshing group entry cache Jan 14 00:55:30.538385 oslogin_cache_refresh[1567]: Refreshing group entry cache Jan 14 00:55:30.559379 extend-filesystems[1566]: Resized partition /dev/vda9 Jan 14 00:55:30.568397 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 00:55:30.575907 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Failure getting groups, quitting Jan 14 00:55:30.575907 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Jan 14 00:55:30.576103 extend-filesystems[1588]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 00:55:30.593099 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 14 00:55:30.570531 oslogin_cache_refresh[1567]: Failure getting groups, quitting Jan 14 00:55:30.581058 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 00:55:30.570557 oslogin_cache_refresh[1567]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 00:55:30.609614 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 00:55:30.633933 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 00:55:30.643259 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 00:55:30.810194 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 14 00:55:30.845021 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 00:55:30.858833 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 14 00:55:30.890777 jq[1594]: true Jan 14 00:55:30.859595 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 00:55:30.860190 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 14 00:55:30.860722 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 14 00:55:30.890198 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 00:55:30.891078 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 14 00:55:30.899582 extend-filesystems[1588]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 14 00:55:30.899582 extend-filesystems[1588]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 14 00:55:30.899582 extend-filesystems[1588]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 14 00:55:30.959718 extend-filesystems[1566]: Resized filesystem in /dev/vda9 Jan 14 00:55:30.902196 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 00:55:30.918694 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 00:55:30.970216 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 00:55:30.984544 update_engine[1593]: I20260114 00:55:30.983287 1593 main.cc:92] Flatcar Update Engine starting Jan 14 00:55:30.985925 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 00:55:30.987597 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 14 00:55:31.079633 jq[1614]: true Jan 14 00:55:31.095241 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 14 00:55:31.097547 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 14 00:55:31.604630 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 00:55:31.612321 systemd-logind[1589]: Watching system buttons on /dev/input/event2 (Power Button) Jan 14 00:55:31.612369 systemd-logind[1589]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 14 00:55:31.642695 systemd-logind[1589]: New seat seat0. Jan 14 00:55:31.644826 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 00:55:31.657742 tar[1613]: linux-amd64/LICENSE Jan 14 00:55:31.667985 tar[1613]: linux-amd64/helm Jan 14 00:55:31.701005 dbus-daemon[1563]: [system] SELinux support is enabled Jan 14 00:55:31.701689 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 14 00:55:31.710428 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 00:55:31.710612 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 00:55:31.755352 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 00:55:31.755661 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 00:55:31.778004 dbus-daemon[1563]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 14 00:55:31.781791 update_engine[1593]: I20260114 00:55:31.781601 1593 update_check_scheduler.cc:74] Next update check in 5m33s Jan 14 00:55:31.782158 systemd[1]: Started update-engine.service - Update Engine. Jan 14 00:55:31.792754 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 00:55:32.240605 bash[1650]: Updated "/home/core/.ssh/authorized_keys" Jan 14 00:55:32.251018 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 00:55:32.259702 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 14 00:55:32.577211 locksmithd[1649]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 00:55:32.912753 sshd_keygen[1597]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 00:55:33.460636 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 00:55:33.474987 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 14 00:55:33.531539 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 00:55:33.532130 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jan 14 00:55:33.563340 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 14 00:55:33.778080 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 14 00:55:33.792207 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 14 00:55:33.804562 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 14 00:55:33.812020 systemd[1]: Reached target getty.target - Login Prompts.
Jan 14 00:55:34.299976 containerd[1615]: time="2026-01-14T00:55:34Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 14 00:55:34.304257 containerd[1615]: time="2026-01-14T00:55:34.303310196Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Jan 14 00:55:34.386695 containerd[1615]: time="2026-01-14T00:55:34.386194295Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="300.189µs"
Jan 14 00:55:34.386695 containerd[1615]: time="2026-01-14T00:55:34.386687050Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 14 00:55:34.386963 containerd[1615]: time="2026-01-14T00:55:34.386898988Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 14 00:55:34.387008 containerd[1615]: time="2026-01-14T00:55:34.386959075Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 14 00:55:34.387398 containerd[1615]: time="2026-01-14T00:55:34.387338647Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 14 00:55:34.387555 containerd[1615]: time="2026-01-14T00:55:34.387506139Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 14 00:55:34.389918 containerd[1615]: time="2026-01-14T00:55:34.387733195Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 14 00:55:34.390018 containerd[1615]: time="2026-01-14T00:55:34.389913655Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 14 00:55:34.390759 containerd[1615]: time="2026-01-14T00:55:34.390692129Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 14 00:55:34.391910 containerd[1615]: time="2026-01-14T00:55:34.390762425Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 14 00:55:34.391960 containerd[1615]: time="2026-01-14T00:55:34.391920652Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 14 00:55:34.391960 containerd[1615]: time="2026-01-14T00:55:34.391950170Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 14 00:55:34.395973 containerd[1615]: time="2026-01-14T00:55:34.393943948Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 14 00:55:34.395973 containerd[1615]: time="2026-01-14T00:55:34.394000561Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 14 00:55:34.395973 containerd[1615]: time="2026-01-14T00:55:34.394422707Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 14 00:55:34.396300 containerd[1615]: time="2026-01-14T00:55:34.396250791Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 14 00:55:34.396384 containerd[1615]: time="2026-01-14T00:55:34.396350869Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 14 00:55:34.396384 containerd[1615]: time="2026-01-14T00:55:34.396372000Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 14 00:55:34.397669 containerd[1615]: time="2026-01-14T00:55:34.397606767Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 14 00:55:34.400969 containerd[1615]: time="2026-01-14T00:55:34.400777013Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 14 00:55:34.401028 containerd[1615]: time="2026-01-14T00:55:34.400991234Z" level=info msg="metadata content store policy set" policy=shared
Jan 14 00:55:34.422938 containerd[1615]: time="2026-01-14T00:55:34.422836570Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 14 00:55:34.423575 containerd[1615]: time="2026-01-14T00:55:34.423492138Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 14 00:55:34.424322 containerd[1615]: time="2026-01-14T00:55:34.423940984Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 14 00:55:34.424322 containerd[1615]: time="2026-01-14T00:55:34.423969755Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 14 00:55:34.424322 containerd[1615]: time="2026-01-14T00:55:34.424167038Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 14 00:55:34.424561 containerd[1615]: time="2026-01-14T00:55:34.424375863Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 14 00:55:34.424561 containerd[1615]: time="2026-01-14T00:55:34.424404018Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 14 00:55:34.424561 containerd[1615]: time="2026-01-14T00:55:34.424419813Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 14 00:55:34.424561 containerd[1615]: time="2026-01-14T00:55:34.424533441Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 14 00:55:34.424692 containerd[1615]: time="2026-01-14T00:55:34.424650042Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 14 00:55:34.424692 containerd[1615]: time="2026-01-14T00:55:34.424676650Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 14 00:55:34.425436 containerd[1615]: time="2026-01-14T00:55:34.424802547Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 14 00:55:34.425436 containerd[1615]: time="2026-01-14T00:55:34.424910173Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 14 00:55:34.425436 containerd[1615]: time="2026-01-14T00:55:34.425043932Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 14 00:55:34.430650 containerd[1615]: time="2026-01-14T00:55:34.425918340Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 14 00:55:34.430650 containerd[1615]: time="2026-01-14T00:55:34.426149529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 14 00:55:34.430650 containerd[1615]: time="2026-01-14T00:55:34.426404799Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 14 00:55:34.430650 containerd[1615]: time="2026-01-14T00:55:34.426520793Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 14 00:55:34.430650 containerd[1615]: time="2026-01-14T00:55:34.426551333Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 14 00:55:34.430650 containerd[1615]: time="2026-01-14T00:55:34.426646821Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 14 00:55:34.430650 containerd[1615]: time="2026-01-14T00:55:34.426672884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 14 00:55:34.430650 containerd[1615]: time="2026-01-14T00:55:34.426859142Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 14 00:55:34.430650 containerd[1615]: time="2026-01-14T00:55:34.426979158Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 14 00:55:34.430650 containerd[1615]: time="2026-01-14T00:55:34.427071958Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 14 00:55:34.430650 containerd[1615]: time="2026-01-14T00:55:34.427379910Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 14 00:55:34.430650 containerd[1615]: time="2026-01-14T00:55:34.427535912Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 14 00:55:34.430650 containerd[1615]: time="2026-01-14T00:55:34.428350545Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 14 00:55:34.430650 containerd[1615]: time="2026-01-14T00:55:34.428536833Z" level=info msg="Start snapshots syncer"
Jan 14 00:55:34.430650 containerd[1615]: time="2026-01-14T00:55:34.428751367Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 14 00:55:34.433325 containerd[1615]: time="2026-01-14T00:55:34.429639529Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 14 00:55:34.433325 containerd[1615]: time="2026-01-14T00:55:34.429842462Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 14 00:55:34.435510 containerd[1615]: time="2026-01-14T00:55:34.430012127Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 14 00:55:34.435510 containerd[1615]: time="2026-01-14T00:55:34.430269702Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 14 00:55:34.435510 containerd[1615]: time="2026-01-14T00:55:34.431416489Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 14 00:55:34.435510 containerd[1615]: time="2026-01-14T00:55:34.431544518Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 14 00:55:34.435510 containerd[1615]: time="2026-01-14T00:55:34.431573673Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 14 00:55:34.435510 containerd[1615]: time="2026-01-14T00:55:34.431693749Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 14 00:55:34.435510 containerd[1615]: time="2026-01-14T00:55:34.431785226Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 14 00:55:34.435510 containerd[1615]: time="2026-01-14T00:55:34.431804326Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 14 00:55:34.435510 containerd[1615]: time="2026-01-14T00:55:34.431903686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 14 00:55:34.435510 containerd[1615]: time="2026-01-14T00:55:34.431927111Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 14 00:55:34.435510 containerd[1615]: time="2026-01-14T00:55:34.432413702Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 14 00:55:34.435510 containerd[1615]: time="2026-01-14T00:55:34.432547956Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 14 00:55:34.435510 containerd[1615]: time="2026-01-14T00:55:34.432567005Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 14 00:55:34.437021 containerd[1615]: time="2026-01-14T00:55:34.432797669Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 14 00:55:34.437021 containerd[1615]: time="2026-01-14T00:55:34.432815171Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 14 00:55:34.437021 containerd[1615]: time="2026-01-14T00:55:34.432833201Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 14 00:55:34.437021 containerd[1615]: time="2026-01-14T00:55:34.432875442Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 14 00:55:34.437021 containerd[1615]: time="2026-01-14T00:55:34.432990162Z" level=info msg="runtime interface created"
Jan 14 00:55:34.437021 containerd[1615]: time="2026-01-14T00:55:34.433004987Z" level=info msg="created NRI interface"
Jan 14 00:55:34.437021 containerd[1615]: time="2026-01-14T00:55:34.433019722Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 14 00:55:34.437021 containerd[1615]: time="2026-01-14T00:55:34.433041177Z" level=info msg="Connect containerd service"
Jan 14 00:55:34.437021 containerd[1615]: time="2026-01-14T00:55:34.433095555Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 14 00:55:34.437769 containerd[1615]: time="2026-01-14T00:55:34.437163401Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 14 00:55:35.290653 tar[1613]: linux-amd64/README.md
Jan 14 00:55:35.528241 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 14 00:55:35.592089 containerd[1615]: time="2026-01-14T00:55:35.590301832Z" level=info msg="Start subscribing containerd event"
Jan 14 00:55:35.592089 containerd[1615]: time="2026-01-14T00:55:35.590785707Z" level=info msg="Start recovering state"
Jan 14 00:55:35.592089 containerd[1615]: time="2026-01-14T00:55:35.591520319Z" level=info msg="Start event monitor"
Jan 14 00:55:35.592089 containerd[1615]: time="2026-01-14T00:55:35.591568533Z" level=info msg="Start cni network conf syncer for default"
Jan 14 00:55:35.592089 containerd[1615]: time="2026-01-14T00:55:35.591604904Z" level=info msg="Start streaming server"
Jan 14 00:55:35.592089 containerd[1615]: time="2026-01-14T00:55:35.591678223Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 14 00:55:35.592089 containerd[1615]: time="2026-01-14T00:55:35.591711919Z" level=info msg="runtime interface starting up..."
Jan 14 00:55:35.592089 containerd[1615]: time="2026-01-14T00:55:35.591741548Z" level=info msg="starting plugins..."
Jan 14 00:55:35.592089 containerd[1615]: time="2026-01-14T00:55:35.591817522Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 14 00:55:35.593321 containerd[1615]: time="2026-01-14T00:55:35.593278461Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 14 00:55:35.593565 containerd[1615]: time="2026-01-14T00:55:35.593540371Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 14 00:55:35.599726 containerd[1615]: time="2026-01-14T00:55:35.594780028Z" level=info msg="containerd successfully booted in 1.463262s"
Jan 14 00:55:35.595089 systemd[1]: Started containerd.service - containerd container runtime.
Jan 14 00:55:36.890912 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 00:55:36.901009 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 14 00:55:36.910828 systemd[1]: Startup finished in 13.024s (kernel) + 18.843s (initrd) + 27.257s (userspace) = 59.125s.
Jan 14 00:55:36.916096 (kubelet)[1701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 00:55:37.216284 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 14 00:55:37.220995 systemd[1]: Started sshd@0-10.0.0.55:22-10.0.0.1:58432.service - OpenSSH per-connection server daemon (10.0.0.1:58432).
Jan 14 00:55:37.513820 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 58432 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:55:37.525101 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:55:37.558504 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 14 00:55:37.563929 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 14 00:55:37.569166 systemd-logind[1589]: New session 1 of user core.
Jan 14 00:55:37.615430 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 14 00:55:37.628932 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 14 00:55:37.678782 (systemd)[1718]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:55:37.691822 systemd-logind[1589]: New session 2 of user core.
Jan 14 00:55:38.001943 systemd[1718]: Queued start job for default target default.target.
Jan 14 00:55:38.021039 systemd[1718]: Created slice app.slice - User Application Slice.
Jan 14 00:55:38.021156 systemd[1718]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Jan 14 00:55:38.021179 systemd[1718]: Reached target paths.target - Paths.
Jan 14 00:55:38.021511 systemd[1718]: Reached target timers.target - Timers.
Jan 14 00:55:38.025496 systemd[1718]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 14 00:55:38.028649 systemd[1718]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Jan 14 00:55:38.069661 systemd[1718]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Jan 14 00:55:38.078511 systemd[1718]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 14 00:55:38.078763 systemd[1718]: Reached target sockets.target - Sockets.
Jan 14 00:55:38.079022 systemd[1718]: Reached target basic.target - Basic System.
Jan 14 00:55:38.079314 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 14 00:55:38.083780 kubelet[1701]: E0114 00:55:38.083685 1701 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 00:55:38.084149 systemd[1718]: Reached target default.target - Main User Target.
Jan 14 00:55:38.084253 systemd[1718]: Startup finished in 370ms.
Jan 14 00:55:38.093859 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 14 00:55:38.095221 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 00:55:38.095632 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 00:55:38.096200 systemd[1]: kubelet.service: Consumed 2.935s CPU time, 259M memory peak.
Jan 14 00:55:38.142171 systemd[1]: Started sshd@1-10.0.0.55:22-10.0.0.1:58434.service - OpenSSH per-connection server daemon (10.0.0.1:58434).
Jan 14 00:55:38.253839 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 58434 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:55:38.255803 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:55:38.275327 systemd-logind[1589]: New session 3 of user core.
Jan 14 00:55:38.286884 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 14 00:55:38.350873 sshd[1739]: Connection closed by 10.0.0.1 port 58434
Jan 14 00:55:38.353338 sshd-session[1735]: pam_unix(sshd:session): session closed for user core
Jan 14 00:55:38.371426 systemd[1]: sshd@1-10.0.0.55:22-10.0.0.1:58434.service: Deactivated successfully.
Jan 14 00:55:38.378083 systemd[1]: session-3.scope: Deactivated successfully.
Jan 14 00:55:38.380364 systemd-logind[1589]: Session 3 logged out. Waiting for processes to exit.
Jan 14 00:55:38.388350 systemd[1]: Started sshd@2-10.0.0.55:22-10.0.0.1:58442.service - OpenSSH per-connection server daemon (10.0.0.1:58442).
Jan 14 00:55:38.390750 systemd-logind[1589]: Removed session 3.
Jan 14 00:55:38.515779 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 58442 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:55:38.516102 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:55:38.545359 systemd-logind[1589]: New session 4 of user core.
Jan 14 00:55:38.557961 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 14 00:55:38.581695 sshd[1749]: Connection closed by 10.0.0.1 port 58442
Jan 14 00:55:38.584323 sshd-session[1745]: pam_unix(sshd:session): session closed for user core
Jan 14 00:55:38.599290 systemd[1]: sshd@2-10.0.0.55:22-10.0.0.1:58442.service: Deactivated successfully.
Jan 14 00:55:38.603339 systemd[1]: session-4.scope: Deactivated successfully.
Jan 14 00:55:38.608391 systemd-logind[1589]: Session 4 logged out. Waiting for processes to exit.
Jan 14 00:55:38.617927 systemd[1]: Started sshd@3-10.0.0.55:22-10.0.0.1:58444.service - OpenSSH per-connection server daemon (10.0.0.1:58444).
Jan 14 00:55:38.620354 systemd-logind[1589]: Removed session 4.
Jan 14 00:55:38.735786 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 58444 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:55:38.736680 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:55:38.758088 systemd-logind[1589]: New session 5 of user core.
Jan 14 00:55:38.782691 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 14 00:55:38.829261 sshd[1759]: Connection closed by 10.0.0.1 port 58444
Jan 14 00:55:38.830686 sshd-session[1755]: pam_unix(sshd:session): session closed for user core
Jan 14 00:55:38.854067 systemd[1]: sshd@3-10.0.0.55:22-10.0.0.1:58444.service: Deactivated successfully.
Jan 14 00:55:38.858980 systemd[1]: session-5.scope: Deactivated successfully.
Jan 14 00:55:38.867029 systemd-logind[1589]: Session 5 logged out. Waiting for processes to exit.
Jan 14 00:55:38.873515 systemd[1]: Started sshd@4-10.0.0.55:22-10.0.0.1:58446.service - OpenSSH per-connection server daemon (10.0.0.1:58446).
Jan 14 00:55:38.874547 systemd-logind[1589]: Removed session 5.
Jan 14 00:55:38.998686 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 58446 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:55:39.005788 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:55:39.040125 systemd-logind[1589]: New session 6 of user core.
Jan 14 00:55:39.051601 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 14 00:55:39.127955 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 14 00:55:39.128549 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 00:55:40.099255 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 14 00:55:40.130681 (dockerd)[1792]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 14 00:55:40.653125 dockerd[1792]: time="2026-01-14T00:55:40.652872781Z" level=info msg="Starting up"
Jan 14 00:55:40.654430 dockerd[1792]: time="2026-01-14T00:55:40.654393454Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 14 00:55:40.681036 dockerd[1792]: time="2026-01-14T00:55:40.680914892Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 14 00:55:40.869547 dockerd[1792]: time="2026-01-14T00:55:40.868903596Z" level=info msg="Loading containers: start."
Jan 14 00:55:41.471313 kernel: Initializing XFRM netlink socket
Jan 14 00:55:44.509659 systemd-networkd[1508]: docker0: Link UP
Jan 14 00:55:44.633030 dockerd[1792]: time="2026-01-14T00:55:44.632654088Z" level=info msg="Loading containers: done."
Jan 14 00:55:44.747766 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3192875044-merged.mount: Deactivated successfully.
Jan 14 00:55:44.749283 dockerd[1792]: time="2026-01-14T00:55:44.749194225Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 14 00:55:44.749396 dockerd[1792]: time="2026-01-14T00:55:44.749296458Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 14 00:55:44.749520 dockerd[1792]: time="2026-01-14T00:55:44.749487671Z" level=info msg="Initializing buildkit"
Jan 14 00:55:44.835865 dockerd[1792]: time="2026-01-14T00:55:44.835725024Z" level=info msg="Completed buildkit initialization"
Jan 14 00:55:44.847099 dockerd[1792]: time="2026-01-14T00:55:44.846849281Z" level=info msg="Daemon has completed initialization"
Jan 14 00:55:44.847099 dockerd[1792]: time="2026-01-14T00:55:44.847257449Z" level=info msg="API listen on /run/docker.sock"
Jan 14 00:55:44.848607 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 14 00:55:47.821358 containerd[1615]: time="2026-01-14T00:55:47.820957183Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\""
Jan 14 00:55:48.280253 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 14 00:55:48.285020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 00:55:49.608408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3314130978.mount: Deactivated successfully.
Jan 14 00:55:50.126652 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 00:55:50.141519 (kubelet)[2024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 00:55:50.252831 kubelet[2024]: E0114 00:55:50.252667 2024 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 00:55:50.260423 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 00:55:50.262181 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 00:55:50.263786 systemd[1]: kubelet.service: Consumed 1.368s CPU time, 110.5M memory peak.
Jan 14 00:55:51.646899 containerd[1615]: time="2026-01-14T00:55:51.646785399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:55:51.649989 containerd[1615]: time="2026-01-14T00:55:51.649870992Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=26225543"
Jan 14 00:55:51.651794 containerd[1615]: time="2026-01-14T00:55:51.651709814Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:55:51.655659 containerd[1615]: time="2026-01-14T00:55:51.655555786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:55:51.656820 containerd[1615]: time="2026-01-14T00:55:51.656719395Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 3.835316281s"
Jan 14 00:55:51.656886 containerd[1615]: time="2026-01-14T00:55:51.656843116Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\""
Jan 14 00:55:51.659720 containerd[1615]: time="2026-01-14T00:55:51.659664430Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Jan 14 00:55:54.386157 containerd[1615]: time="2026-01-14T00:55:54.385902073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:55:54.387911 containerd[1615]: time="2026-01-14T00:55:54.387852152Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285"
Jan 14 00:55:54.389431 containerd[1615]: time="2026-01-14T00:55:54.389313497Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:55:54.395324 containerd[1615]: time="2026-01-14T00:55:54.395152059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:55:54.396256 containerd[1615]: time="2026-01-14T00:55:54.396195482Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 2.736489872s"
Jan 14 00:55:54.396256 containerd[1615]: time="2026-01-14T00:55:54.396236394Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\""
Jan 14 00:55:54.400770 containerd[1615]: time="2026-01-14T00:55:54.400681738Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Jan 14 00:55:55.597587 containerd[1615]: time="2026-01-14T00:55:55.597063206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:55:55.601628 containerd[1615]: time="2026-01-14T00:55:55.598385437Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=0"
Jan 14 00:55:55.603841 containerd[1615]: time="2026-01-14T00:55:55.603687886Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:55:55.632553 containerd[1615]: time="2026-01-14T00:55:55.632106065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:55:55.633735 containerd[1615]: time="2026-01-14T00:55:55.633401085Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.232655981s"
Jan 14 00:55:55.633735 containerd[1615]: time="2026-01-14T00:55:55.633529832Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\""
Jan 14 00:55:55.635726 containerd[1615]: time="2026-01-14T00:55:55.635644214Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Jan 14 00:55:56.740641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount443712627.mount: Deactivated successfully.
Jan 14 00:55:57.195510 containerd[1615]: time="2026-01-14T00:55:57.195324004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:55:57.196728 containerd[1615]: time="2026-01-14T00:55:57.196622559Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=0"
Jan 14 00:55:57.198402 containerd[1615]: time="2026-01-14T00:55:57.198332940Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:55:57.201631 containerd[1615]: time="2026-01-14T00:55:57.201535245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:55:57.202486 containerd[1615]: time="2026-01-14T00:55:57.202354331Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.566644792s"
Jan 14 00:55:57.202486 containerd[1615]:
time="2026-01-14T00:55:57.202415139Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 14 00:55:57.204866 containerd[1615]: time="2026-01-14T00:55:57.204769761Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 14 00:55:58.070116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2361645519.mount: Deactivated successfully. Jan 14 00:56:00.297743 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 00:56:00.445173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:56:01.531789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:56:01.843966 (kubelet)[2161]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:56:01.967869 containerd[1615]: time="2026-01-14T00:56:01.967717183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:56:01.969627 containerd[1615]: time="2026-01-14T00:56:01.969562426Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21568586" Jan 14 00:56:02.002922 containerd[1615]: time="2026-01-14T00:56:02.000496052Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:56:02.156531 containerd[1615]: time="2026-01-14T00:56:02.155820156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:56:02.158865 containerd[1615]: time="2026-01-14T00:56:02.158736630Z" level=info 
msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 4.953849727s" Jan 14 00:56:02.159001 containerd[1615]: time="2026-01-14T00:56:02.158860930Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 14 00:56:02.162344 containerd[1615]: time="2026-01-14T00:56:02.162307539Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 14 00:56:02.241191 kubelet[2161]: E0114 00:56:02.240970 2161 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:56:02.247839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:56:02.248237 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:56:02.249057 systemd[1]: kubelet.service: Consumed 1.654s CPU time, 110.6M memory peak. Jan 14 00:56:03.008201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2254383394.mount: Deactivated successfully. 
Jan 14 00:56:03.076968 containerd[1615]: time="2026-01-14T00:56:03.076031632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:56:03.096295 containerd[1615]: time="2026-01-14T00:56:03.095168418Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 14 00:56:03.103665 containerd[1615]: time="2026-01-14T00:56:03.103192903Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:56:03.128797 containerd[1615]: time="2026-01-14T00:56:03.128373105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:56:03.133049 containerd[1615]: time="2026-01-14T00:56:03.130147512Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 967.577924ms" Jan 14 00:56:03.136743 containerd[1615]: time="2026-01-14T00:56:03.133704526Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 14 00:56:03.143368 containerd[1615]: time="2026-01-14T00:56:03.142135172Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 14 00:56:04.421271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1359717613.mount: Deactivated successfully. Jan 14 00:56:12.352088 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jan 14 00:56:12.362042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:56:13.532107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:56:13.563392 (kubelet)[2237]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:56:14.795496 kubelet[2237]: E0114 00:56:14.795335 2237 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:56:14.801050 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:56:14.801370 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:56:14.803240 systemd[1]: kubelet.service: Consumed 1.910s CPU time, 110.4M memory peak. 
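The repeated kubelet exits above (restart counters 2 and 3, always status=1/FAILURE) all trace to the same cause: /var/lib/kubelet/config.yaml does not exist yet. That file is normally written during cluster bootstrap (for example by `kubeadm init` or `kubeadm join`), so a crash-loop like this is expected on a node that has not been initialized. A minimal sketch of the check the kubelet is failing, assuming nothing beyond the path shown in the log:

```python
from pathlib import Path

def kubelet_config_state(path: str = "/var/lib/kubelet/config.yaml") -> str:
    """Report whether the kubelet config file named in the log exists yet."""
    if Path(path).is_file():
        return "present"
    # Matches the failure mode in the log:
    # "open /var/lib/kubelet/config.yaml: no such file or directory"
    return "missing: kubelet exits 1 until bootstrap writes this file"

print(kubelet_config_state())
```

Once bootstrap generates the file, the next scheduled restart succeeds without any manual intervention, which is exactly what the later entries (kubelet[2388] starting and running) show.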
Jan 14 00:56:14.823514 containerd[1615]: time="2026-01-14T00:56:14.823344141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:56:14.827083 containerd[1615]: time="2026-01-14T00:56:14.826882798Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=72348001" Jan 14 00:56:14.828900 containerd[1615]: time="2026-01-14T00:56:14.828781160Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:56:14.834149 containerd[1615]: time="2026-01-14T00:56:14.833845692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:56:14.836422 containerd[1615]: time="2026-01-14T00:56:14.835898802Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 11.693650294s" Jan 14 00:56:14.836422 containerd[1615]: time="2026-01-14T00:56:14.835955786Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 14 00:56:16.826683 update_engine[1593]: I20260114 00:56:16.824258 1593 update_attempter.cc:509] Updating boot flags... Jan 14 00:56:24.892712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 14 00:56:24.900578 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 00:56:24.936646 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 00:56:24.936826 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 00:56:24.937840 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:56:24.943290 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:56:25.007822 systemd[1]: Reload requested from client PID 2295 ('systemctl') (unit session-6.scope)... Jan 14 00:56:25.007872 systemd[1]: Reloading... Jan 14 00:56:25.267052 zram_generator::config[2341]: No configuration found. Jan 14 00:56:25.875163 systemd[1]: Reloading finished in 866 ms. Jan 14 00:56:26.024696 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 00:56:26.025288 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 00:56:26.025898 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:56:26.025971 systemd[1]: kubelet.service: Consumed 219ms CPU time, 98.4M memory peak. Jan 14 00:56:26.041882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:56:26.438581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:56:26.467314 (kubelet)[2388]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 00:56:26.573583 kubelet[2388]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 00:56:26.573583 kubelet[2388]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 14 00:56:26.574105 kubelet[2388]: I0114 00:56:26.573657 2388 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 00:56:27.445772 kubelet[2388]: I0114 00:56:27.445694 2388 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 14 00:56:27.445772 kubelet[2388]: I0114 00:56:27.445734 2388 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 00:56:27.445772 kubelet[2388]: I0114 00:56:27.445782 2388 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 14 00:56:27.445988 kubelet[2388]: I0114 00:56:27.445794 2388 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 00:56:27.446871 kubelet[2388]: I0114 00:56:27.446033 2388 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 00:56:27.530640 kubelet[2388]: E0114 00:56:27.530549 2388 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 00:56:27.531761 kubelet[2388]: I0114 00:56:27.531694 2388 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 00:56:27.544092 kubelet[2388]: I0114 00:56:27.543863 2388 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 00:56:27.560806 kubelet[2388]: I0114 00:56:27.559635 2388 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 14 00:56:27.560806 kubelet[2388]: I0114 00:56:27.560076 2388 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 00:56:27.560806 kubelet[2388]: I0114 00:56:27.560099 2388 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 00:56:27.560806 kubelet[2388]: I0114 00:56:27.560272 2388 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 00:56:27.561194 
kubelet[2388]: I0114 00:56:27.560281 2388 container_manager_linux.go:306] "Creating device plugin manager" Jan 14 00:56:27.561194 kubelet[2388]: I0114 00:56:27.560395 2388 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 14 00:56:27.566160 kubelet[2388]: I0114 00:56:27.566073 2388 state_mem.go:36] "Initialized new in-memory state store" Jan 14 00:56:27.566630 kubelet[2388]: I0114 00:56:27.566424 2388 kubelet.go:475] "Attempting to sync node with API server" Jan 14 00:56:27.566630 kubelet[2388]: I0114 00:56:27.566566 2388 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 00:56:27.566630 kubelet[2388]: I0114 00:56:27.566605 2388 kubelet.go:387] "Adding apiserver pod source" Jan 14 00:56:27.566779 kubelet[2388]: I0114 00:56:27.566667 2388 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 00:56:27.570864 kubelet[2388]: E0114 00:56:27.570779 2388 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 00:56:27.571526 kubelet[2388]: E0114 00:56:27.571352 2388 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 00:56:27.575044 kubelet[2388]: I0114 00:56:27.574981 2388 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 00:56:27.575651 kubelet[2388]: I0114 00:56:27.575625 2388 kubelet.go:940] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 00:56:27.575696 kubelet[2388]: I0114 00:56:27.575669 2388 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 14 00:56:27.575785 kubelet[2388]: W0114 00:56:27.575737 2388 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 14 00:56:27.581258 kubelet[2388]: I0114 00:56:27.581177 2388 server.go:1262] "Started kubelet" Jan 14 00:56:27.581669 kubelet[2388]: I0114 00:56:27.581631 2388 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 00:56:27.583480 kubelet[2388]: I0114 00:56:27.582081 2388 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 14 00:56:27.585862 kubelet[2388]: I0114 00:56:27.585842 2388 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 00:56:27.586083 kubelet[2388]: I0114 00:56:27.583116 2388 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 00:56:27.586270 kubelet[2388]: I0114 00:56:27.581806 2388 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 00:56:27.588222 kubelet[2388]: I0114 00:56:27.582938 2388 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 00:56:27.591178 kubelet[2388]: I0114 00:56:27.591154 2388 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 14 00:56:27.591391 kubelet[2388]: I0114 00:56:27.591370 2388 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 14 00:56:27.591645 kubelet[2388]: I0114 00:56:27.591628 2388 reconciler.go:29] "Reconciler: start to sync state" Jan 14 00:56:27.592154 kubelet[2388]: I0114 00:56:27.591974 
2388 server.go:310] "Adding debug handlers to kubelet server" Jan 14 00:56:27.594649 kubelet[2388]: E0114 00:56:27.593937 2388 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 00:56:27.594649 kubelet[2388]: E0114 00:56:27.588208 2388 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.55:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.55:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188a72f20a8ea756 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-14 00:56:27.581114198 +0000 UTC m=+1.103042562,LastTimestamp:2026-01-14 00:56:27.581114198 +0000 UTC m=+1.103042562,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 14 00:56:27.595321 kubelet[2388]: E0114 00:56:27.595133 2388 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:56:27.596407 kubelet[2388]: E0114 00:56:27.596335 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="200ms" Jan 14 00:56:27.597190 kubelet[2388]: E0114 00:56:27.597122 2388 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 00:56:27.600402 kubelet[2388]: I0114 00:56:27.600337 2388 factory.go:223] Registration of the containerd container factory successfully Jan 14 00:56:27.600402 kubelet[2388]: I0114 00:56:27.600388 2388 factory.go:223] Registration of the systemd container factory successfully Jan 14 00:56:27.600812 kubelet[2388]: I0114 00:56:27.600746 2388 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 00:56:27.634859 kubelet[2388]: I0114 00:56:27.634747 2388 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 00:56:27.634859 kubelet[2388]: I0114 00:56:27.634798 2388 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 00:56:27.634859 kubelet[2388]: I0114 00:56:27.634822 2388 state_mem.go:36] "Initialized new in-memory state store" Jan 14 00:56:27.637726 kubelet[2388]: I0114 00:56:27.637681 2388 policy_none.go:49] "None policy: Start" Jan 14 00:56:27.637726 kubelet[2388]: I0114 00:56:27.637723 2388 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 14 00:56:27.637874 kubelet[2388]: I0114 00:56:27.637738 2388 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 14 00:56:27.639857 kubelet[2388]: I0114 00:56:27.639603 2388 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 14 00:56:27.641714 kubelet[2388]: I0114 00:56:27.641670 2388 policy_none.go:47] "Start" Jan 14 00:56:27.644010 kubelet[2388]: I0114 00:56:27.643929 2388 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 14 00:56:27.644010 kubelet[2388]: I0114 00:56:27.643978 2388 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 14 00:56:27.644010 kubelet[2388]: I0114 00:56:27.644005 2388 kubelet.go:2427] "Starting kubelet main sync loop" Jan 14 00:56:27.644603 kubelet[2388]: E0114 00:56:27.644148 2388 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 00:56:27.648790 kubelet[2388]: E0114 00:56:27.648705 2388 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 00:56:27.652498 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 00:56:27.674754 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 00:56:27.680818 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 14 00:56:27.692067 kubelet[2388]: E0114 00:56:27.691930 2388 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 00:56:27.692549 kubelet[2388]: I0114 00:56:27.692217 2388 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 00:56:27.692549 kubelet[2388]: I0114 00:56:27.692252 2388 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 00:56:27.693617 kubelet[2388]: I0114 00:56:27.692893 2388 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 00:56:27.694918 kubelet[2388]: E0114 00:56:27.694858 2388 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 00:56:27.694989 kubelet[2388]: E0114 00:56:27.694927 2388 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 14 00:56:27.780601 systemd[1]: Created slice kubepods-burstable-pod64a03ddc0c6a33344aaef0ab4a62382d.slice - libcontainer container kubepods-burstable-pod64a03ddc0c6a33344aaef0ab4a62382d.slice. 
Jan 14 00:56:27.795896 kubelet[2388]: I0114 00:56:27.794318 2388 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 00:56:27.796263 kubelet[2388]: E0114 00:56:27.796215 2388 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Jan 14 00:56:27.798012 kubelet[2388]: E0114 00:56:27.797871 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="400ms" Jan 14 00:56:27.806999 kubelet[2388]: E0114 00:56:27.806267 2388 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:56:27.813376 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Jan 14 00:56:27.818547 kubelet[2388]: E0114 00:56:27.818414 2388 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:56:27.824406 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. 
Jan 14 00:56:27.830664 kubelet[2388]: E0114 00:56:27.830579 2388 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:56:27.894350 kubelet[2388]: I0114 00:56:27.893904 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 00:56:27.894350 kubelet[2388]: I0114 00:56:27.893993 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 00:56:27.894350 kubelet[2388]: I0114 00:56:27.894031 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 00:56:27.894350 kubelet[2388]: I0114 00:56:27.894070 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 14 00:56:27.894350 kubelet[2388]: I0114 00:56:27.894106 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/64a03ddc0c6a33344aaef0ab4a62382d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"64a03ddc0c6a33344aaef0ab4a62382d\") " pod="kube-system/kube-apiserver-localhost" Jan 14 00:56:27.895237 kubelet[2388]: I0114 00:56:27.894130 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64a03ddc0c6a33344aaef0ab4a62382d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"64a03ddc0c6a33344aaef0ab4a62382d\") " pod="kube-system/kube-apiserver-localhost" Jan 14 00:56:27.895682 kubelet[2388]: I0114 00:56:27.895428 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 00:56:27.895682 kubelet[2388]: I0114 00:56:27.895577 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 00:56:27.895682 kubelet[2388]: I0114 00:56:27.895602 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64a03ddc0c6a33344aaef0ab4a62382d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"64a03ddc0c6a33344aaef0ab4a62382d\") " pod="kube-system/kube-apiserver-localhost" Jan 14 00:56:28.001223 kubelet[2388]: I0114 00:56:28.001138 2388 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 00:56:28.002189 kubelet[2388]: E0114 
00:56:28.002078 2388 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Jan 14 00:56:28.115531 kubelet[2388]: E0114 00:56:28.113785 2388 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:56:28.115683 containerd[1615]: time="2026-01-14T00:56:28.115405403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:64a03ddc0c6a33344aaef0ab4a62382d,Namespace:kube-system,Attempt:0,}" Jan 14 00:56:28.123831 kubelet[2388]: E0114 00:56:28.123714 2388 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:56:28.124757 containerd[1615]: time="2026-01-14T00:56:28.124563021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 14 00:56:28.136973 kubelet[2388]: E0114 00:56:28.136675 2388 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:56:28.138051 containerd[1615]: time="2026-01-14T00:56:28.137943528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 14 00:56:28.200067 kubelet[2388]: E0114 00:56:28.198871 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="800ms" Jan 14 00:56:28.405689 kubelet[2388]: 
I0114 00:56:28.404986 2388 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 00:56:28.405689 kubelet[2388]: E0114 00:56:28.405543 2388 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Jan 14 00:56:28.441034 kubelet[2388]: E0114 00:56:28.440927 2388 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 00:56:28.520873 kubelet[2388]: E0114 00:56:28.517550 2388 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 00:56:28.693052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1226890397.mount: Deactivated successfully. 
Jan 14 00:56:28.798866 containerd[1615]: time="2026-01-14T00:56:28.789827107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 00:56:28.805659 containerd[1615]: time="2026-01-14T00:56:28.805560034Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 00:56:28.822931 containerd[1615]: time="2026-01-14T00:56:28.822090626Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 00:56:28.841523 containerd[1615]: time="2026-01-14T00:56:28.840312307Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 00:56:28.843976 containerd[1615]: time="2026-01-14T00:56:28.843885720Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 00:56:28.846142 containerd[1615]: time="2026-01-14T00:56:28.845974463Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 00:56:28.848586 containerd[1615]: time="2026-01-14T00:56:28.848201757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 00:56:28.849884 containerd[1615]: time="2026-01-14T00:56:28.849702911Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 724.828781ms" Jan 14 00:56:28.852565 containerd[1615]: time="2026-01-14T00:56:28.850931088Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 00:56:28.855567 containerd[1615]: time="2026-01-14T00:56:28.855119043Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 721.207943ms" Jan 14 00:56:28.869574 containerd[1615]: time="2026-01-14T00:56:28.869213197Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 727.116763ms" Jan 14 00:56:28.960033 kubelet[2388]: E0114 00:56:28.957093 2388 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 00:56:28.990554 containerd[1615]: time="2026-01-14T00:56:28.989557114Z" level=info msg="connecting to shim de685e35d0aaf60199c7fa38b0671b8d9d907a86472fb4f5415674bedc22b240" address="unix:///run/containerd/s/5b3ebb16bf69d4347982ca39ed305bc22c7cf2f7fc8ad6245e64b7cf9bcc1408" namespace=k8s.io protocol=ttrpc version=3 Jan 14 
00:56:28.999401 containerd[1615]: time="2026-01-14T00:56:28.999244825Z" level=info msg="connecting to shim 935a26e14e7a0f7bea916ebcf8ba3b97eedd32a94439d907e9d6e03aad9acf61" address="unix:///run/containerd/s/e7bd30eef01b18a23342e9c93f8707fa13cd50ff0c14d88910df21670c53fb23" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:56:29.001229 kubelet[2388]: E0114 00:56:29.001122 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="1.6s" Jan 14 00:56:29.008763 containerd[1615]: time="2026-01-14T00:56:29.007812981Z" level=info msg="connecting to shim 72cfd9a8914d2268e36ffcbaa1d4d9bcbabbe4c76f6cbd4582bc69f499ef607e" address="unix:///run/containerd/s/fce6de3202811d099d26a69f5b14954b1ec516b82124392a8ae02ac14fbf577f" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:56:29.133402 systemd[1]: Started cri-containerd-935a26e14e7a0f7bea916ebcf8ba3b97eedd32a94439d907e9d6e03aad9acf61.scope - libcontainer container 935a26e14e7a0f7bea916ebcf8ba3b97eedd32a94439d907e9d6e03aad9acf61. Jan 14 00:56:29.151970 systemd[1]: Started cri-containerd-de685e35d0aaf60199c7fa38b0671b8d9d907a86472fb4f5415674bedc22b240.scope - libcontainer container de685e35d0aaf60199c7fa38b0671b8d9d907a86472fb4f5415674bedc22b240. 
Jan 14 00:56:29.162341 kubelet[2388]: E0114 00:56:29.152518 2388 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 00:56:29.163799 systemd[1]: Started cri-containerd-72cfd9a8914d2268e36ffcbaa1d4d9bcbabbe4c76f6cbd4582bc69f499ef607e.scope - libcontainer container 72cfd9a8914d2268e36ffcbaa1d4d9bcbabbe4c76f6cbd4582bc69f499ef607e. Jan 14 00:56:29.273110 kubelet[2388]: I0114 00:56:29.266180 2388 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 00:56:29.279134 kubelet[2388]: E0114 00:56:29.278921 2388 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Jan 14 00:56:29.489677 containerd[1615]: time="2026-01-14T00:56:29.489521538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:64a03ddc0c6a33344aaef0ab4a62382d,Namespace:kube-system,Attempt:0,} returns sandbox id \"de685e35d0aaf60199c7fa38b0671b8d9d907a86472fb4f5415674bedc22b240\"" Jan 14 00:56:29.492491 kubelet[2388]: E0114 00:56:29.492316 2388 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:56:29.495270 containerd[1615]: time="2026-01-14T00:56:29.495070176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"935a26e14e7a0f7bea916ebcf8ba3b97eedd32a94439d907e9d6e03aad9acf61\"" Jan 14 00:56:29.497227 kubelet[2388]: E0114 00:56:29.496859 2388 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:56:29.538325 containerd[1615]: time="2026-01-14T00:56:29.538180315Z" level=info msg="CreateContainer within sandbox \"de685e35d0aaf60199c7fa38b0671b8d9d907a86472fb4f5415674bedc22b240\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 00:56:29.539140 containerd[1615]: time="2026-01-14T00:56:29.539088768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"72cfd9a8914d2268e36ffcbaa1d4d9bcbabbe4c76f6cbd4582bc69f499ef607e\"" Jan 14 00:56:29.541216 containerd[1615]: time="2026-01-14T00:56:29.541180606Z" level=info msg="CreateContainer within sandbox \"935a26e14e7a0f7bea916ebcf8ba3b97eedd32a94439d907e9d6e03aad9acf61\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 00:56:29.541893 kubelet[2388]: E0114 00:56:29.541784 2388 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:56:29.558856 containerd[1615]: time="2026-01-14T00:56:29.558716014Z" level=info msg="Container 5b01b79e256c924cd5d5793d75bbd608ec26a303bdc376e88a590bb8146cdb79: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:56:29.586500 containerd[1615]: time="2026-01-14T00:56:29.585802729Z" level=info msg="CreateContainer within sandbox \"72cfd9a8914d2268e36ffcbaa1d4d9bcbabbe4c76f6cbd4582bc69f499ef607e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 00:56:29.597925 containerd[1615]: time="2026-01-14T00:56:29.597819635Z" level=info msg="CreateContainer within sandbox \"de685e35d0aaf60199c7fa38b0671b8d9d907a86472fb4f5415674bedc22b240\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"5b01b79e256c924cd5d5793d75bbd608ec26a303bdc376e88a590bb8146cdb79\"" Jan 14 00:56:29.601856 containerd[1615]: time="2026-01-14T00:56:29.601226834Z" level=info msg="StartContainer for \"5b01b79e256c924cd5d5793d75bbd608ec26a303bdc376e88a590bb8146cdb79\"" Jan 14 00:56:29.604798 containerd[1615]: time="2026-01-14T00:56:29.604627236Z" level=info msg="connecting to shim 5b01b79e256c924cd5d5793d75bbd608ec26a303bdc376e88a590bb8146cdb79" address="unix:///run/containerd/s/5b3ebb16bf69d4347982ca39ed305bc22c7cf2f7fc8ad6245e64b7cf9bcc1408" protocol=ttrpc version=3 Jan 14 00:56:29.669243 kubelet[2388]: E0114 00:56:29.669104 2388 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 00:56:29.685719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3857444682.mount: Deactivated successfully. Jan 14 00:56:29.686282 containerd[1615]: time="2026-01-14T00:56:29.686174257Z" level=info msg="Container 62ba6bb3e8bd9b728b897d56136d6673636b6ba97913b4f217cc78a0f0bf1d7d: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:56:29.702951 containerd[1615]: time="2026-01-14T00:56:29.702845985Z" level=info msg="Container 404c6b740411fc0fac3b1382b3beade7bd4cdab35b2013b2fd72ad0ed868dc26: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:56:29.750148 systemd[1]: Started cri-containerd-5b01b79e256c924cd5d5793d75bbd608ec26a303bdc376e88a590bb8146cdb79.scope - libcontainer container 5b01b79e256c924cd5d5793d75bbd608ec26a303bdc376e88a590bb8146cdb79. 
Jan 14 00:56:29.753065 containerd[1615]: time="2026-01-14T00:56:29.751926811Z" level=info msg="CreateContainer within sandbox \"72cfd9a8914d2268e36ffcbaa1d4d9bcbabbe4c76f6cbd4582bc69f499ef607e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"62ba6bb3e8bd9b728b897d56136d6673636b6ba97913b4f217cc78a0f0bf1d7d\"" Jan 14 00:56:29.754090 containerd[1615]: time="2026-01-14T00:56:29.754014000Z" level=info msg="StartContainer for \"62ba6bb3e8bd9b728b897d56136d6673636b6ba97913b4f217cc78a0f0bf1d7d\"" Jan 14 00:56:29.762115 containerd[1615]: time="2026-01-14T00:56:29.762025559Z" level=info msg="connecting to shim 62ba6bb3e8bd9b728b897d56136d6673636b6ba97913b4f217cc78a0f0bf1d7d" address="unix:///run/containerd/s/fce6de3202811d099d26a69f5b14954b1ec516b82124392a8ae02ac14fbf577f" protocol=ttrpc version=3 Jan 14 00:56:29.769525 containerd[1615]: time="2026-01-14T00:56:29.769049280Z" level=info msg="CreateContainer within sandbox \"935a26e14e7a0f7bea916ebcf8ba3b97eedd32a94439d907e9d6e03aad9acf61\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"404c6b740411fc0fac3b1382b3beade7bd4cdab35b2013b2fd72ad0ed868dc26\"" Jan 14 00:56:29.776422 containerd[1615]: time="2026-01-14T00:56:29.774848498Z" level=info msg="StartContainer for \"404c6b740411fc0fac3b1382b3beade7bd4cdab35b2013b2fd72ad0ed868dc26\"" Jan 14 00:56:29.778506 containerd[1615]: time="2026-01-14T00:56:29.776774617Z" level=info msg="connecting to shim 404c6b740411fc0fac3b1382b3beade7bd4cdab35b2013b2fd72ad0ed868dc26" address="unix:///run/containerd/s/e7bd30eef01b18a23342e9c93f8707fa13cd50ff0c14d88910df21670c53fb23" protocol=ttrpc version=3 Jan 14 00:56:29.814987 systemd[1]: Started cri-containerd-404c6b740411fc0fac3b1382b3beade7bd4cdab35b2013b2fd72ad0ed868dc26.scope - libcontainer container 404c6b740411fc0fac3b1382b3beade7bd4cdab35b2013b2fd72ad0ed868dc26. 
Jan 14 00:56:29.862160 systemd[1]: Started cri-containerd-62ba6bb3e8bd9b728b897d56136d6673636b6ba97913b4f217cc78a0f0bf1d7d.scope - libcontainer container 62ba6bb3e8bd9b728b897d56136d6673636b6ba97913b4f217cc78a0f0bf1d7d. Jan 14 00:56:29.972291 containerd[1615]: time="2026-01-14T00:56:29.972219037Z" level=info msg="StartContainer for \"5b01b79e256c924cd5d5793d75bbd608ec26a303bdc376e88a590bb8146cdb79\" returns successfully" Jan 14 00:56:30.070355 containerd[1615]: time="2026-01-14T00:56:30.070149723Z" level=info msg="StartContainer for \"404c6b740411fc0fac3b1382b3beade7bd4cdab35b2013b2fd72ad0ed868dc26\" returns successfully" Jan 14 00:56:30.079847 containerd[1615]: time="2026-01-14T00:56:30.079164271Z" level=info msg="StartContainer for \"62ba6bb3e8bd9b728b897d56136d6673636b6ba97913b4f217cc78a0f0bf1d7d\" returns successfully" Jan 14 00:56:30.770536 kubelet[2388]: E0114 00:56:30.770303 2388 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:56:30.770536 kubelet[2388]: E0114 00:56:30.772676 2388 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:56:30.784100 kubelet[2388]: E0114 00:56:30.783911 2388 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:56:30.784968 kubelet[2388]: E0114 00:56:30.784257 2388 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:56:30.790807 kubelet[2388]: E0114 00:56:30.790777 2388 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:56:30.791211 kubelet[2388]: E0114 
00:56:30.791108 2388 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:56:30.887326 kubelet[2388]: I0114 00:56:30.887248 2388 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 00:56:31.822269 kubelet[2388]: E0114 00:56:31.822099 2388 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:56:31.822269 kubelet[2388]: E0114 00:56:31.822419 2388 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:56:31.824686 kubelet[2388]: E0114 00:56:31.824658 2388 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:56:31.825010 kubelet[2388]: E0114 00:56:31.824980 2388 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:56:31.827630 kubelet[2388]: E0114 00:56:31.827170 2388 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:56:31.827630 kubelet[2388]: E0114 00:56:31.827297 2388 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:56:32.818538 kubelet[2388]: E0114 00:56:32.818391 2388 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:56:32.821186 kubelet[2388]: E0114 00:56:32.820972 2388 kubelet.go:3215] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:56:32.821623 kubelet[2388]: E0114 00:56:32.821598 2388 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:56:32.822124 kubelet[2388]: E0114 00:56:32.821916 2388 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:56:32.988551 kubelet[2388]: E0114 00:56:32.986851 2388 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 14 00:56:33.144284 kubelet[2388]: I0114 00:56:33.143722 2388 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 14 00:56:33.180236 kubelet[2388]: E0114 00:56:33.180072 2388 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188a72f20a8ea756 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-14 00:56:27.581114198 +0000 UTC m=+1.103042562,LastTimestamp:2026-01-14 00:56:27.581114198 +0000 UTC m=+1.103042562,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 14 00:56:33.196075 kubelet[2388]: I0114 00:56:33.196033 2388 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 00:56:33.236662 kubelet[2388]: E0114 00:56:33.236409 2388 kubelet.go:3221] "Failed creating a mirror pod" 
err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 14 00:56:33.236662 kubelet[2388]: I0114 00:56:33.236512 2388 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 14 00:56:33.243813 kubelet[2388]: E0114 00:56:33.243626 2388 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 14 00:56:33.243813 kubelet[2388]: I0114 00:56:33.243656 2388 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 00:56:33.255092 kubelet[2388]: E0114 00:56:33.255045 2388 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 14 00:56:33.270387 kubelet[2388]: E0114 00:56:33.270051 2388 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188a72f20b82a8be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-14 00:56:27.597105342 +0000 UTC m=+1.119033697,LastTimestamp:2026-01-14 00:56:27.597105342 +0000 UTC m=+1.119033697,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 14 00:56:33.587303 kubelet[2388]: I0114 00:56:33.584630 2388 apiserver.go:52] "Watching apiserver" Jan 14 00:56:33.694192 
kubelet[2388]: I0114 00:56:33.692411 2388 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 14 00:56:37.494019 systemd[1]: Reload requested from client PID 2682 ('systemctl') (unit session-6.scope)... Jan 14 00:56:37.494942 systemd[1]: Reloading... Jan 14 00:56:37.944079 zram_generator::config[2726]: No configuration found. Jan 14 00:56:38.554380 systemd[1]: Reloading finished in 1056 ms. Jan 14 00:56:38.615382 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:56:38.632625 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 00:56:38.633068 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:56:38.633218 systemd[1]: kubelet.service: Consumed 2.263s CPU time, 125.3M memory peak. Jan 14 00:56:38.638894 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:56:38.997733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:56:39.011040 (kubelet)[2775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 00:56:39.129631 kubelet[2775]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 00:56:39.129631 kubelet[2775]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 14 00:56:39.129631 kubelet[2775]: I0114 00:56:39.128150 2775 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 00:56:39.149252 kubelet[2775]: I0114 00:56:39.149032 2775 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 14 00:56:39.149252 kubelet[2775]: I0114 00:56:39.149083 2775 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 00:56:39.149252 kubelet[2775]: I0114 00:56:39.149154 2775 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 14 00:56:39.149252 kubelet[2775]: I0114 00:56:39.149175 2775 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 00:56:39.149579 kubelet[2775]: I0114 00:56:39.149517 2775 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 00:56:39.151688 kubelet[2775]: I0114 00:56:39.151125 2775 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 14 00:56:39.153691 kubelet[2775]: I0114 00:56:39.153597 2775 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 00:56:39.162975 kubelet[2775]: I0114 00:56:39.162905 2775 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 00:56:39.172924 kubelet[2775]: I0114 00:56:39.172767 2775 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 14 00:56:39.173313 kubelet[2775]: I0114 00:56:39.173111 2775 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 00:56:39.173690 kubelet[2775]: I0114 00:56:39.173145 2775 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 00:56:39.173690 kubelet[2775]: I0114 00:56:39.173385 2775 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 00:56:39.173690 
kubelet[2775]: I0114 00:56:39.173399 2775 container_manager_linux.go:306] "Creating device plugin manager"
Jan 14 00:56:39.173690 kubelet[2775]: I0114 00:56:39.173519 2775 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 14 00:56:39.177943 kubelet[2775]: I0114 00:56:39.176596 2775 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 00:56:39.177943 kubelet[2775]: I0114 00:56:39.176843 2775 kubelet.go:475] "Attempting to sync node with API server"
Jan 14 00:56:39.177943 kubelet[2775]: I0114 00:56:39.176868 2775 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 14 00:56:39.177943 kubelet[2775]: I0114 00:56:39.176909 2775 kubelet.go:387] "Adding apiserver pod source"
Jan 14 00:56:39.177943 kubelet[2775]: I0114 00:56:39.176944 2775 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 14 00:56:39.182164 kubelet[2775]: I0114 00:56:39.181656 2775 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Jan 14 00:56:39.182521 kubelet[2775]: I0114 00:56:39.182412 2775 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 14 00:56:39.182573 kubelet[2775]: I0114 00:56:39.182535 2775 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 14 00:56:39.203525 kubelet[2775]: I0114 00:56:39.202986 2775 server.go:1262] "Started kubelet"
Jan 14 00:56:39.204073 kubelet[2775]: I0114 00:56:39.203977 2775 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 14 00:56:39.204333 kubelet[2775]: I0114 00:56:39.203801 2775 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 14 00:56:39.205852 kubelet[2775]: I0114 00:56:39.204857 2775 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 14 00:56:39.209728 kubelet[2775]: I0114 00:56:39.209702 2775 server_v1.go:49] "podresources" method="list" useActivePods=true
Jan 14 00:56:39.210181 kubelet[2775]: I0114 00:56:39.210156 2775 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 14 00:56:39.212997 kubelet[2775]: I0114 00:56:39.210368 2775 server.go:310] "Adding debug handlers to kubelet server"
Jan 14 00:56:39.215045 kubelet[2775]: I0114 00:56:39.214986 2775 volume_manager.go:313] "Starting Kubelet Volume Manager"
Jan 14 00:56:39.215200 kubelet[2775]: I0114 00:56:39.215130 2775 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 14 00:56:39.218019 kubelet[2775]: I0114 00:56:39.217995 2775 reconciler.go:29] "Reconciler: start to sync state"
Jan 14 00:56:39.218410 kubelet[2775]: I0114 00:56:39.218299 2775 factory.go:223] Registration of the systemd container factory successfully
Jan 14 00:56:39.219920 kubelet[2775]: I0114 00:56:39.219841 2775 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 14 00:56:39.220148 kubelet[2775]: I0114 00:56:39.220118 2775 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 14 00:56:39.221088 kubelet[2775]: E0114 00:56:39.220877 2775 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 14 00:56:39.225137 kubelet[2775]: I0114 00:56:39.225064 2775 factory.go:223] Registration of the containerd container factory successfully
Jan 14 00:56:39.267551 kubelet[2775]: I0114 00:56:39.267287 2775 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Jan 14 00:56:39.277708 kubelet[2775]: I0114 00:56:39.277603 2775 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Jan 14 00:56:39.277708 kubelet[2775]: I0114 00:56:39.277638 2775 status_manager.go:244] "Starting to sync pod status with apiserver"
Jan 14 00:56:39.277708 kubelet[2775]: I0114 00:56:39.277671 2775 kubelet.go:2427] "Starting kubelet main sync loop"
Jan 14 00:56:39.277962 kubelet[2775]: E0114 00:56:39.277742 2775 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 14 00:56:39.314968 kubelet[2775]: I0114 00:56:39.314892 2775 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 14 00:56:39.314968 kubelet[2775]: I0114 00:56:39.314937 2775 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 14 00:56:39.314968 kubelet[2775]: I0114 00:56:39.314961 2775 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 00:56:39.315205 kubelet[2775]: I0114 00:56:39.315114 2775 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 14 00:56:39.315205 kubelet[2775]: I0114 00:56:39.315125 2775 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 14 00:56:39.315205 kubelet[2775]: I0114 00:56:39.315142 2775 policy_none.go:49] "None policy: Start"
Jan 14 00:56:39.315205 kubelet[2775]: I0114 00:56:39.315152 2775 memory_manager.go:187] "Starting memorymanager" policy="None"
Jan 14 00:56:39.315205 kubelet[2775]: I0114 00:56:39.315163 2775 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Jan 14 00:56:39.315520 kubelet[2775]: I0114 00:56:39.315333 2775 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Jan 14 00:56:39.315520 kubelet[2775]: I0114 00:56:39.315343 2775 policy_none.go:47] "Start"
Jan 14 00:56:39.324430 kubelet[2775]: E0114 00:56:39.324356 2775 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 14 00:56:39.324895 kubelet[2775]: I0114 00:56:39.324808 2775 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 14 00:56:39.324895 kubelet[2775]: I0114 00:56:39.324859 2775 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 14 00:56:39.326306 kubelet[2775]: I0114 00:56:39.326085 2775 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 14 00:56:39.332884 kubelet[2775]: E0114 00:56:39.332806 2775 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 14 00:56:39.380142 kubelet[2775]: I0114 00:56:39.380017 2775 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 14 00:56:39.381136 kubelet[2775]: I0114 00:56:39.380355 2775 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 14 00:56:39.383342 kubelet[2775]: I0114 00:56:39.380729 2775 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 14 00:56:39.418882 kubelet[2775]: I0114 00:56:39.418774 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64a03ddc0c6a33344aaef0ab4a62382d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"64a03ddc0c6a33344aaef0ab4a62382d\") " pod="kube-system/kube-apiserver-localhost"
Jan 14 00:56:39.418882 kubelet[2775]: I0114 00:56:39.418840 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 14 00:56:39.418882 kubelet[2775]: I0114 00:56:39.418859 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 14 00:56:39.418882 kubelet[2775]: I0114 00:56:39.418878 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 14 00:56:39.418882 kubelet[2775]: I0114 00:56:39.418893 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64a03ddc0c6a33344aaef0ab4a62382d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"64a03ddc0c6a33344aaef0ab4a62382d\") " pod="kube-system/kube-apiserver-localhost"
Jan 14 00:56:39.419197 kubelet[2775]: I0114 00:56:39.418906 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64a03ddc0c6a33344aaef0ab4a62382d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"64a03ddc0c6a33344aaef0ab4a62382d\") " pod="kube-system/kube-apiserver-localhost"
Jan 14 00:56:39.419197 kubelet[2775]: I0114 00:56:39.418961 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 14 00:56:39.419197 kubelet[2775]: I0114 00:56:39.419051 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 14 00:56:39.419197 kubelet[2775]: I0114 00:56:39.419099 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost"
Jan 14 00:56:39.438139 kubelet[2775]: I0114 00:56:39.438007 2775 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 14 00:56:39.459532 kubelet[2775]: I0114 00:56:39.458564 2775 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jan 14 00:56:39.459532 kubelet[2775]: I0114 00:56:39.458798 2775 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 14 00:56:39.696980 kubelet[2775]: E0114 00:56:39.696613 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:39.700909 kubelet[2775]: E0114 00:56:39.700879 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:39.702962 kubelet[2775]: E0114 00:56:39.701595 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:40.183746 kubelet[2775]: I0114 00:56:40.183547 2775 apiserver.go:52] "Watching apiserver"
Jan 14 00:56:40.215431 kubelet[2775]: I0114 00:56:40.215314 2775 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 14 00:56:40.310218 kubelet[2775]: I0114 00:56:40.309014 2775 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 14 00:56:40.311543 kubelet[2775]: E0114 00:56:40.311005 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:40.312740 kubelet[2775]: E0114 00:56:40.312609 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:40.332031 kubelet[2775]: E0114 00:56:40.331984 2775 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 14 00:56:40.337144 kubelet[2775]: E0114 00:56:40.335569 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:40.370798 kubelet[2775]: I0114 00:56:40.369556 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.369537461 podStartE2EDuration="1.369537461s" podCreationTimestamp="2026-01-14 00:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:56:40.351847448 +0000 UTC m=+1.328797271" watchObservedRunningTime="2026-01-14 00:56:40.369537461 +0000 UTC m=+1.346487274"
Jan 14 00:56:40.370798 kubelet[2775]: I0114 00:56:40.369773 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.369762943 podStartE2EDuration="1.369762943s" podCreationTimestamp="2026-01-14 00:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:56:40.369245884 +0000 UTC m=+1.346195697" watchObservedRunningTime="2026-01-14 00:56:40.369762943 +0000 UTC m=+1.346712756"
Jan 14 00:56:40.413044 kubelet[2775]: I0114 00:56:40.412647 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.412631133 podStartE2EDuration="1.412631133s" podCreationTimestamp="2026-01-14 00:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:56:40.388181586 +0000 UTC m=+1.365131409" watchObservedRunningTime="2026-01-14 00:56:40.412631133 +0000 UTC m=+1.389580936"
Jan 14 00:56:41.447201 kubelet[2775]: E0114 00:56:41.446836 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:41.447201 kubelet[2775]: E0114 00:56:41.446861 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:41.451799 kubelet[2775]: E0114 00:56:41.449890 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:42.334890 sudo[1770]: pam_unix(sudo:session): session closed for user root
Jan 14 00:56:42.345992 sshd[1769]: Connection closed by 10.0.0.1 port 58446
Jan 14 00:56:42.351796 sshd-session[1765]: pam_unix(sshd:session): session closed for user core
Jan 14 00:56:42.370202 systemd[1]: sshd@4-10.0.0.55:22-10.0.0.1:58446.service: Deactivated successfully.
Jan 14 00:56:42.378589 systemd[1]: session-6.scope: Deactivated successfully.
Jan 14 00:56:42.379220 systemd[1]: session-6.scope: Consumed 12.546s CPU time, 223.9M memory peak.
Jan 14 00:56:42.387407 systemd-logind[1589]: Session 6 logged out. Waiting for processes to exit.
Jan 14 00:56:42.392998 systemd-logind[1589]: Removed session 6.
Jan 14 00:56:42.510110 kubelet[2775]: E0114 00:56:42.503000 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:43.796023 kubelet[2775]: I0114 00:56:43.795955 2775 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 14 00:56:43.799568 containerd[1615]: time="2026-01-14T00:56:43.799319749Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 14 00:56:43.800977 kubelet[2775]: I0114 00:56:43.800764 2775 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 14 00:56:44.219818 systemd[1]: Created slice kubepods-besteffort-pod0c256569_e38f_438a_907c_1f90e244c1b6.slice - libcontainer container kubepods-besteffort-pod0c256569_e38f_438a_907c_1f90e244c1b6.slice.
Jan 14 00:56:44.252127 kubelet[2775]: I0114 00:56:44.251203 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0c256569-e38f-438a-907c-1f90e244c1b6-kube-proxy\") pod \"kube-proxy-92sb6\" (UID: \"0c256569-e38f-438a-907c-1f90e244c1b6\") " pod="kube-system/kube-proxy-92sb6"
Jan 14 00:56:44.252127 kubelet[2775]: I0114 00:56:44.251323 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c256569-e38f-438a-907c-1f90e244c1b6-lib-modules\") pod \"kube-proxy-92sb6\" (UID: \"0c256569-e38f-438a-907c-1f90e244c1b6\") " pod="kube-system/kube-proxy-92sb6"
Jan 14 00:56:44.252127 kubelet[2775]: I0114 00:56:44.251352 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v56ts\" (UniqueName: \"kubernetes.io/projected/0c256569-e38f-438a-907c-1f90e244c1b6-kube-api-access-v56ts\") pod \"kube-proxy-92sb6\" (UID: \"0c256569-e38f-438a-907c-1f90e244c1b6\") " pod="kube-system/kube-proxy-92sb6"
Jan 14 00:56:44.252127 kubelet[2775]: I0114 00:56:44.251382 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c256569-e38f-438a-907c-1f90e244c1b6-xtables-lock\") pod \"kube-proxy-92sb6\" (UID: \"0c256569-e38f-438a-907c-1f90e244c1b6\") " pod="kube-system/kube-proxy-92sb6"
Jan 14 00:56:44.268997 systemd[1]: Created slice kubepods-burstable-pod592e62a9_3298_49d5_942e_5b9a965f762d.slice - libcontainer container kubepods-burstable-pod592e62a9_3298_49d5_942e_5b9a965f762d.slice.
Jan 14 00:56:44.352683 kubelet[2775]: I0114 00:56:44.352534 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/592e62a9-3298-49d5-942e-5b9a965f762d-run\") pod \"kube-flannel-ds-gbqjk\" (UID: \"592e62a9-3298-49d5-942e-5b9a965f762d\") " pod="kube-flannel/kube-flannel-ds-gbqjk"
Jan 14 00:56:44.352683 kubelet[2775]: I0114 00:56:44.352594 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bxpv\" (UniqueName: \"kubernetes.io/projected/592e62a9-3298-49d5-942e-5b9a965f762d-kube-api-access-5bxpv\") pod \"kube-flannel-ds-gbqjk\" (UID: \"592e62a9-3298-49d5-942e-5b9a965f762d\") " pod="kube-flannel/kube-flannel-ds-gbqjk"
Jan 14 00:56:44.352683 kubelet[2775]: I0114 00:56:44.352685 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/592e62a9-3298-49d5-942e-5b9a965f762d-cni-plugin\") pod \"kube-flannel-ds-gbqjk\" (UID: \"592e62a9-3298-49d5-942e-5b9a965f762d\") " pod="kube-flannel/kube-flannel-ds-gbqjk"
Jan 14 00:56:44.352683 kubelet[2775]: I0114 00:56:44.352709 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/592e62a9-3298-49d5-942e-5b9a965f762d-flannel-cfg\") pod \"kube-flannel-ds-gbqjk\" (UID: \"592e62a9-3298-49d5-942e-5b9a965f762d\") " pod="kube-flannel/kube-flannel-ds-gbqjk"
Jan 14 00:56:44.352683 kubelet[2775]: I0114 00:56:44.352729 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/592e62a9-3298-49d5-942e-5b9a965f762d-cni\") pod \"kube-flannel-ds-gbqjk\" (UID: \"592e62a9-3298-49d5-942e-5b9a965f762d\") " pod="kube-flannel/kube-flannel-ds-gbqjk"
Jan 14 00:56:44.354213 kubelet[2775]: I0114 00:56:44.352750 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/592e62a9-3298-49d5-942e-5b9a965f762d-xtables-lock\") pod \"kube-flannel-ds-gbqjk\" (UID: \"592e62a9-3298-49d5-942e-5b9a965f762d\") " pod="kube-flannel/kube-flannel-ds-gbqjk"
Jan 14 00:56:44.562127 kubelet[2775]: E0114 00:56:44.560930 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:44.564273 containerd[1615]: time="2026-01-14T00:56:44.563912617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-92sb6,Uid:0c256569-e38f-438a-907c-1f90e244c1b6,Namespace:kube-system,Attempt:0,}"
Jan 14 00:56:44.646290 kubelet[2775]: E0114 00:56:44.645918 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:44.651817 containerd[1615]: time="2026-01-14T00:56:44.651766974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-gbqjk,Uid:592e62a9-3298-49d5-942e-5b9a965f762d,Namespace:kube-flannel,Attempt:0,}"
Jan 14 00:56:45.033643 containerd[1615]: time="2026-01-14T00:56:45.029634169Z" level=info msg="connecting to shim 1f479323068928f28d9c2b580e3fdcfd77eeb540f0b995afc99047fde80282f3" address="unix:///run/containerd/s/d262116b09108beed696e2657dff64529bcddeae08216e0ff5601b9aac2431fb" namespace=k8s.io protocol=ttrpc version=3
Jan 14 00:56:45.051672 containerd[1615]: time="2026-01-14T00:56:45.046113535Z" level=info msg="connecting to shim d71ff57c8f609c90e87cfb4cdda58cbf2873c16c25747df1abf6f190f1d8e4eb" address="unix:///run/containerd/s/beb3fb12995b370c8f6dc5f4bcf552f21c230a4a01b12d2449be7c5230c0dd3e" namespace=k8s.io protocol=ttrpc version=3
Jan 14 00:56:45.155311 kubelet[2775]: E0114 00:56:45.154005 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:45.574013 kubelet[2775]: E0114 00:56:45.573971 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:45.710789 systemd[1]: Started cri-containerd-1f479323068928f28d9c2b580e3fdcfd77eeb540f0b995afc99047fde80282f3.scope - libcontainer container 1f479323068928f28d9c2b580e3fdcfd77eeb540f0b995afc99047fde80282f3.
Jan 14 00:56:45.759053 systemd[1]: Started cri-containerd-d71ff57c8f609c90e87cfb4cdda58cbf2873c16c25747df1abf6f190f1d8e4eb.scope - libcontainer container d71ff57c8f609c90e87cfb4cdda58cbf2873c16c25747df1abf6f190f1d8e4eb.
Jan 14 00:56:46.408630 containerd[1615]: time="2026-01-14T00:56:46.408397069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-92sb6,Uid:0c256569-e38f-438a-907c-1f90e244c1b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f479323068928f28d9c2b580e3fdcfd77eeb540f0b995afc99047fde80282f3\""
Jan 14 00:56:46.419965 kubelet[2775]: E0114 00:56:46.419415 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:46.557656 containerd[1615]: time="2026-01-14T00:56:46.510237103Z" level=info msg="CreateContainer within sandbox \"1f479323068928f28d9c2b580e3fdcfd77eeb540f0b995afc99047fde80282f3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 14 00:56:46.698642 kubelet[2775]: E0114 00:56:46.662101 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:46.831724 containerd[1615]: time="2026-01-14T00:56:46.830058882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-gbqjk,Uid:592e62a9-3298-49d5-942e-5b9a965f762d,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"d71ff57c8f609c90e87cfb4cdda58cbf2873c16c25747df1abf6f190f1d8e4eb\""
Jan 14 00:56:46.838509 containerd[1615]: time="2026-01-14T00:56:46.835987901Z" level=info msg="Container 82dd272334b66419150ff72b8505c2000ed4bb6761ab089743159161abcadfcb: CDI devices from CRI Config.CDIDevices: []"
Jan 14 00:56:46.839258 kubelet[2775]: E0114 00:56:46.838310 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:46.847337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3924199463.mount: Deactivated successfully.
Jan 14 00:56:46.854108 containerd[1615]: time="2026-01-14T00:56:46.853833597Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\""
Jan 14 00:56:47.028542 containerd[1615]: time="2026-01-14T00:56:47.027896274Z" level=info msg="CreateContainer within sandbox \"1f479323068928f28d9c2b580e3fdcfd77eeb540f0b995afc99047fde80282f3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"82dd272334b66419150ff72b8505c2000ed4bb6761ab089743159161abcadfcb\""
Jan 14 00:56:47.142515 containerd[1615]: time="2026-01-14T00:56:47.142275543Z" level=info msg="StartContainer for \"82dd272334b66419150ff72b8505c2000ed4bb6761ab089743159161abcadfcb\""
Jan 14 00:56:47.152529 containerd[1615]: time="2026-01-14T00:56:47.152420064Z" level=info msg="connecting to shim 82dd272334b66419150ff72b8505c2000ed4bb6761ab089743159161abcadfcb" address="unix:///run/containerd/s/d262116b09108beed696e2657dff64529bcddeae08216e0ff5601b9aac2431fb" protocol=ttrpc version=3
Jan 14 00:56:47.300054 systemd[1]: Started cri-containerd-82dd272334b66419150ff72b8505c2000ed4bb6761ab089743159161abcadfcb.scope - libcontainer container 82dd272334b66419150ff72b8505c2000ed4bb6761ab089743159161abcadfcb.
Jan 14 00:56:47.849516 containerd[1615]: time="2026-01-14T00:56:47.849229583Z" level=info msg="StartContainer for \"82dd272334b66419150ff72b8505c2000ed4bb6761ab089743159161abcadfcb\" returns successfully"
Jan 14 00:56:47.887319 kubelet[2775]: E0114 00:56:47.861242 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:48.710801 kubelet[2775]: E0114 00:56:48.705991 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:48.710801 kubelet[2775]: E0114 00:56:48.709786 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:49.438203 kubelet[2775]: I0114 00:56:49.436236 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-92sb6" podStartSLOduration=5.436156572 podStartE2EDuration="5.436156572s" podCreationTimestamp="2026-01-14 00:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:56:48.946145894 +0000 UTC m=+9.923095738" watchObservedRunningTime="2026-01-14 00:56:49.436156572 +0000 UTC m=+10.413106385"
Jan 14 00:56:49.508633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2871458309.mount: Deactivated successfully.
Jan 14 00:56:49.734525 kubelet[2775]: E0114 00:56:49.732159 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:49.783301 containerd[1615]: time="2026-01-14T00:56:49.783152211Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:56:49.786510 containerd[1615]: time="2026-01-14T00:56:49.786291011Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=0"
Jan 14 00:56:49.793524 containerd[1615]: time="2026-01-14T00:56:49.792634921Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:56:49.808742 containerd[1615]: time="2026-01-14T00:56:49.808507372Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:56:49.852319 containerd[1615]: time="2026-01-14T00:56:49.852063077Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 2.998086926s"
Jan 14 00:56:49.852319 containerd[1615]: time="2026-01-14T00:56:49.852215638Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\""
Jan 14 00:56:49.868497 containerd[1615]: time="2026-01-14T00:56:49.865780688Z" level=info msg="CreateContainer within sandbox \"d71ff57c8f609c90e87cfb4cdda58cbf2873c16c25747df1abf6f190f1d8e4eb\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Jan 14 00:56:49.968076 containerd[1615]: time="2026-01-14T00:56:49.967061066Z" level=info msg="Container 9601285293d87aa61f181cb1ab5c5f379ae746a161fc172fdb9b2a75570d707f: CDI devices from CRI Config.CDIDevices: []"
Jan 14 00:56:49.996927 containerd[1615]: time="2026-01-14T00:56:49.996171119Z" level=info msg="CreateContainer within sandbox \"d71ff57c8f609c90e87cfb4cdda58cbf2873c16c25747df1abf6f190f1d8e4eb\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"9601285293d87aa61f181cb1ab5c5f379ae746a161fc172fdb9b2a75570d707f\""
Jan 14 00:56:49.997298 containerd[1615]: time="2026-01-14T00:56:49.997220786Z" level=info msg="StartContainer for \"9601285293d87aa61f181cb1ab5c5f379ae746a161fc172fdb9b2a75570d707f\""
Jan 14 00:56:50.004839 containerd[1615]: time="2026-01-14T00:56:50.002575057Z" level=info msg="connecting to shim 9601285293d87aa61f181cb1ab5c5f379ae746a161fc172fdb9b2a75570d707f" address="unix:///run/containerd/s/beb3fb12995b370c8f6dc5f4bcf552f21c230a4a01b12d2449be7c5230c0dd3e" protocol=ttrpc version=3
Jan 14 00:56:50.091806 systemd[1]: Started cri-containerd-9601285293d87aa61f181cb1ab5c5f379ae746a161fc172fdb9b2a75570d707f.scope - libcontainer container 9601285293d87aa61f181cb1ab5c5f379ae746a161fc172fdb9b2a75570d707f.
Jan 14 00:56:50.332913 systemd[1]: cri-containerd-9601285293d87aa61f181cb1ab5c5f379ae746a161fc172fdb9b2a75570d707f.scope: Deactivated successfully.
Jan 14 00:56:50.375547 containerd[1615]: time="2026-01-14T00:56:50.375168189Z" level=info msg="received container exit event container_id:\"9601285293d87aa61f181cb1ab5c5f379ae746a161fc172fdb9b2a75570d707f\" id:\"9601285293d87aa61f181cb1ab5c5f379ae746a161fc172fdb9b2a75570d707f\" pid:3094 exited_at:{seconds:1768352210 nanos:343694759}"
Jan 14 00:56:50.383255 containerd[1615]: time="2026-01-14T00:56:50.378690559Z" level=info msg="StartContainer for \"9601285293d87aa61f181cb1ab5c5f379ae746a161fc172fdb9b2a75570d707f\" returns successfully"
Jan 14 00:56:50.569135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9601285293d87aa61f181cb1ab5c5f379ae746a161fc172fdb9b2a75570d707f-rootfs.mount: Deactivated successfully.
Jan 14 00:56:50.779202 kubelet[2775]: E0114 00:56:50.776116 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:51.799662 kubelet[2775]: E0114 00:56:51.795513 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:51.803554 containerd[1615]: time="2026-01-14T00:56:51.802087127Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\""
Jan 14 00:56:56.406017 containerd[1615]: time="2026-01-14T00:56:56.405823989Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:56:56.409270 containerd[1615]: time="2026-01-14T00:56:56.409099078Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=14850467"
Jan 14 00:56:56.416467 containerd[1615]: time="2026-01-14T00:56:56.416314099Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:56:56.425006 containerd[1615]: time="2026-01-14T00:56:56.424910579Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:56:56.427325 containerd[1615]: time="2026-01-14T00:56:56.427180153Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 4.625044555s"
Jan 14 00:56:56.427325 containerd[1615]: time="2026-01-14T00:56:56.427246800Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\""
Jan 14 00:56:56.435143 containerd[1615]: time="2026-01-14T00:56:56.435027136Z" level=info msg="CreateContainer within sandbox \"d71ff57c8f609c90e87cfb4cdda58cbf2873c16c25747df1abf6f190f1d8e4eb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 14 00:56:56.468006 containerd[1615]: time="2026-01-14T00:56:56.466381282Z" level=info msg="Container 20d1a8e86aaccfbf56f5350376a63d2b28534d6c3bc6b4226e98c4af134a3641: CDI devices from CRI Config.CDIDevices: []"
Jan 14 00:56:56.486757 containerd[1615]: time="2026-01-14T00:56:56.486644530Z" level=info msg="CreateContainer within sandbox \"d71ff57c8f609c90e87cfb4cdda58cbf2873c16c25747df1abf6f190f1d8e4eb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"20d1a8e86aaccfbf56f5350376a63d2b28534d6c3bc6b4226e98c4af134a3641\""
Jan 14 00:56:56.489723 containerd[1615]: time="2026-01-14T00:56:56.487793824Z" level=info msg="StartContainer for \"20d1a8e86aaccfbf56f5350376a63d2b28534d6c3bc6b4226e98c4af134a3641\""
Jan 14 00:56:56.489723 containerd[1615]: time="2026-01-14T00:56:56.489134915Z" level=info msg="connecting to shim 20d1a8e86aaccfbf56f5350376a63d2b28534d6c3bc6b4226e98c4af134a3641" address="unix:///run/containerd/s/beb3fb12995b370c8f6dc5f4bcf552f21c230a4a01b12d2449be7c5230c0dd3e" protocol=ttrpc version=3
Jan 14 00:56:56.678212 systemd[1]: Started cri-containerd-20d1a8e86aaccfbf56f5350376a63d2b28534d6c3bc6b4226e98c4af134a3641.scope - libcontainer container 20d1a8e86aaccfbf56f5350376a63d2b28534d6c3bc6b4226e98c4af134a3641.
Jan 14 00:56:56.899044 systemd[1]: cri-containerd-20d1a8e86aaccfbf56f5350376a63d2b28534d6c3bc6b4226e98c4af134a3641.scope: Deactivated successfully.
Jan 14 00:56:56.909643 containerd[1615]: time="2026-01-14T00:56:56.909576408Z" level=info msg="received container exit event container_id:\"20d1a8e86aaccfbf56f5350376a63d2b28534d6c3bc6b4226e98c4af134a3641\" id:\"20d1a8e86aaccfbf56f5350376a63d2b28534d6c3bc6b4226e98c4af134a3641\" pid:3204 exited_at:{seconds:1768352216 nanos:901886486}"
Jan 14 00:56:56.929863 containerd[1615]: time="2026-01-14T00:56:56.929688477Z" level=info msg="StartContainer for \"20d1a8e86aaccfbf56f5350376a63d2b28534d6c3bc6b4226e98c4af134a3641\" returns successfully"
Jan 14 00:56:56.964928 kubelet[2775]: I0114 00:56:56.964592 2775 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Jan 14 00:56:57.051350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20d1a8e86aaccfbf56f5350376a63d2b28534d6c3bc6b4226e98c4af134a3641-rootfs.mount: Deactivated successfully.
Jan 14 00:56:57.150620 systemd[1]: Created slice kubepods-burstable-pod84817deb_911f_46a4_8d76_e754fb1518ec.slice - libcontainer container kubepods-burstable-pod84817deb_911f_46a4_8d76_e754fb1518ec.slice.
Jan 14 00:56:57.167670 systemd[1]: Created slice kubepods-burstable-podd0bab8a6_13e8_4ae9_9fdd_5f1e25b48c44.slice - libcontainer container kubepods-burstable-podd0bab8a6_13e8_4ae9_9fdd_5f1e25b48c44.slice.
Jan 14 00:56:57.262350 kubelet[2775]: I0114 00:56:57.261207 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84817deb-911f-46a4-8d76-e754fb1518ec-config-volume\") pod \"coredns-66bc5c9577-jq6kf\" (UID: \"84817deb-911f-46a4-8d76-e754fb1518ec\") " pod="kube-system/coredns-66bc5c9577-jq6kf"
Jan 14 00:56:57.262350 kubelet[2775]: I0114 00:56:57.261298 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l5tn\" (UniqueName: \"kubernetes.io/projected/84817deb-911f-46a4-8d76-e754fb1518ec-kube-api-access-7l5tn\") pod \"coredns-66bc5c9577-jq6kf\" (UID: \"84817deb-911f-46a4-8d76-e754fb1518ec\") " pod="kube-system/coredns-66bc5c9577-jq6kf"
Jan 14 00:56:57.262350 kubelet[2775]: I0114 00:56:57.261342 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0bab8a6-13e8-4ae9-9fdd-5f1e25b48c44-config-volume\") pod \"coredns-66bc5c9577-tw8bb\" (UID: \"d0bab8a6-13e8-4ae9-9fdd-5f1e25b48c44\") " pod="kube-system/coredns-66bc5c9577-tw8bb"
Jan 14 00:56:57.262350 kubelet[2775]: I0114 00:56:57.261363 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5fjj\" (UniqueName: \"kubernetes.io/projected/d0bab8a6-13e8-4ae9-9fdd-5f1e25b48c44-kube-api-access-s5fjj\") pod \"coredns-66bc5c9577-tw8bb\" (UID: \"d0bab8a6-13e8-4ae9-9fdd-5f1e25b48c44\") " pod="kube-system/coredns-66bc5c9577-tw8bb"
Jan 14 00:56:57.476651 kubelet[2775]: E0114 00:56:57.476185 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:56:57.481234 containerd[1615]: time="2026-01-14T00:56:57.480368859Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-jq6kf,Uid:84817deb-911f-46a4-8d76-e754fb1518ec,Namespace:kube-system,Attempt:0,}" Jan 14 00:56:57.487264 kubelet[2775]: E0114 00:56:57.486762 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:56:57.489796 containerd[1615]: time="2026-01-14T00:56:57.489089680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tw8bb,Uid:d0bab8a6-13e8-4ae9-9fdd-5f1e25b48c44,Namespace:kube-system,Attempt:0,}" Jan 14 00:56:57.648946 systemd[1]: run-netns-cni\x2d0b4f6ded\x2dcfc0\x2d3fc1\x2d82da\x2de3088541d651.mount: Deactivated successfully. Jan 14 00:56:57.655317 systemd[1]: run-netns-cni\x2d0b4784c1\x2d5370\x2d0962\x2d2a9a\x2dd3b245685750.mount: Deactivated successfully. Jan 14 00:56:57.655821 containerd[1615]: time="2026-01-14T00:56:57.655695318Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jq6kf,Uid:84817deb-911f-46a4-8d76-e754fb1518ec,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d172dfaa981ca54ef5e37e4d3f80f18085ec1c95e06e21cf2cfb1f572546acff\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 14 00:56:57.656630 kubelet[2775]: E0114 00:56:57.656584 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d172dfaa981ca54ef5e37e4d3f80f18085ec1c95e06e21cf2cfb1f572546acff\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 14 00:56:57.656786 kubelet[2775]: E0114 00:56:57.656759 2775 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d172dfaa981ca54ef5e37e4d3f80f18085ec1c95e06e21cf2cfb1f572546acff\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-jq6kf" Jan 14 00:56:57.656984 kubelet[2775]: E0114 00:56:57.656956 2775 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d172dfaa981ca54ef5e37e4d3f80f18085ec1c95e06e21cf2cfb1f572546acff\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-jq6kf" Jan 14 00:56:57.657320 kubelet[2775]: E0114 00:56:57.657195 2775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-jq6kf_kube-system(84817deb-911f-46a4-8d76-e754fb1518ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-jq6kf_kube-system(84817deb-911f-46a4-8d76-e754fb1518ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d172dfaa981ca54ef5e37e4d3f80f18085ec1c95e06e21cf2cfb1f572546acff\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-jq6kf" podUID="84817deb-911f-46a4-8d76-e754fb1518ec" Jan 14 00:56:57.663104 containerd[1615]: time="2026-01-14T00:56:57.662937226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tw8bb,Uid:d0bab8a6-13e8-4ae9-9fdd-5f1e25b48c44,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8168249c75da07bead902f1c7f17e49e257b98f9322a396f761ac461106b70ad\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 14 00:56:57.663506 kubelet[2775]: E0114 
00:56:57.663272 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8168249c75da07bead902f1c7f17e49e257b98f9322a396f761ac461106b70ad\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 14 00:56:57.663506 kubelet[2775]: E0114 00:56:57.663351 2775 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8168249c75da07bead902f1c7f17e49e257b98f9322a396f761ac461106b70ad\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-tw8bb" Jan 14 00:56:57.663506 kubelet[2775]: E0114 00:56:57.663374 2775 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8168249c75da07bead902f1c7f17e49e257b98f9322a396f761ac461106b70ad\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-tw8bb" Jan 14 00:56:57.663657 kubelet[2775]: E0114 00:56:57.663429 2775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-tw8bb_kube-system(d0bab8a6-13e8-4ae9-9fdd-5f1e25b48c44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-tw8bb_kube-system(d0bab8a6-13e8-4ae9-9fdd-5f1e25b48c44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8168249c75da07bead902f1c7f17e49e257b98f9322a396f761ac461106b70ad\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-tw8bb" podUID="d0bab8a6-13e8-4ae9-9fdd-5f1e25b48c44" Jan 14 00:56:57.948167 
kubelet[2775]: E0114 00:56:57.947124 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:56:57.973194 containerd[1615]: time="2026-01-14T00:56:57.971072418Z" level=info msg="CreateContainer within sandbox \"d71ff57c8f609c90e87cfb4cdda58cbf2873c16c25747df1abf6f190f1d8e4eb\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 14 00:56:57.997585 containerd[1615]: time="2026-01-14T00:56:57.997293112Z" level=info msg="Container 25d9f6078fddebb7829e236f1faf9ce0db524e8741327b42f583ce13265808ef: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:56:58.012010 containerd[1615]: time="2026-01-14T00:56:58.011914335Z" level=info msg="CreateContainer within sandbox \"d71ff57c8f609c90e87cfb4cdda58cbf2873c16c25747df1abf6f190f1d8e4eb\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"25d9f6078fddebb7829e236f1faf9ce0db524e8741327b42f583ce13265808ef\"" Jan 14 00:56:58.014512 containerd[1615]: time="2026-01-14T00:56:58.013189304Z" level=info msg="StartContainer for \"25d9f6078fddebb7829e236f1faf9ce0db524e8741327b42f583ce13265808ef\"" Jan 14 00:56:58.014854 containerd[1615]: time="2026-01-14T00:56:58.014789766Z" level=info msg="connecting to shim 25d9f6078fddebb7829e236f1faf9ce0db524e8741327b42f583ce13265808ef" address="unix:///run/containerd/s/beb3fb12995b370c8f6dc5f4bcf552f21c230a4a01b12d2449be7c5230c0dd3e" protocol=ttrpc version=3 Jan 14 00:56:58.056781 systemd[1]: Started cri-containerd-25d9f6078fddebb7829e236f1faf9ce0db524e8741327b42f583ce13265808ef.scope - libcontainer container 25d9f6078fddebb7829e236f1faf9ce0db524e8741327b42f583ce13265808ef. 
Jan 14 00:56:58.160268 containerd[1615]: time="2026-01-14T00:56:58.156793221Z" level=info msg="StartContainer for \"25d9f6078fddebb7829e236f1faf9ce0db524e8741327b42f583ce13265808ef\" returns successfully" Jan 14 00:56:58.970740 kubelet[2775]: E0114 00:56:58.970318 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:56:59.336256 systemd-networkd[1508]: flannel.1: Link UP Jan 14 00:56:59.336272 systemd-networkd[1508]: flannel.1: Gained carrier Jan 14 00:56:59.973619 kubelet[2775]: E0114 00:56:59.971730 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:57:00.929883 systemd-networkd[1508]: flannel.1: Gained IPv6LL Jan 14 00:57:10.287399 kubelet[2775]: E0114 00:57:10.287252 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:57:10.289789 containerd[1615]: time="2026-01-14T00:57:10.289730839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tw8bb,Uid:d0bab8a6-13e8-4ae9-9fdd-5f1e25b48c44,Namespace:kube-system,Attempt:0,}" Jan 14 00:57:10.441358 systemd-networkd[1508]: cni0: Link UP Jan 14 00:57:10.441821 systemd-networkd[1508]: cni0: Gained carrier Jan 14 00:57:10.450053 systemd-networkd[1508]: cni0: Lost carrier Jan 14 00:57:10.474232 systemd-networkd[1508]: veth1481fa1e: Link UP Jan 14 00:57:10.480916 kernel: cni0: port 1(veth1481fa1e) entered blocking state Jan 14 00:57:10.481003 kernel: cni0: port 1(veth1481fa1e) entered disabled state Jan 14 00:57:10.481049 kernel: veth1481fa1e: entered allmulticast mode Jan 14 00:57:10.486610 kernel: veth1481fa1e: entered promiscuous mode Jan 14 00:57:10.510482 kernel: cni0: port 1(veth1481fa1e) entered 
blocking state Jan 14 00:57:10.510598 kernel: cni0: port 1(veth1481fa1e) entered forwarding state Jan 14 00:57:10.511013 systemd-networkd[1508]: veth1481fa1e: Gained carrier Jan 14 00:57:10.512192 systemd-networkd[1508]: cni0: Gained carrier Jan 14 00:57:10.526776 containerd[1615]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000129a0), "name":"cbr0", "type":"bridge"} Jan 14 00:57:10.526776 containerd[1615]: delegateAdd: netconf sent to delegate plugin: Jan 14 00:57:10.605262 containerd[1615]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-14T00:57:10.604385993Z" level=info msg="connecting to shim 7342b055236391f48b602a0e6a77c94ed322ba7dcc5b911cab9e35ddb3588c06" address="unix:///run/containerd/s/77946a72b4229ed235dc091199381b362d9dd6a4888acf95b4d7b952c6669dba" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:57:10.731994 systemd[1]: Started cri-containerd-7342b055236391f48b602a0e6a77c94ed322ba7dcc5b911cab9e35ddb3588c06.scope - libcontainer container 7342b055236391f48b602a0e6a77c94ed322ba7dcc5b911cab9e35ddb3588c06. 
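[Editor's note] The CreatePodSandbox failures above occur because the flannel CNI plugin reads /run/flannel/subnet.env, a file the kube-flannel container only writes once it is running; the retried RunPodSandbox calls succeed after flannel.1 gains carrier. A minimal sketch of that file is below — the network, subnet, MTU, and ipMasq values are assumptions inferred from the delegate netconf logged in this transcript (192.168.0.0/17 route, 192.168.0.0/24 node subnet, mtu 1450, ipMasq false), not a canonical reference:

```sh
# /run/flannel/subnet.env -- written by flanneld at startup and consumed by
# the flannel CNI plugin (loadFlannelSubnetEnv). Values mirror this log.
FLANNEL_NETWORK=192.168.0.0/17   # cluster pod network (assumed from logged route)
FLANNEL_SUBNET=192.168.0.1/24    # this node's lease, gateway-IP/prefix form
FLANNEL_MTU=1450                 # VXLAN overhead leaves 1450 of a 1500 MTU
FLANNEL_IPMASQ=false             # matches "ipMasq":false in the delegate netconf
```

Until this file exists, every CNI ADD for a pod sandbox fails exactly as logged; kubelet's normal retry loop is what eventually creates the sandboxes.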
Jan 14 00:57:10.795781 systemd-resolved[1294]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 00:57:10.887177 containerd[1615]: time="2026-01-14T00:57:10.886141010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tw8bb,Uid:d0bab8a6-13e8-4ae9-9fdd-5f1e25b48c44,Namespace:kube-system,Attempt:0,} returns sandbox id \"7342b055236391f48b602a0e6a77c94ed322ba7dcc5b911cab9e35ddb3588c06\"" Jan 14 00:57:10.888789 kubelet[2775]: E0114 00:57:10.888703 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:57:10.916395 containerd[1615]: time="2026-01-14T00:57:10.916123370Z" level=info msg="CreateContainer within sandbox \"7342b055236391f48b602a0e6a77c94ed322ba7dcc5b911cab9e35ddb3588c06\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 00:57:10.968384 containerd[1615]: time="2026-01-14T00:57:10.967313031Z" level=info msg="Container b196538d33883507789dc614c4a6f16692c5d595ac10bd343d3fcbb31646cb92: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:57:11.032707 containerd[1615]: time="2026-01-14T00:57:11.032134656Z" level=info msg="CreateContainer within sandbox \"7342b055236391f48b602a0e6a77c94ed322ba7dcc5b911cab9e35ddb3588c06\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b196538d33883507789dc614c4a6f16692c5d595ac10bd343d3fcbb31646cb92\"" Jan 14 00:57:11.042407 containerd[1615]: time="2026-01-14T00:57:11.042347161Z" level=info msg="StartContainer for \"b196538d33883507789dc614c4a6f16692c5d595ac10bd343d3fcbb31646cb92\"" Jan 14 00:57:11.050785 containerd[1615]: time="2026-01-14T00:57:11.048549327Z" level=info msg="connecting to shim b196538d33883507789dc614c4a6f16692c5d595ac10bd343d3fcbb31646cb92" address="unix:///run/containerd/s/77946a72b4229ed235dc091199381b362d9dd6a4888acf95b4d7b952c6669dba" protocol=ttrpc version=3 
Jan 14 00:57:11.166155 systemd[1]: Started cri-containerd-b196538d33883507789dc614c4a6f16692c5d595ac10bd343d3fcbb31646cb92.scope - libcontainer container b196538d33883507789dc614c4a6f16692c5d595ac10bd343d3fcbb31646cb92. Jan 14 00:57:11.279880 containerd[1615]: time="2026-01-14T00:57:11.279697416Z" level=info msg="StartContainer for \"b196538d33883507789dc614c4a6f16692c5d595ac10bd343d3fcbb31646cb92\" returns successfully" Jan 14 00:57:11.286358 kubelet[2775]: E0114 00:57:11.285026 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:57:11.287370 containerd[1615]: time="2026-01-14T00:57:11.285424679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jq6kf,Uid:84817deb-911f-46a4-8d76-e754fb1518ec,Namespace:kube-system,Attempt:0,}" Jan 14 00:57:11.382201 systemd-networkd[1508]: vetha572f6a2: Link UP Jan 14 00:57:11.392582 kernel: cni0: port 2(vetha572f6a2) entered blocking state Jan 14 00:57:11.392673 kernel: cni0: port 2(vetha572f6a2) entered disabled state Jan 14 00:57:11.395375 kernel: vetha572f6a2: entered allmulticast mode Jan 14 00:57:11.401816 kernel: vetha572f6a2: entered promiscuous mode Jan 14 00:57:11.510353 kernel: cni0: port 2(vetha572f6a2) entered blocking state Jan 14 00:57:11.510701 kernel: cni0: port 2(vetha572f6a2) entered forwarding state Jan 14 00:57:11.509544 systemd-networkd[1508]: vetha572f6a2: Gained carrier Jan 14 00:57:11.631882 systemd-networkd[1508]: cni0: Gained IPv6LL Jan 14 00:57:11.668348 containerd[1615]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, 
AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0001067f0), "name":"cbr0", "type":"bridge"} Jan 14 00:57:11.668348 containerd[1615]: delegateAdd: netconf sent to delegate plugin: Jan 14 00:57:11.749188 systemd-networkd[1508]: veth1481fa1e: Gained IPv6LL Jan 14 00:57:11.823808 containerd[1615]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-14T00:57:11.823694289Z" level=info msg="connecting to shim 91201e12680e217324f925e87cf43f75b9fcc325214b040ce0e30ef5690a5989" address="unix:///run/containerd/s/b8fdb7cb297cf180660a8dd1d2d7f2a19e9fa922aa9f68a85dbb043d861a72f5" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:57:11.976192 systemd[1]: Started cri-containerd-91201e12680e217324f925e87cf43f75b9fcc325214b040ce0e30ef5690a5989.scope - libcontainer container 91201e12680e217324f925e87cf43f75b9fcc325214b040ce0e30ef5690a5989. 
Jan 14 00:57:12.038381 systemd-resolved[1294]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 00:57:12.257318 kubelet[2775]: E0114 00:57:12.243391 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:57:12.288417 containerd[1615]: time="2026-01-14T00:57:12.288077593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jq6kf,Uid:84817deb-911f-46a4-8d76-e754fb1518ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"91201e12680e217324f925e87cf43f75b9fcc325214b040ce0e30ef5690a5989\"" Jan 14 00:57:12.294876 kubelet[2775]: E0114 00:57:12.294582 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:57:12.305996 kubelet[2775]: I0114 00:57:12.305802 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-gbqjk" podStartSLOduration=18.728311786 podStartE2EDuration="28.305781638s" podCreationTimestamp="2026-01-14 00:56:44 +0000 UTC" firstStartedPulling="2026-01-14 00:56:46.852949151 +0000 UTC m=+7.829898964" lastFinishedPulling="2026-01-14 00:56:56.430419012 +0000 UTC m=+17.407368816" observedRunningTime="2026-01-14 00:56:59.045356661 +0000 UTC m=+20.022306484" watchObservedRunningTime="2026-01-14 00:57:12.305781638 +0000 UTC m=+33.282731451" Jan 14 00:57:12.307108 kubelet[2775]: I0114 00:57:12.305998 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tw8bb" podStartSLOduration=27.305990456 podStartE2EDuration="27.305990456s" podCreationTimestamp="2026-01-14 00:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 
00:57:12.305310133 +0000 UTC m=+33.282259956" watchObservedRunningTime="2026-01-14 00:57:12.305990456 +0000 UTC m=+33.282940269" Jan 14 00:57:12.325337 containerd[1615]: time="2026-01-14T00:57:12.325283078Z" level=info msg="CreateContainer within sandbox \"91201e12680e217324f925e87cf43f75b9fcc325214b040ce0e30ef5690a5989\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 00:57:12.377215 containerd[1615]: time="2026-01-14T00:57:12.377034535Z" level=info msg="Container b81dbe7625ad39a47b571f08ea431a5b9caf7ca1b020dc3e3cfde645452812ea: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:57:12.407304 containerd[1615]: time="2026-01-14T00:57:12.407089955Z" level=info msg="CreateContainer within sandbox \"91201e12680e217324f925e87cf43f75b9fcc325214b040ce0e30ef5690a5989\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b81dbe7625ad39a47b571f08ea431a5b9caf7ca1b020dc3e3cfde645452812ea\"" Jan 14 00:57:12.408983 containerd[1615]: time="2026-01-14T00:57:12.408886873Z" level=info msg="StartContainer for \"b81dbe7625ad39a47b571f08ea431a5b9caf7ca1b020dc3e3cfde645452812ea\"" Jan 14 00:57:12.410861 containerd[1615]: time="2026-01-14T00:57:12.410770616Z" level=info msg="connecting to shim b81dbe7625ad39a47b571f08ea431a5b9caf7ca1b020dc3e3cfde645452812ea" address="unix:///run/containerd/s/b8fdb7cb297cf180660a8dd1d2d7f2a19e9fa922aa9f68a85dbb043d861a72f5" protocol=ttrpc version=3 Jan 14 00:57:12.485950 systemd[1]: Started cri-containerd-b81dbe7625ad39a47b571f08ea431a5b9caf7ca1b020dc3e3cfde645452812ea.scope - libcontainer container b81dbe7625ad39a47b571f08ea431a5b9caf7ca1b020dc3e3cfde645452812ea. 
Jan 14 00:57:12.667617 containerd[1615]: time="2026-01-14T00:57:12.667552723Z" level=info msg="StartContainer for \"b81dbe7625ad39a47b571f08ea431a5b9caf7ca1b020dc3e3cfde645452812ea\" returns successfully" Jan 14 00:57:13.258818 kubelet[2775]: E0114 00:57:13.256820 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:57:13.258818 kubelet[2775]: E0114 00:57:13.257595 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:57:13.443971 kubelet[2775]: I0114 00:57:13.442419 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jq6kf" podStartSLOduration=29.442392538 podStartE2EDuration="29.442392538s" podCreationTimestamp="2026-01-14 00:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:57:13.344511116 +0000 UTC m=+34.321460939" watchObservedRunningTime="2026-01-14 00:57:13.442392538 +0000 UTC m=+34.419342340" Jan 14 00:57:13.540242 systemd-networkd[1508]: vetha572f6a2: Gained IPv6LL Jan 14 00:57:14.264884 kubelet[2775]: E0114 00:57:14.264553 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:57:14.268521 kubelet[2775]: E0114 00:57:14.267409 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:57:15.292745 kubelet[2775]: E0114 00:57:15.291821 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:58:06.282520 kubelet[2775]: E0114 00:58:06.280921 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:58:09.288587 kubelet[2775]: E0114 00:58:09.287775 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:58:15.311418 kubelet[2775]: E0114 00:58:15.302873 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:58:16.352427 kubelet[2775]: E0114 00:58:16.311648 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:58:18.309634 kubelet[2775]: E0114 00:58:18.307559 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:58:21.747111 systemd[1]: Started sshd@5-10.0.0.55:22-10.0.0.1:52378.service - OpenSSH per-connection server daemon (10.0.0.1:52378). Jan 14 00:58:22.304116 sshd[3959]: Accepted publickey for core from 10.0.0.1 port 52378 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 00:58:22.344740 sshd-session[3959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:58:22.660101 systemd-logind[1589]: New session 7 of user core. Jan 14 00:58:22.666787 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 14 00:58:24.041427 sshd[3966]: Connection closed by 10.0.0.1 port 52378 Jan 14 00:58:24.015998 sshd-session[3959]: pam_unix(sshd:session): session closed for user core Jan 14 00:58:24.144101 systemd[1]: sshd@5-10.0.0.55:22-10.0.0.1:52378.service: Deactivated successfully. Jan 14 00:58:24.155418 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 00:58:24.159026 systemd-logind[1589]: Session 7 logged out. Waiting for processes to exit. Jan 14 00:58:24.162835 systemd-logind[1589]: Removed session 7. Jan 14 00:58:29.195700 systemd[1]: Started sshd@6-10.0.0.55:22-10.0.0.1:56248.service - OpenSSH per-connection server daemon (10.0.0.1:56248). Jan 14 00:58:29.718811 sshd[4003]: Accepted publickey for core from 10.0.0.1 port 56248 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 00:58:29.860002 sshd-session[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:58:30.710210 systemd-logind[1589]: New session 8 of user core. Jan 14 00:58:30.771936 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 14 00:58:34.159299 sshd[4008]: Connection closed by 10.0.0.1 port 56248 Jan 14 00:58:34.162847 sshd-session[4003]: pam_unix(sshd:session): session closed for user core Jan 14 00:58:34.178781 systemd[1]: sshd@6-10.0.0.55:22-10.0.0.1:56248.service: Deactivated successfully. Jan 14 00:58:34.193044 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 00:58:34.195689 systemd[1]: session-8.scope: Consumed 1.110s CPU time, 15.4M memory peak. Jan 14 00:58:34.205623 systemd-logind[1589]: Session 8 logged out. Waiting for processes to exit. Jan 14 00:58:34.221251 systemd-logind[1589]: Removed session 8. 
Jan 14 00:58:35.284563 kubelet[2775]: E0114 00:58:35.284048 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:58:35.284563 kubelet[2775]: E0114 00:58:35.284712 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:58:39.194924 systemd[1]: Started sshd@7-10.0.0.55:22-10.0.0.1:51868.service - OpenSSH per-connection server daemon (10.0.0.1:51868). Jan 14 00:58:39.337109 sshd[4059]: Accepted publickey for core from 10.0.0.1 port 51868 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 00:58:39.340416 sshd-session[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:58:39.352366 systemd-logind[1589]: New session 9 of user core. Jan 14 00:58:39.362173 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 14 00:58:39.558335 sshd[4069]: Connection closed by 10.0.0.1 port 51868 Jan 14 00:58:39.559595 sshd-session[4059]: pam_unix(sshd:session): session closed for user core Jan 14 00:58:39.570302 systemd[1]: sshd@7-10.0.0.55:22-10.0.0.1:51868.service: Deactivated successfully. Jan 14 00:58:39.573980 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 00:58:39.579573 systemd-logind[1589]: Session 9 logged out. Waiting for processes to exit. Jan 14 00:58:39.582254 systemd-logind[1589]: Removed session 9. Jan 14 00:58:44.604127 systemd[1]: Started sshd@8-10.0.0.55:22-10.0.0.1:46606.service - OpenSSH per-connection server daemon (10.0.0.1:46606). 
Jan 14 00:58:44.808723 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 46606 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 00:58:44.808159 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:58:44.847542 systemd-logind[1589]: New session 10 of user core. Jan 14 00:58:44.872965 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 14 00:58:45.272624 sshd[4110]: Connection closed by 10.0.0.1 port 46606 Jan 14 00:58:45.274023 sshd-session[4106]: pam_unix(sshd:session): session closed for user core Jan 14 00:58:45.286804 systemd[1]: sshd@8-10.0.0.55:22-10.0.0.1:46606.service: Deactivated successfully. Jan 14 00:58:45.293028 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 00:58:45.299759 systemd-logind[1589]: Session 10 logged out. Waiting for processes to exit. Jan 14 00:58:45.303310 systemd-logind[1589]: Removed session 10. Jan 14 00:58:50.311789 systemd[1]: Started sshd@9-10.0.0.55:22-10.0.0.1:46614.service - OpenSSH per-connection server daemon (10.0.0.1:46614). Jan 14 00:58:50.472179 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 46614 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 00:58:50.484357 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:58:50.511545 systemd-logind[1589]: New session 11 of user core. Jan 14 00:58:50.522776 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 14 00:58:50.898957 sshd[4151]: Connection closed by 10.0.0.1 port 46614 Jan 14 00:58:50.903246 sshd-session[4147]: pam_unix(sshd:session): session closed for user core Jan 14 00:58:50.919929 systemd[1]: sshd@9-10.0.0.55:22-10.0.0.1:46614.service: Deactivated successfully. Jan 14 00:58:50.939161 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 00:58:50.952835 systemd-logind[1589]: Session 11 logged out. Waiting for processes to exit. 
Jan 14 00:58:50.959833 systemd-logind[1589]: Removed session 11.
Jan 14 00:58:55.925824 systemd[1]: Started sshd@10-10.0.0.55:22-10.0.0.1:58722.service - OpenSSH per-connection server daemon (10.0.0.1:58722).
Jan 14 00:58:56.026682 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 58722 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:58:56.029286 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:58:56.038028 systemd-logind[1589]: New session 12 of user core.
Jan 14 00:58:56.049114 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 14 00:58:56.201221 sshd[4192]: Connection closed by 10.0.0.1 port 58722
Jan 14 00:58:56.201006 sshd-session[4185]: pam_unix(sshd:session): session closed for user core
Jan 14 00:58:56.221887 systemd[1]: sshd@10-10.0.0.55:22-10.0.0.1:58722.service: Deactivated successfully.
Jan 14 00:58:56.226203 systemd[1]: session-12.scope: Deactivated successfully.
Jan 14 00:58:56.227599 systemd-logind[1589]: Session 12 logged out. Waiting for processes to exit.
Jan 14 00:58:56.232110 systemd[1]: Started sshd@11-10.0.0.55:22-10.0.0.1:58724.service - OpenSSH per-connection server daemon (10.0.0.1:58724).
Jan 14 00:58:56.234319 systemd-logind[1589]: Removed session 12.
Jan 14 00:58:56.327256 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 58724 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:58:56.331974 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:58:56.356759 systemd-logind[1589]: New session 13 of user core.
Jan 14 00:58:56.365986 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 14 00:58:56.629624 sshd[4214]: Connection closed by 10.0.0.1 port 58724
Jan 14 00:58:56.630252 sshd-session[4210]: pam_unix(sshd:session): session closed for user core
Jan 14 00:58:56.644019 systemd[1]: sshd@11-10.0.0.55:22-10.0.0.1:58724.service: Deactivated successfully.
Jan 14 00:58:56.646875 systemd[1]: session-13.scope: Deactivated successfully.
Jan 14 00:58:56.650657 systemd-logind[1589]: Session 13 logged out. Waiting for processes to exit.
Jan 14 00:58:56.655316 systemd[1]: Started sshd@12-10.0.0.55:22-10.0.0.1:58732.service - OpenSSH per-connection server daemon (10.0.0.1:58732).
Jan 14 00:58:56.661164 systemd-logind[1589]: Removed session 13.
Jan 14 00:58:56.762002 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 58732 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:58:56.764052 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:58:56.777308 systemd-logind[1589]: New session 14 of user core.
Jan 14 00:58:56.782736 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 14 00:58:56.920576 sshd[4230]: Connection closed by 10.0.0.1 port 58732
Jan 14 00:58:56.921760 sshd-session[4226]: pam_unix(sshd:session): session closed for user core
Jan 14 00:58:56.928304 systemd[1]: sshd@12-10.0.0.55:22-10.0.0.1:58732.service: Deactivated successfully.
Jan 14 00:58:56.931556 systemd[1]: session-14.scope: Deactivated successfully.
Jan 14 00:58:56.933158 systemd-logind[1589]: Session 14 logged out. Waiting for processes to exit.
Jan 14 00:58:56.935408 systemd-logind[1589]: Removed session 14.
Jan 14 00:59:01.965380 systemd[1]: Started sshd@13-10.0.0.55:22-10.0.0.1:58832.service - OpenSSH per-connection server daemon (10.0.0.1:58832).
Jan 14 00:59:02.070693 sshd[4264]: Accepted publickey for core from 10.0.0.1 port 58832 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:59:02.074920 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:59:02.107989 systemd-logind[1589]: New session 15 of user core.
Jan 14 00:59:02.133609 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 14 00:59:02.322356 sshd[4268]: Connection closed by 10.0.0.1 port 58832
Jan 14 00:59:02.322683 sshd-session[4264]: pam_unix(sshd:session): session closed for user core
Jan 14 00:59:02.331124 systemd[1]: sshd@13-10.0.0.55:22-10.0.0.1:58832.service: Deactivated successfully.
Jan 14 00:59:02.334286 systemd[1]: session-15.scope: Deactivated successfully.
Jan 14 00:59:02.338301 systemd-logind[1589]: Session 15 logged out. Waiting for processes to exit.
Jan 14 00:59:02.342995 systemd-logind[1589]: Removed session 15.
Jan 14 00:59:07.355646 systemd[1]: Started sshd@14-10.0.0.55:22-10.0.0.1:49728.service - OpenSSH per-connection server daemon (10.0.0.1:49728).
Jan 14 00:59:07.487684 sshd[4301]: Accepted publickey for core from 10.0.0.1 port 49728 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:59:07.491540 sshd-session[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:59:07.516147 systemd-logind[1589]: New session 16 of user core.
Jan 14 00:59:07.542608 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 14 00:59:07.789511 sshd[4305]: Connection closed by 10.0.0.1 port 49728
Jan 14 00:59:07.789746 sshd-session[4301]: pam_unix(sshd:session): session closed for user core
Jan 14 00:59:07.827226 systemd[1]: sshd@14-10.0.0.55:22-10.0.0.1:49728.service: Deactivated successfully.
Jan 14 00:59:07.830751 systemd[1]: session-16.scope: Deactivated successfully.
Jan 14 00:59:07.836196 systemd-logind[1589]: Session 16 logged out. Waiting for processes to exit.
Jan 14 00:59:07.839699 systemd[1]: Started sshd@15-10.0.0.55:22-10.0.0.1:49738.service - OpenSSH per-connection server daemon (10.0.0.1:49738).
Jan 14 00:59:07.842811 systemd-logind[1589]: Removed session 16.
Jan 14 00:59:07.971350 sshd[4318]: Accepted publickey for core from 10.0.0.1 port 49738 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:59:07.981902 sshd-session[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:59:08.015745 systemd-logind[1589]: New session 17 of user core.
Jan 14 00:59:08.034747 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 14 00:59:08.878955 sshd[4323]: Connection closed by 10.0.0.1 port 49738
Jan 14 00:59:08.881738 sshd-session[4318]: pam_unix(sshd:session): session closed for user core
Jan 14 00:59:08.902214 systemd[1]: sshd@15-10.0.0.55:22-10.0.0.1:49738.service: Deactivated successfully.
Jan 14 00:59:08.906756 systemd[1]: session-17.scope: Deactivated successfully.
Jan 14 00:59:08.909885 systemd-logind[1589]: Session 17 logged out. Waiting for processes to exit.
Jan 14 00:59:08.914101 systemd[1]: Started sshd@16-10.0.0.55:22-10.0.0.1:49752.service - OpenSSH per-connection server daemon (10.0.0.1:49752).
Jan 14 00:59:08.927595 systemd-logind[1589]: Removed session 17.
Jan 14 00:59:09.147113 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 49752 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:59:09.154919 sshd-session[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:59:09.177682 systemd-logind[1589]: New session 18 of user core.
Jan 14 00:59:09.199767 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 14 00:59:10.381561 sshd[4339]: Connection closed by 10.0.0.1 port 49752
Jan 14 00:59:10.382741 sshd-session[4335]: pam_unix(sshd:session): session closed for user core
Jan 14 00:59:10.398177 systemd[1]: sshd@16-10.0.0.55:22-10.0.0.1:49752.service: Deactivated successfully.
Jan 14 00:59:10.401116 systemd[1]: session-18.scope: Deactivated successfully.
Jan 14 00:59:10.402706 systemd-logind[1589]: Session 18 logged out. Waiting for processes to exit.
Jan 14 00:59:10.406835 systemd-logind[1589]: Removed session 18.
Jan 14 00:59:10.409737 systemd[1]: Started sshd@17-10.0.0.55:22-10.0.0.1:49764.service - OpenSSH per-connection server daemon (10.0.0.1:49764).
Jan 14 00:59:10.597001 sshd[4372]: Accepted publickey for core from 10.0.0.1 port 49764 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:59:10.603994 sshd-session[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:59:10.634558 systemd-logind[1589]: New session 19 of user core.
Jan 14 00:59:10.648810 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 14 00:59:12.778021 sshd[4376]: Connection closed by 10.0.0.1 port 49764
Jan 14 00:59:12.787507 sshd-session[4372]: pam_unix(sshd:session): session closed for user core
Jan 14 00:59:12.991259 systemd[1]: sshd@17-10.0.0.55:22-10.0.0.1:49764.service: Deactivated successfully.
Jan 14 00:59:13.107903 systemd[1]: session-19.scope: Deactivated successfully.
Jan 14 00:59:13.132938 systemd[1]: session-19.scope: Consumed 1.160s CPU time, 28.3M memory peak.
Jan 14 00:59:13.212847 systemd-logind[1589]: Session 19 logged out. Waiting for processes to exit.
Jan 14 00:59:13.272957 systemd[1]: Started sshd@18-10.0.0.55:22-10.0.0.1:33384.service - OpenSSH per-connection server daemon (10.0.0.1:33384).
Jan 14 00:59:13.339041 systemd-logind[1589]: Removed session 19.
Jan 14 00:59:16.894804 sshd[4395]: Accepted publickey for core from 10.0.0.1 port 33384 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:59:16.899400 kubelet[2775]: E0114 00:59:16.899292 2775 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.608s"
Jan 14 00:59:16.900966 sshd-session[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:59:16.904770 kubelet[2775]: E0114 00:59:16.904742 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:59:17.036102 systemd-logind[1589]: New session 20 of user core.
Jan 14 00:59:17.059594 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 14 00:59:19.197634 sshd[4419]: Connection closed by 10.0.0.1 port 33384
Jan 14 00:59:19.201935 sshd-session[4395]: pam_unix(sshd:session): session closed for user core
Jan 14 00:59:19.311515 systemd[1]: sshd@18-10.0.0.55:22-10.0.0.1:33384.service: Deactivated successfully.
Jan 14 00:59:19.379788 systemd[1]: session-20.scope: Deactivated successfully.
Jan 14 00:59:19.454373 systemd[1]: session-20.scope: Consumed 1.180s CPU time, 16.7M memory peak.
Jan 14 00:59:19.582931 systemd-logind[1589]: Session 20 logged out. Waiting for processes to exit.
Jan 14 00:59:19.655043 systemd-logind[1589]: Removed session 20.
Jan 14 00:59:20.584103 kubelet[2775]: E0114 00:59:20.552067 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:59:23.462968 kubelet[2775]: E0114 00:59:23.460794 2775 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.093s"
Jan 14 00:59:24.948879 systemd[1]: Started sshd@19-10.0.0.55:22-10.0.0.1:50066.service - OpenSSH per-connection server daemon (10.0.0.1:50066).
Jan 14 00:59:25.444789 kubelet[2775]: E0114 00:59:25.444703 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:59:26.571113 sshd[4450]: Accepted publickey for core from 10.0.0.1 port 50066 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:59:26.599588 sshd-session[4450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:59:26.635825 systemd-logind[1589]: New session 21 of user core.
Jan 14 00:59:26.652793 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 14 00:59:26.938583 sshd[4458]: Connection closed by 10.0.0.1 port 50066
Jan 14 00:59:26.934329 sshd-session[4450]: pam_unix(sshd:session): session closed for user core
Jan 14 00:59:26.948076 systemd[1]: sshd@19-10.0.0.55:22-10.0.0.1:50066.service: Deactivated successfully.
Jan 14 00:59:26.954526 systemd[1]: session-21.scope: Deactivated successfully.
Jan 14 00:59:26.959365 systemd-logind[1589]: Session 21 logged out. Waiting for processes to exit.
Jan 14 00:59:26.964745 systemd-logind[1589]: Removed session 21.
Jan 14 00:59:28.280774 kubelet[2775]: E0114 00:59:28.279835 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:59:31.953302 systemd[1]: Started sshd@20-10.0.0.55:22-10.0.0.1:50198.service - OpenSSH per-connection server daemon (10.0.0.1:50198).
Jan 14 00:59:32.059479 sshd[4491]: Accepted publickey for core from 10.0.0.1 port 50198 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:59:32.062240 sshd-session[4491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:59:32.075987 systemd-logind[1589]: New session 22 of user core.
Jan 14 00:59:32.083826 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 14 00:59:32.221163 sshd[4495]: Connection closed by 10.0.0.1 port 50198
Jan 14 00:59:32.221985 sshd-session[4491]: pam_unix(sshd:session): session closed for user core
Jan 14 00:59:32.230662 systemd[1]: sshd@20-10.0.0.55:22-10.0.0.1:50198.service: Deactivated successfully.
Jan 14 00:59:32.234853 systemd[1]: session-22.scope: Deactivated successfully.
Jan 14 00:59:32.238810 systemd-logind[1589]: Session 22 logged out. Waiting for processes to exit.
Jan 14 00:59:32.243877 systemd-logind[1589]: Removed session 22.
Jan 14 00:59:37.241329 systemd[1]: Started sshd@21-10.0.0.55:22-10.0.0.1:32844.service - OpenSSH per-connection server daemon (10.0.0.1:32844).
Jan 14 00:59:37.336902 sshd[4530]: Accepted publickey for core from 10.0.0.1 port 32844 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:59:37.338528 sshd-session[4530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:59:37.356275 systemd-logind[1589]: New session 23 of user core.
Jan 14 00:59:37.364372 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 14 00:59:37.524996 sshd[4534]: Connection closed by 10.0.0.1 port 32844
Jan 14 00:59:37.525363 sshd-session[4530]: pam_unix(sshd:session): session closed for user core
Jan 14 00:59:37.536539 systemd[1]: sshd@21-10.0.0.55:22-10.0.0.1:32844.service: Deactivated successfully.
Jan 14 00:59:37.541944 systemd[1]: session-23.scope: Deactivated successfully.
Jan 14 00:59:37.547111 systemd-logind[1589]: Session 23 logged out. Waiting for processes to exit.
Jan 14 00:59:37.549405 systemd-logind[1589]: Removed session 23.
Jan 14 00:59:40.280113 kubelet[2775]: E0114 00:59:40.279782 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:59:42.547952 systemd[1]: Started sshd@22-10.0.0.55:22-10.0.0.1:35320.service - OpenSSH per-connection server daemon (10.0.0.1:35320).
Jan 14 00:59:42.644608 sshd[4569]: Accepted publickey for core from 10.0.0.1 port 35320 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:59:42.647789 sshd-session[4569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:59:42.656953 systemd-logind[1589]: New session 24 of user core.
Jan 14 00:59:42.665919 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 14 00:59:42.782708 sshd[4573]: Connection closed by 10.0.0.1 port 35320
Jan 14 00:59:42.783310 sshd-session[4569]: pam_unix(sshd:session): session closed for user core
Jan 14 00:59:42.792095 systemd[1]: sshd@22-10.0.0.55:22-10.0.0.1:35320.service: Deactivated successfully.
Jan 14 00:59:42.795336 systemd[1]: session-24.scope: Deactivated successfully.
Jan 14 00:59:42.799903 systemd-logind[1589]: Session 24 logged out. Waiting for processes to exit.
Jan 14 00:59:42.803083 systemd-logind[1589]: Removed session 24.
Jan 14 00:59:43.282801 kubelet[2775]: E0114 00:59:43.281248 2775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:59:47.804260 systemd[1]: Started sshd@23-10.0.0.55:22-10.0.0.1:35326.service - OpenSSH per-connection server daemon (10.0.0.1:35326).
Jan 14 00:59:47.884138 sshd[4606]: Accepted publickey for core from 10.0.0.1 port 35326 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:59:47.886698 sshd-session[4606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:59:47.893928 systemd-logind[1589]: New session 25 of user core.
Jan 14 00:59:47.904726 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 14 00:59:48.000746 sshd[4610]: Connection closed by 10.0.0.1 port 35326
Jan 14 00:59:48.001222 sshd-session[4606]: pam_unix(sshd:session): session closed for user core
Jan 14 00:59:48.008035 systemd[1]: sshd@23-10.0.0.55:22-10.0.0.1:35326.service: Deactivated successfully.
Jan 14 00:59:48.011229 systemd[1]: session-25.scope: Deactivated successfully.
Jan 14 00:59:48.013277 systemd-logind[1589]: Session 25 logged out. Waiting for processes to exit.
Jan 14 00:59:48.015509 systemd-logind[1589]: Removed session 25.