Jan 14 00:54:34.749065 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 13 22:15:29 -00 2026
Jan 14 00:54:34.749993 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6d34ab71a3dc5a0ab37eb2c851228af18a1e24f648223df9a1099dbd7db2cfcf
Jan 14 00:54:34.750114 kernel: BIOS-provided physical RAM map:
Jan 14 00:54:34.750122 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 14 00:54:34.750128 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 14 00:54:34.750134 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 14 00:54:34.750141 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 14 00:54:34.750147 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 14 00:54:34.750452 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 14 00:54:34.750460 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 14 00:54:34.750585 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 14 00:54:34.750596 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 14 00:54:34.750605 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 14 00:54:34.750614 kernel: NX (Execute Disable) protection: active
Jan 14 00:54:34.750626 kernel: APIC: Static calls initialized
Jan 14 00:54:34.750882 kernel: SMBIOS 2.8 present.
Jan 14 00:54:34.750997 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 14 00:54:34.751005 kernel: DMI: Memory slots populated: 1/1
Jan 14 00:54:34.751011 kernel: Hypervisor detected: KVM
Jan 14 00:54:34.751018 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 14 00:54:34.751025 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 14 00:54:34.751031 kernel: kvm-clock: using sched offset of 17287021773 cycles
Jan 14 00:54:34.751040 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 14 00:54:34.751047 kernel: tsc: Detected 2445.426 MHz processor
Jan 14 00:54:34.751163 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 00:54:34.751171 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 00:54:34.751179 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 14 00:54:34.751186 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 14 00:54:34.751193 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 00:54:34.751200 kernel: Using GB pages for direct mapping
Jan 14 00:54:34.751207 kernel: ACPI: Early table checksum verification disabled
Jan 14 00:54:34.751556 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 14 00:54:34.751569 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:34.751583 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:34.751594 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:34.751604 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 14 00:54:34.751613 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:34.751623 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:34.751885 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:34.751903 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:34.752046 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 14 00:54:34.752062 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 14 00:54:34.752072 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 14 00:54:34.752083 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 14 00:54:34.752230 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 14 00:54:34.752474 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 14 00:54:34.752487 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 14 00:54:34.752497 kernel: No NUMA configuration found
Jan 14 00:54:34.752509 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 14 00:54:34.752523 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 14 00:54:34.752792 kernel: Zone ranges:
Jan 14 00:54:34.752804 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 00:54:34.752815 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 14 00:54:34.752827 kernel: Normal empty
Jan 14 00:54:34.752839 kernel: Device empty
Jan 14 00:54:34.752851 kernel: Movable zone start for each node
Jan 14 00:54:34.752862 kernel: Early memory node ranges
Jan 14 00:54:34.753001 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 14 00:54:34.753014 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 14 00:54:34.753026 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 14 00:54:34.753038 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 00:54:34.753052 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 14 00:54:34.753174 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 14 00:54:34.753190 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 14 00:54:34.753202 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 14 00:54:34.753570 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 14 00:54:34.753582 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 14 00:54:34.753817 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 14 00:54:34.753833 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 00:54:34.753846 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 14 00:54:34.753856 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 14 00:54:34.753866 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 00:54:34.754005 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 14 00:54:34.754018 kernel: TSC deadline timer available
Jan 14 00:54:34.754030 kernel: CPU topo: Max. logical packages: 1
Jan 14 00:54:34.754040 kernel: CPU topo: Max. logical dies: 1
Jan 14 00:54:34.754050 kernel: CPU topo: Max. dies per package: 1
Jan 14 00:54:34.754060 kernel: CPU topo: Max. threads per core: 1
Jan 14 00:54:34.754072 kernel: CPU topo: Num. cores per package: 4
Jan 14 00:54:34.754778 kernel: CPU topo: Num. threads per package: 4
Jan 14 00:54:34.754792 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 14 00:54:34.754803 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 14 00:54:34.754815 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 14 00:54:34.754828 kernel: kvm-guest: setup PV sched yield
Jan 14 00:54:34.754842 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 14 00:54:34.754853 kernel: Booting paravirtualized kernel on KVM
Jan 14 00:54:34.754996 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 00:54:34.755010 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 14 00:54:34.755020 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 14 00:54:34.755031 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 14 00:54:34.755042 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 14 00:54:34.755055 kernel: kvm-guest: PV spinlocks enabled
Jan 14 00:54:34.755068 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 00:54:34.755212 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6d34ab71a3dc5a0ab37eb2c851228af18a1e24f648223df9a1099dbd7db2cfcf
Jan 14 00:54:34.755226 kernel: random: crng init done
Jan 14 00:54:34.755465 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 00:54:34.755481 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 14 00:54:34.755494 kernel: Fallback order for Node 0: 0
Jan 14 00:54:34.755508 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 14 00:54:34.755521 kernel: Policy zone: DMA32
Jan 14 00:54:34.755793 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 00:54:34.755807 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 14 00:54:34.755817 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 14 00:54:34.755827 kernel: ftrace: allocated 157 pages with 5 groups
Jan 14 00:54:34.755838 kernel: Dynamic Preempt: voluntary
Jan 14 00:54:34.755850 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 00:54:34.755863 kernel: rcu: RCU event tracing is enabled.
Jan 14 00:54:34.756006 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 14 00:54:34.756018 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 00:54:34.756149 kernel: Rude variant of Tasks RCU enabled.
Jan 14 00:54:34.756164 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 00:54:34.756175 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 00:54:34.756187 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 14 00:54:34.756198 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 00:54:34.756212 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 00:54:34.756959 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 00:54:34.756972 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 14 00:54:34.756983 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 00:54:34.758172 kernel: Console: colour VGA+ 80x25
Jan 14 00:54:34.758543 kernel: printk: legacy console [ttyS0] enabled
Jan 14 00:54:34.758557 kernel: ACPI: Core revision 20240827
Jan 14 00:54:34.758569 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 14 00:54:34.758579 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 00:54:34.758593 kernel: x2apic enabled
Jan 14 00:54:34.758858 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 14 00:54:34.758991 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 14 00:54:34.759006 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 14 00:54:34.759016 kernel: kvm-guest: setup PV IPIs
Jan 14 00:54:34.759160 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 14 00:54:34.759175 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 14 00:54:34.759189 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 14 00:54:34.759200 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 14 00:54:34.759211 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 14 00:54:34.759223 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 14 00:54:34.759468 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 00:54:34.759617 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 00:54:34.759763 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 14 00:54:34.759778 kernel: Speculative Store Bypass: Vulnerable
Jan 14 00:54:34.759789 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 14 00:54:34.759801 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 14 00:54:34.759814 kernel: active return thunk: srso_alias_return_thunk
Jan 14 00:54:34.759961 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 14 00:54:34.759976 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 14 00:54:34.759989 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 00:54:34.760000 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 00:54:34.760012 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 00:54:34.760024 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 00:54:34.760036 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 14 00:54:34.760171 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 14 00:54:34.760180 kernel: Freeing SMP alternatives memory: 32K
Jan 14 00:54:34.760188 kernel: pid_max: default: 32768 minimum: 301
Jan 14 00:54:34.760196 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 14 00:54:34.760203 kernel: landlock: Up and running.
Jan 14 00:54:34.760212 kernel: SELinux: Initializing.
Jan 14 00:54:34.760219 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 00:54:34.760227 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 00:54:34.760603 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 14 00:54:34.760618 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 14 00:54:34.760749 kernel: signal: max sigframe size: 1776
Jan 14 00:54:34.760759 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 00:54:34.760768 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 00:54:34.760776 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 14 00:54:34.760783 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 00:54:34.760917 kernel: smp: Bringing up secondary CPUs ...
Jan 14 00:54:34.760929 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 00:54:34.760942 kernel: .... node #0, CPUs: #1 #2 #3
Jan 14 00:54:34.760955 kernel: smp: Brought up 1 node, 4 CPUs
Jan 14 00:54:34.760966 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 14 00:54:34.760978 kernel: Memory: 2445292K/2571752K available (14336K kernel code, 2445K rwdata, 31636K rodata, 15536K init, 2504K bss, 120520K reserved, 0K cma-reserved)
Jan 14 00:54:34.760991 kernel: devtmpfs: initialized
Jan 14 00:54:34.761132 kernel: x86/mm: Memory block size: 128MB
Jan 14 00:54:34.761146 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 00:54:34.761158 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 14 00:54:34.761171 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 00:54:34.761185 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 00:54:34.761195 kernel: audit: initializing netlink subsys (disabled)
Jan 14 00:54:34.761206 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 00:54:34.761595 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 00:54:34.761607 kernel: audit: type=2000 audit(1768352041.245:1): state=initialized audit_enabled=0 res=1
Jan 14 00:54:34.761618 kernel: cpuidle: using governor menu
Jan 14 00:54:34.761757 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 00:54:34.761772 kernel: dca service started, version 1.12.1
Jan 14 00:54:34.761784 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 14 00:54:34.761796 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 14 00:54:34.761946 kernel: PCI: Using configuration type 1 for base access
Jan 14 00:54:34.761958 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 00:54:34.761969 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 00:54:34.761983 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 00:54:34.761995 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 00:54:34.762005 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 00:54:34.762018 kernel: ACPI: Added _OSI(Module Device)
Jan 14 00:54:34.762167 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 00:54:34.762179 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 00:54:34.762190 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 00:54:34.762202 kernel: ACPI: Interpreter enabled
Jan 14 00:54:34.762217 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 14 00:54:34.762229 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 00:54:34.762483 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 00:54:34.762623 kernel: PCI: Using E820 reservations for host bridge windows
Jan 14 00:54:34.762761 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 14 00:54:34.762773 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 14 00:54:34.763507 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 14 00:54:34.763918 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 14 00:54:34.764182 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 14 00:54:34.764576 kernel: PCI host bridge to bus 0000:00
Jan 14 00:54:34.771230 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 14 00:54:34.772622 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 14 00:54:34.772968 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 14 00:54:34.773197 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 14 00:54:34.773891 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 14 00:54:34.774529 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 14 00:54:34.774892 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 14 00:54:34.775156 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 14 00:54:34.775910 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 14 00:54:34.776557 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 14 00:54:34.777059 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 14 00:54:34.777545 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 14 00:54:34.777925 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 14 00:54:34.778157 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 22460 usecs
Jan 14 00:54:34.778782 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 14 00:54:34.779160 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 14 00:54:34.779616 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 14 00:54:34.779992 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 14 00:54:34.780473 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 14 00:54:34.780847 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 14 00:54:34.781217 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 14 00:54:34.781841 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 14 00:54:34.782091 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 14 00:54:34.782593 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 14 00:54:34.782972 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 14 00:54:34.783209 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 14 00:54:34.783818 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 14 00:54:34.784190 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 14 00:54:34.784779 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 14 00:54:34.785017 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 21484 usecs
Jan 14 00:54:34.785482 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 14 00:54:34.785852 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 14 00:54:34.786091 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 14 00:54:34.786860 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 14 00:54:34.787100 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 14 00:54:34.787120 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 14 00:54:34.787132 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 14 00:54:34.787143 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 14 00:54:34.787154 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 14 00:54:34.787528 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 14 00:54:34.787540 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 14 00:54:34.787551 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 14 00:54:34.787561 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 14 00:54:34.787572 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 14 00:54:34.787585 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 14 00:54:34.787599 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 14 00:54:34.787847 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 14 00:54:34.787859 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 14 00:54:34.787870 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 14 00:54:34.787881 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 14 00:54:34.787892 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 14 00:54:34.787905 kernel: iommu: Default domain type: Translated
Jan 14 00:54:34.787920 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 00:54:34.788050 kernel: PCI: Using ACPI for IRQ routing
Jan 14 00:54:34.788061 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 14 00:54:34.788073 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 14 00:54:34.788083 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 14 00:54:34.788559 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 14 00:54:34.788940 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 14 00:54:34.789174 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 14 00:54:34.789535 kernel: vgaarb: loaded
Jan 14 00:54:34.789549 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 14 00:54:34.789560 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 14 00:54:34.789571 kernel: clocksource: Switched to clocksource kvm-clock
Jan 14 00:54:34.789581 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 00:54:34.789593 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 00:54:34.789605 kernel: pnp: PnP ACPI init
Jan 14 00:54:34.790108 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 14 00:54:34.790127 kernel: pnp: PnP ACPI: found 6 devices
Jan 14 00:54:34.790142 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 00:54:34.790157 kernel: NET: Registered PF_INET protocol family
Jan 14 00:54:34.790168 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 00:54:34.790179 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 14 00:54:34.790190 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 00:54:34.790577 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 14 00:54:34.790589 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 14 00:54:34.790601 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 14 00:54:34.790611 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 00:54:34.790622 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 00:54:34.790766 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 00:54:34.790778 kernel: NET: Registered PF_XDP protocol family
Jan 14 00:54:34.791135 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 14 00:54:34.791600 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 14 00:54:34.791958 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 14 00:54:34.792178 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 14 00:54:34.792624 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 14 00:54:34.792976 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 14 00:54:34.793135 kernel: PCI: CLS 0 bytes, default 64
Jan 14 00:54:34.793149 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 14 00:54:34.793163 kernel: Initialise system trusted keyrings
Jan 14 00:54:34.793176 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 14 00:54:34.793189 kernel: Key type asymmetric registered
Jan 14 00:54:34.793201 kernel: Asymmetric key parser 'x509' registered
Jan 14 00:54:34.793214 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 14 00:54:34.793488 kernel: io scheduler mq-deadline registered
Jan 14 00:54:34.793502 kernel: io scheduler kyber registered
Jan 14 00:54:34.793517 kernel: io scheduler bfq registered
Jan 14 00:54:34.793529 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 00:54:34.793541 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 14 00:54:34.793552 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 14 00:54:34.793562 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 14 00:54:34.793573 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 00:54:34.793836 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 00:54:34.793850 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 14 00:54:34.793863 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 14 00:54:34.793876 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 14 00:54:34.794135 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 14 00:54:34.794154 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 14 00:54:34.794899 kernel: rtc_cmos 00:04: registered as rtc0
Jan 14 00:54:34.795128 kernel: rtc_cmos 00:04: setting system clock to 2026-01-14T00:54:17 UTC (1768352057)
Jan 14 00:54:34.795604 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 14 00:54:34.795625 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 14 00:54:34.795775 kernel: NET: Registered PF_INET6 protocol family
Jan 14 00:54:34.795788 kernel: Segment Routing with IPv6
Jan 14 00:54:34.795801 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 00:54:34.795950 kernel: NET: Registered PF_PACKET protocol family
Jan 14 00:54:34.795963 kernel: Key type dns_resolver registered
Jan 14 00:54:34.795975 kernel: IPI shorthand broadcast: enabled
Jan 14 00:54:34.795988 kernel: sched_clock: Marking stable (13984252818, 1653064154)->(17781680134, -2144363162)
Jan 14 00:54:34.796001 kernel: registered taskstats version 1
Jan 14 00:54:34.796013 kernel: Loading compiled-in X.509 certificates
Jan 14 00:54:34.796025 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: 58a78462583b088d099087e6f2d97e37d80e06bb'
Jan 14 00:54:34.796168 kernel: Demotion targets for Node 0: null
Jan 14 00:54:34.796181 kernel: Key type .fscrypt registered
Jan 14 00:54:34.796193 kernel: Key type fscrypt-provisioning registered
Jan 14 00:54:34.796206 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 14 00:54:34.796218 kernel: ima: Allocated hash algorithm: sha1
Jan 14 00:54:34.796231 kernel: ima: No architecture policies found
Jan 14 00:54:34.796485 kernel: clk: Disabling unused clocks
Jan 14 00:54:34.796752 kernel: Freeing unused kernel image (initmem) memory: 15536K
Jan 14 00:54:34.796765 kernel: Write protecting the kernel read-only data: 47104k
Jan 14 00:54:34.796776 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K
Jan 14 00:54:34.796787 kernel: Run /init as init process
Jan 14 00:54:34.796798 kernel: with arguments:
Jan 14 00:54:34.796808 kernel: /init
Jan 14 00:54:34.796821 kernel: with environment:
Jan 14 00:54:34.796947 kernel: HOME=/
Jan 14 00:54:34.796959 kernel: TERM=linux
Jan 14 00:54:34.796970 kernel: SCSI subsystem initialized
Jan 14 00:54:34.796981 kernel: libata version 3.00 loaded.
Jan 14 00:54:34.797224 kernel: ahci 0000:00:1f.2: version 3.0
Jan 14 00:54:34.797459 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 14 00:54:34.797829 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 14 00:54:34.798215 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 14 00:54:34.798845 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 14 00:54:34.799117 kernel: scsi host0: ahci
Jan 14 00:54:34.799759 kernel: scsi host1: ahci
Jan 14 00:54:34.801567 kernel: scsi host2: ahci
Jan 14 00:54:34.802095 kernel: scsi host3: ahci
Jan 14 00:54:34.802818 kernel: scsi host4: ahci
Jan 14 00:54:34.803084 kernel: scsi host5: ahci
Jan 14 00:54:34.803104 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Jan 14 00:54:34.803117 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Jan 14 00:54:34.803129 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Jan 14 00:54:34.803502 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Jan 14 00:54:34.803515 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Jan 14 00:54:34.803527 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Jan 14 00:54:34.803538 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 14 00:54:34.803549 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 14 00:54:34.803561 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 14 00:54:34.803575 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 14 00:54:34.803817 kernel: ata3.00: LPM support broken, forcing max_power
Jan 14 00:54:34.803828 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 14 00:54:34.803840 kernel: ata3.00: applying bridge limits
Jan 14 00:54:34.803852 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 14 00:54:34.803866 kernel: ata3.00: LPM support broken, forcing max_power
Jan 14 00:54:34.803993 kernel: ata3.00: configured for UDMA/100
Jan 14 00:54:34.804008 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 14 00:54:34.804781 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 14 00:54:34.805045 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 14 00:54:34.805562 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 14 00:54:34.805948 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Jan 14 00:54:34.805968 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 14 00:54:34.805980 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 14 00:54:34.806775 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 14 00:54:34.806794 kernel: GPT:16515071 != 27000831
Jan 14 00:54:34.806807 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 14 00:54:34.806820 kernel: GPT:16515071 != 27000831
Jan 14 00:54:34.806835 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 14 00:54:34.806847 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 14 00:54:34.806859 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 14 00:54:34.807006 kernel: device-mapper: uevent: version 1.0.3
Jan 14 00:54:34.807018 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 14 00:54:34.807033 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 14 00:54:34.807045 kernel: raid6: avx2x4 gen() 15284 MB/s
Jan 14 00:54:34.807056 kernel: raid6: avx2x2 gen() 12395 MB/s
Jan 14 00:54:34.807067 kernel: raid6: avx2x1 gen() 11488 MB/s
Jan 14 00:54:34.807078 kernel: raid6: using algorithm avx2x4 gen() 15284 MB/s
Jan 14 00:54:34.807224 kernel: raid6: .... xor() 4285 MB/s, rmw enabled
Jan 14 00:54:34.807464 kernel: raid6: using avx2x2 recovery algorithm
Jan 14 00:54:34.807484 kernel: xor: automatically using best checksumming function avx
Jan 14 00:54:34.807761 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 14 00:54:34.807778 kernel: BTRFS: device fsid 315c4ba2-2b68-4ff5-9a58-ddeab520c9ac devid 1 transid 33 /dev/mapper/usr (253:0) scanned by mount (182)
Jan 14 00:54:34.807920 kernel: BTRFS info (device dm-0): first mount of filesystem 315c4ba2-2b68-4ff5-9a58-ddeab520c9ac
Jan 14 00:54:34.807934 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 14 00:54:34.807947 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 14 00:54:34.807960 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 14 00:54:34.807974 kernel: loop: module loaded
Jan 14 00:54:34.807987 kernel: loop0: detected capacity change from 0 to 100552
Jan 14 00:54:34.808121 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 14 00:54:34.808136 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1126859266 wd_nsec: 1126859555
Jan 14 00:54:34.808151 systemd[1]: Successfully made /usr/ read-only.
Jan 14 00:54:34.808168 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 14 00:54:34.808182 systemd[1]: Detected virtualization kvm.
Jan 14 00:54:34.808196 systemd[1]: Detected architecture x86-64.
Jan 14 00:54:34.808567 systemd[1]: Running in initrd.
Jan 14 00:54:34.808583 systemd[1]: No hostname configured, using default hostname.
Jan 14 00:54:34.808597 systemd[1]: Hostname set to .
Jan 14 00:54:34.808609 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 14 00:54:34.808622 kernel: hrtimer: interrupt took 3258123 ns
Jan 14 00:54:34.808779 systemd[1]: Queued start job for default target initrd.target.
Jan 14 00:54:34.808792 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 00:54:34.808971 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 00:54:34.808988 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 00:54:34.809004 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 00:54:34.809017 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 00:54:34.809029 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 00:54:34.809175 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 00:54:34.809193 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 00:54:34.809206 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 00:54:34.809218 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 14 00:54:34.809229 systemd[1]: Reached target paths.target - Path Units.
Jan 14 00:54:34.809508 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 00:54:34.809527 systemd[1]: Reached target swap.target - Swaps.
Jan 14 00:54:34.809805 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 00:54:34.809820 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 00:54:34.809832 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 00:54:34.809844 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 00:54:34.809856 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 00:54:34.809871 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 14 00:54:34.809886 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 00:54:34.810039 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 00:54:34.810053 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 00:54:34.810065 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 00:54:34.810077 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 14 00:54:34.810089 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 00:54:34.810103 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 00:54:34.810118 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 00:54:34.810496 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 14 00:54:34.810510 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 00:54:34.810522 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 00:54:34.810534 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 00:54:34.810789 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 00:54:34.810804 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 00:54:34.810816 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 00:54:34.810829 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 00:54:34.811019 systemd-journald[317]: Collecting audit messages is enabled.
Jan 14 00:54:34.811184 kernel: audit: type=1130 audit(1768352074.759:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:34.811199 systemd-journald[317]: Journal started
Jan 14 00:54:34.811504 systemd-journald[317]: Runtime Journal (/run/log/journal/21de0f8f8ebc4c6f9e8e9c448d5ad24c) is 6M, max 48.2M, 42.1M free.
Jan 14 00:54:34.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:34.870503 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 00:54:34.925998 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 00:54:34.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:35.030924 kernel: audit: type=1130 audit(1768352074.969:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:35.781095 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 00:54:35.911080 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 00:54:35.929614 kernel: Bridge firewalling registered
Jan 14 00:54:35.931777 systemd-modules-load[320]: Inserted module 'br_netfilter'
Jan 14 00:54:35.952493 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 00:54:37.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:37.897634 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 00:54:38.062636 kernel: audit: type=1130 audit(1768352077.892:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:38.066796 kernel: audit: type=1130 audit(1768352077.983:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:37.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:38.047494 systemd-tmpfiles[333]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 14 00:54:38.058973 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 00:54:38.199822 kernel: audit: type=1130 audit(1768352078.116:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:38.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:38.202046 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 00:54:38.321866 kernel: audit: type=1130 audit(1768352078.226:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:38.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:38.317639 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 00:54:38.336170 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 00:54:38.419853 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 00:54:38.629105 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 00:54:38.728780 kernel: audit: type=1130 audit(1768352078.654:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:38.730171 kernel: audit: type=1334 audit(1768352078.664:9): prog-id=6 op=LOAD
Jan 14 00:54:38.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:38.664000 audit: BPF prog-id=6 op=LOAD
Jan 14 00:54:38.668802 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 00:54:38.794134 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 00:54:38.924176 kernel: audit: type=1130 audit(1768352078.836:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:38.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:38.924924 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 00:54:38.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:39.048128 kernel: audit: type=1130 audit(1768352078.991:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:39.052588 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 00:54:39.500166 dracut-cmdline[359]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6d34ab71a3dc5a0ab37eb2c851228af18a1e24f648223df9a1099dbd7db2cfcf
Jan 14 00:54:39.579059 systemd-resolved[355]: Positive Trust Anchors:
Jan 14 00:54:39.581788 systemd-resolved[355]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 00:54:39.581830 systemd-resolved[355]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 14 00:54:39.581879 systemd-resolved[355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 00:54:40.086973 systemd-resolved[355]: Defaulting to hostname 'linux'.
Jan 14 00:54:40.115206 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 00:54:40.208813 kernel: audit: type=1130 audit(1768352080.149:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:40.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:40.149929 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 00:54:41.880555 kernel: Loading iSCSI transport class v2.0-870.
Jan 14 00:54:42.110014 kernel: iscsi: registered transport (tcp)
Jan 14 00:54:42.328879 kernel: iscsi: registered transport (qla4xxx)
Jan 14 00:54:42.329482 kernel: QLogic iSCSI HBA Driver
Jan 14 00:54:42.995508 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 00:54:43.302184 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 00:54:43.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:43.418830 kernel: audit: type=1130 audit(1768352083.384:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:43.426124 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 00:54:44.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:44.164938 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 14 00:54:44.350486 kernel: audit: type=1130 audit(1768352084.193:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:44.203226 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 14 00:54:44.389851 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 14 00:54:44.755177 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 00:54:44.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:44.817566 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 00:54:45.000589 kernel: audit: type=1130 audit(1768352084.797:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:45.000636 kernel: audit: type=1334 audit(1768352084.803:16): prog-id=7 op=LOAD
Jan 14 00:54:45.000658 kernel: audit: type=1334 audit(1768352084.803:17): prog-id=8 op=LOAD
Jan 14 00:54:44.803000 audit: BPF prog-id=7 op=LOAD
Jan 14 00:54:44.803000 audit: BPF prog-id=8 op=LOAD
Jan 14 00:54:45.247929 systemd-udevd[578]: Using default interface naming scheme 'v257'.
Jan 14 00:54:45.373621 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 00:54:45.511058 kernel: audit: type=1130 audit(1768352085.384:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:45.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:45.401963 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 14 00:54:45.818899 dracut-pre-trigger[624]: rd.md=0: removing MD RAID activation
Jan 14 00:54:46.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:46.423223 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 00:54:46.538543 kernel: audit: type=1130 audit(1768352086.443:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:46.468551 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 00:54:46.726838 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 00:54:46.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:46.852552 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 00:54:46.948922 kernel: audit: type=1130 audit(1768352086.815:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:46.948964 kernel: audit: type=1334 audit(1768352086.847:21): prog-id=9 op=LOAD
Jan 14 00:54:46.847000 audit: BPF prog-id=9 op=LOAD
Jan 14 00:54:47.072601 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 00:54:47.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:47.164834 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 14 00:54:47.201538 kernel: audit: type=1130 audit(1768352087.132:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:47.288977 systemd-networkd[728]: lo: Link UP
Jan 14 00:54:47.303144 systemd-networkd[728]: lo: Gained carrier
Jan 14 00:54:47.333974 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 00:54:47.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:47.387106 systemd[1]: Reached target network.target - Network.
Jan 14 00:54:47.476889 kernel: audit: type=1130 audit(1768352087.386:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:47.626962 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 14 00:54:47.795491 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 14 00:54:47.849037 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 14 00:54:47.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:48.013116 kernel: audit: type=1130 audit(1768352087.969:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:48.077199 kernel: cryptd: max_cpu_qlen set to 1000
Jan 14 00:54:48.250057 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 14 00:54:48.428119 kernel: AES CTR mode by8 optimization enabled
Jan 14 00:54:48.465092 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 14 00:54:48.547498 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 14 00:54:48.594147 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 00:54:48.609504 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 00:54:48.609561 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 00:54:48.654855 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 14 00:54:48.696008 systemd-networkd[728]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 00:54:48.696016 systemd-networkd[728]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 00:54:48.697656 systemd-networkd[728]: eth0: Link UP
Jan 14 00:54:48.703968 systemd-networkd[728]: eth0: Gained carrier
Jan 14 00:54:48.703993 systemd-networkd[728]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 00:54:48.893893 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 14 00:54:49.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:48.957011 systemd-networkd[728]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 14 00:54:49.365622 kernel: audit: type=1131 audit(1768352089.283:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:49.149070 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 00:54:49.200126 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 00:54:49.285849 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 00:54:49.656680 disk-uuid[838]: Primary Header is updated.
Jan 14 00:54:49.656680 disk-uuid[838]: Secondary Entries is updated.
Jan 14 00:54:49.656680 disk-uuid[838]: Secondary Header is updated.
Jan 14 00:54:49.698603 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 00:54:50.425844 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 00:54:52.515620 disk-uuid[840]: Warning: The kernel is still using the old partition table.
Jan 14 00:54:52.515620 disk-uuid[840]: The new table will be used at the next reboot or after you
Jan 14 00:54:52.515620 disk-uuid[840]: run partprobe(8) or kpartx(8)
Jan 14 00:54:52.515620 disk-uuid[840]: The operation has completed successfully.
Jan 14 00:54:52.967833 kernel: audit: type=1130 audit(1768352092.531:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:52.967874 kernel: audit: type=1130 audit(1768352092.617:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:52.967892 kernel: audit: type=1131 audit(1768352092.617:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:52.971048 kernel: audit: type=1130 audit(1768352092.824:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:52.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:52.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:52.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:52.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:50.689563 systemd-networkd[728]: eth0: Gained IPv6LL
Jan 14 00:54:52.533221 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 00:54:52.535519 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 00:54:52.815496 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 00:54:52.935678 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 00:54:53.492657 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (860)
Jan 14 00:54:53.525825 kernel: BTRFS info (device vda6): first mount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282
Jan 14 00:54:53.525907 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 00:54:53.617681 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 00:54:53.617910 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 00:54:53.698881 kernel: BTRFS info (device vda6): last unmount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282
Jan 14 00:54:53.754057 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 00:54:53.831700 kernel: audit: type=1130 audit(1768352093.771:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:53.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:53.843892 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 00:54:54.399222 ignition[879]: Ignition 2.24.0
Jan 14 00:54:54.399705 ignition[879]: Stage: fetch-offline
Jan 14 00:54:54.399898 ignition[879]: no configs at "/usr/lib/ignition/base.d"
Jan 14 00:54:54.399920 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 00:54:54.400045 ignition[879]: parsed url from cmdline: ""
Jan 14 00:54:54.400051 ignition[879]: no config URL provided
Jan 14 00:54:54.400059 ignition[879]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 00:54:54.400075 ignition[879]: no config at "/usr/lib/ignition/user.ign"
Jan 14 00:54:54.400135 ignition[879]: op(1): [started] loading QEMU firmware config module
Jan 14 00:54:54.400144 ignition[879]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 14 00:54:54.549071 ignition[879]: op(1): [finished] loading QEMU firmware config module
Jan 14 00:54:55.195474 ignition[879]: parsing config with SHA512: 8a13303ee275cb9fccd4ab3c09ebe6c642964e3c04d1c8309908840a003231caebbbc9ba440372ef2d3e4a68a66d08017a3fddc151d3716f1eb0927052a49815
Jan 14 00:54:55.272567 unknown[879]: fetched base config from "system"
Jan 14 00:54:55.272584 unknown[879]: fetched user config from "qemu"
Jan 14 00:54:55.276544 ignition[879]: fetch-offline: fetch-offline passed
Jan 14 00:54:55.276633 ignition[879]: Ignition finished successfully
Jan 14 00:54:55.341664 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 00:54:55.476159 kernel: audit: type=1130 audit(1768352095.353:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.359582 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 14 00:54:55.371235 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 14 00:54:55.766981 ignition[889]: Ignition 2.24.0
Jan 14 00:54:55.767114 ignition[889]: Stage: kargs
Jan 14 00:54:55.855137 kernel: audit: type=1130 audit(1768352095.789:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.787927 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 14 00:54:55.767534 ignition[889]: no configs at "/usr/lib/ignition/base.d"
Jan 14 00:54:55.798075 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 14 00:54:55.767549 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 00:54:55.769123 ignition[889]: kargs: kargs passed
Jan 14 00:54:55.769185 ignition[889]: Ignition finished successfully
Jan 14 00:54:56.088924 ignition[897]: Ignition 2.24.0
Jan 14 00:54:56.100592 ignition[897]: Stage: disks
Jan 14 00:54:56.101982 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Jan 14 00:54:56.102000 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 00:54:56.122232 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 14 00:54:56.277451 kernel: audit: type=1130 audit(1768352096.181:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.108975 ignition[897]: disks: disks passed
Jan 14 00:54:56.183027 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 14 00:54:56.109511 ignition[897]: Ignition finished successfully
Jan 14 00:54:56.215480 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 14 00:54:56.294889 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 00:54:56.328604 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 00:54:56.349022 systemd[1]: Reached target basic.target - Basic System.
Jan 14 00:54:56.352554 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 14 00:54:56.660209 systemd-fsck[907]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Jan 14 00:54:56.687619 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 14 00:54:56.804896 kernel: audit: type=1130 audit(1768352096.711:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.720079 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 14 00:54:57.989490 kernel: EXT4-fs (vda9): mounted filesystem 6efdc615-0e3c-4caf-8d0b-1f38e5c59ef0 r/w with ordered data mode. Quota mode: none.
Jan 14 00:54:57.997622 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 00:54:58.015085 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 00:54:58.068558 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 00:54:58.089689 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 00:54:58.111533 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 14 00:54:58.111611 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 00:54:58.111654 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 00:54:58.263668 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 00:54:58.345887 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (916)
Jan 14 00:54:58.352711 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 00:54:58.425558 kernel: BTRFS info (device vda6): first mount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282
Jan 14 00:54:58.425594 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 00:54:58.504988 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 00:54:58.505215 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 00:54:58.517636 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 00:54:59.975958 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 00:55:00.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:00.027573 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 00:55:00.102592 kernel: audit: type=1130 audit(1768352100.016:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:00.078933 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 00:55:00.224100 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 00:55:00.269506 kernel: BTRFS info (device vda6): last unmount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282
Jan 14 00:55:00.383187 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 00:55:00.565038 kernel: audit: type=1130 audit(1768352100.407:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:00.565077 kernel: audit: type=1130 audit(1768352100.490:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:00.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:00.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:00.566010 ignition[1015]: INFO : Ignition 2.24.0
Jan 14 00:55:00.566010 ignition[1015]: INFO : Stage: mount
Jan 14 00:55:00.566010 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 00:55:00.566010 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 00:55:00.566010 ignition[1015]: INFO : mount: mount passed
Jan 14 00:55:00.566010 ignition[1015]: INFO : Ignition finished successfully
Jan 14 00:55:00.449856 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 00:55:00.514983 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 00:55:00.679724 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 00:55:00.868653 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1024)
Jan 14 00:55:00.906023 kernel: BTRFS info (device vda6): first mount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282
Jan 14 00:55:00.906111 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 00:55:01.014073 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 00:55:01.017223 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 00:55:01.020067 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 00:55:01.293863 ignition[1041]: INFO : Ignition 2.24.0
Jan 14 00:55:01.293863 ignition[1041]: INFO : Stage: files
Jan 14 00:55:01.322110 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 00:55:01.322110 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 00:55:01.322110 ignition[1041]: DEBUG : files: compiled without relabeling support, skipping
Jan 14 00:55:01.322110 ignition[1041]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 14 00:55:01.322110 ignition[1041]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 14 00:55:01.434214 ignition[1041]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 14 00:55:01.434214 ignition[1041]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 14 00:55:01.434214 ignition[1041]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 14 00:55:01.434214 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 14 00:55:01.434214 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 14 00:55:01.364229 unknown[1041]: wrote ssh authorized keys file for user: core
Jan 14 00:55:01.831142 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 14 00:55:02.147871 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 14 00:55:02.182735 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 14 00:55:02.182735 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 14 00:55:02.182735 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 00:55:02.182735 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 00:55:02.182735 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 00:55:02.182735 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 00:55:02.182735 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 00:55:02.182735 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 00:55:02.182735 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 00:55:02.182735 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 00:55:02.182735 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 14 00:55:02.558928 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 14 00:55:02.558928 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 14 00:55:02.558928 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 14 00:55:03.568561 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 14 00:55:07.307622 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 3846047935 wd_nsec: 3846047756
Jan 14 00:55:10.185565 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 14 00:55:10.185565 ignition[1041]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 14 00:55:10.321552 ignition[1041]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 00:55:10.382550 ignition[1041]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 00:55:10.382550 ignition[1041]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 14 00:55:10.382550 ignition[1041]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 14 00:55:10.382550 ignition[1041]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 14 00:55:10.382550 ignition[1041]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 14 00:55:10.382550 ignition[1041]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 14 00:55:10.382550 ignition[1041]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 14 00:55:10.774620 ignition[1041]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 14 00:55:10.823520 ignition[1041]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 14 00:55:10.823520 ignition[1041]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 14 00:55:10.823520 ignition[1041]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 14 00:55:10.823520 ignition[1041]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 14 00:55:11.059196 kernel: audit: type=1130 audit(1768352110.916:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:10.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:11.059727 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 00:55:11.059727 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 00:55:11.059727 ignition[1041]: INFO : files: files passed
Jan 14 00:55:11.059727 ignition[1041]: INFO : Ignition finished successfully
Jan 14 00:55:10.866639 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 14 00:55:10.952766 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 14 00:55:10.996903 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 14 00:55:11.283183 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 14 00:55:11.286698 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 14 00:55:11.510193 kernel: audit: type=1130 audit(1768352111.327:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:11.510232 kernel: audit: type=1131 audit(1768352111.327:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:11.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:11.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:11.510733 initrd-setup-root-after-ignition[1071]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 14 00:55:11.556921 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 00:55:11.608729 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 00:55:11.608729 initrd-setup-root-after-ignition[1073]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 00:55:11.690121 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 00:55:11.787703 kernel: audit: type=1130 audit(1768352111.715:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:11.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:11.786698 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 14 00:55:11.835998 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 14 00:55:12.580755 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 14 00:55:12.869772 kernel: audit: type=1130 audit(1768352112.598:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:12.882094 kernel: audit: type=1131 audit(1768352112.598:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:12.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:12.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:12.583193 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 14 00:55:12.667225 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 14 00:55:12.783986 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 14 00:55:12.927552 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 14 00:55:12.936600 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 14 00:55:13.680027 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 00:55:13.885591 kernel: audit: type=1130 audit(1768352113.712:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:13.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:13.886073 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 14 00:55:14.555600 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 00:55:14.560773 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 14 00:55:14.586744 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 00:55:14.803609 systemd[1]: Stopped target timers.target - Timer Units.
Jan 14 00:55:15.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:15.203914 kernel: audit: type=1131 audit(1768352115.006:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:14.872178 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 14 00:55:14.894017 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 00:55:15.176165 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 14 00:55:15.281534 systemd[1]: Stopped target basic.target - Basic System.
Jan 14 00:55:15.331098 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 14 00:55:15.499958 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 00:55:15.621086 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 14 00:55:15.889729 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 14 00:55:16.098645 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 14 00:55:16.484235 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 00:55:16.784676 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 14 00:55:16.832680 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 14 00:55:17.209657 systemd[1]: Stopped target swap.target - Swaps.
Jan 14 00:55:17.394565 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 14 00:55:17.470432 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 00:55:17.673942 kernel: audit: type=1131 audit(1768352117.508:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:17.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:17.680701 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 14 00:55:17.705607 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 00:55:17.816969 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 14 00:55:18.229730 kernel: audit: type=1131 audit(1768352118.169:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:18.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:17.825720 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 00:55:18.081498 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 14 00:55:18.099972 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 14 00:55:18.233534 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 14 00:55:18.262784 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 00:55:18.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:18.506046 systemd[1]: Stopped target paths.target - Path Units.
Jan 14 00:55:18.624765 kernel: audit: type=1131 audit(1768352118.503:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:18.594685 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 14 00:55:18.611488 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 00:55:18.666193 systemd[1]: Stopped target slices.target - Slice Units.
Jan 14 00:55:18.771061 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 14 00:55:18.876656 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 14 00:55:18.880109 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 00:55:18.895180 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 14 00:55:19.228017 kernel: audit: type=1131 audit(1768352119.092:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:19.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:18.909484 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 00:55:19.403987 kernel: audit: type=1131 audit(1768352119.234:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:19.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:18.919588 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Jan 14 00:55:18.919702 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 00:55:19.784156 kernel: audit: type=1131 audit(1768352119.607:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:19.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:19.027718 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 14 00:55:19.031593 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 00:55:20.145626 kernel: audit: type=1131 audit(1768352119.933:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:19.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:19.096109 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 14 00:55:19.096776 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 14 00:55:20.331524 kernel: audit: type=1131 audit(1768352120.179:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:20.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:19.301603 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 14 00:55:20.491622 kernel: audit: type=1131 audit(1768352120.381:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:20.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:19.490144 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 14 00:55:19.498119 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 00:55:19.703685 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 14 00:55:19.879798 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 14 00:55:19.889213 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 00:55:20.918508 kernel: audit: type=1130 audit(1768352120.788:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:20.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:20.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:19.941110 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 14 00:55:19.942031 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 00:55:20.182779 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 14 00:55:20.183192 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 00:55:20.619163 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 14 00:55:20.682037 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 14 00:55:21.125130 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 14 00:55:22.369510 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 14 00:55:22.378511 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 14 00:55:22.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:22.618013 ignition[1098]: INFO : Ignition 2.24.0
Jan 14 00:55:22.618013 ignition[1098]: INFO : Stage: umount
Jan 14 00:55:22.674081 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 00:55:22.674081 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 00:55:22.674081 ignition[1098]: INFO : umount: umount passed
Jan 14 00:55:22.674081 ignition[1098]: INFO : Ignition finished successfully
Jan 14 00:55:22.892103 kernel: kauditd_printk_skb: 2 callbacks suppressed
Jan 14 00:55:22.892144 kernel: audit: type=1131 audit(1768352122.685:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:22.892163 kernel: audit: type=1131 audit(1768352122.829:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:22.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:22.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:22.649673 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 14 00:55:23.137105 kernel: audit: type=1131 audit(1768352122.942:60): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:23.137156 kernel: audit: type=1131 audit(1768352123.046:61): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:22.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:23.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:22.651977 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 14 00:55:22.691110 systemd[1]: Stopped target network.target - Network.
Jan 14 00:55:23.284784 kernel: audit: type=1131 audit(1768352123.183:62): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:23.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:22.780783 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 14 00:55:23.448700 kernel: audit: type=1131 audit(1768352123.303:63): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:23.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:22.786040 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 14 00:55:22.833109 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 14 00:55:22.833509 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 14 00:55:22.944017 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 14 00:55:22.944463 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 14 00:55:23.688625 kernel: audit: type=1131 audit(1768352123.607:64): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:23.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:23.047971 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 14 00:55:23.048693 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 14 00:55:23.184790 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 14 00:55:23.186647 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 14 00:55:23.307805 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 14 00:55:23.689000 audit: BPF prog-id=9 op=UNLOAD
Jan 14 00:55:23.433524 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 14 00:55:23.523618 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 14 00:55:23.527763 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 14 00:55:23.704040 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 14 00:55:23.706754 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 14 00:55:23.828712 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 14 00:55:24.018077 kernel: audit: type=1334 audit(1768352123.689:65): prog-id=9 op=UNLOAD
Jan 14 00:55:24.018159 kernel: audit: type=1131 audit(1768352123.789:66): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:23.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:24.011192 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 14 00:55:24.014998 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 00:55:24.099742 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 14 00:55:24.121117 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 14 00:55:24.121542 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 00:55:24.345533 kernel: audit: type=1334 audit(1768352124.168:67): prog-id=6 op=UNLOAD
Jan 14 00:55:24.168000 audit: BPF prog-id=6 op=UNLOAD
Jan 14 00:55:24.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:24.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:24.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:24.189686 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 14 00:55:24.209052 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 14 00:55:24.287533 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 14 00:55:24.288023 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 14 00:55:24.314711 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 00:55:24.475739 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 14 00:55:24.476120 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 00:55:24.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:24.622762 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 14 00:55:24.623130 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 14 00:55:24.693533 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 14 00:55:24.693626 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 00:55:24.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:24.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:24.715794 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 14 00:55:24.717194 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 00:55:24.780786 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 14 00:55:24.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:24.781032 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 14 00:55:24.831626 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 00:55:24.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:25.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:24.831740 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 00:55:25.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:24.929788 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 14 00:55:24.976161 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 14 00:55:24.976522 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 00:55:24.995630 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 14 00:55:25.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:25.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:24.995736 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 00:55:25.034706 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 00:55:25.035163 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 00:55:25.158079 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 14 00:55:25.158583 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 14 00:55:25.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:25.396632 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 14 00:55:25.398058 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 14 00:55:25.478594 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 14 00:55:25.593527 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 14 00:55:25.842798 systemd[1]: Switching root.
Jan 14 00:55:26.092632 systemd-journald[317]: Journal stopped
Jan 14 00:55:34.569455 systemd-journald[317]: Received SIGTERM from PID 1 (systemd).
Jan 14 00:55:34.569651 kernel: SELinux: policy capability network_peer_controls=1
Jan 14 00:55:34.569673 kernel: SELinux: policy capability open_perms=1
Jan 14 00:55:34.569694 kernel: SELinux: policy capability extended_socket_class=1
Jan 14 00:55:34.569722 kernel: SELinux: policy capability always_check_network=0
Jan 14 00:55:34.569801 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 14 00:55:34.570991 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 14 00:55:34.571017 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 14 00:55:34.571034 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 14 00:55:34.571050 kernel: SELinux: policy capability userspace_initial_context=0
Jan 14 00:55:34.571068 systemd[1]: Successfully loaded SELinux policy in 482.104ms.
Jan 14 00:55:34.571148 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 79.658ms.
Jan 14 00:55:34.571176 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 14 00:55:34.571196 systemd[1]: Detected virtualization kvm.
Jan 14 00:55:34.571394 systemd[1]: Detected architecture x86-64.
Jan 14 00:55:34.571421 systemd[1]: Detected first boot.
Jan 14 00:55:34.571513 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 14 00:55:34.571536 kernel: kauditd_printk_skb: 15 callbacks suppressed
Jan 14 00:55:34.571557 kernel: audit: type=1334 audit(1768352128.100:83): prog-id=10 op=LOAD
Jan 14 00:55:34.571574 kernel: audit: type=1334 audit(1768352128.101:84): prog-id=10 op=UNLOAD
Jan 14 00:55:34.571590 kernel: audit: type=1334 audit(1768352128.101:85): prog-id=11 op=LOAD
Jan 14 00:55:34.571655 kernel: audit: type=1334 audit(1768352128.101:86): prog-id=11 op=UNLOAD
Jan 14 00:55:34.571684 zram_generator::config[1143]: No configuration found.
Jan 14 00:55:34.571746 kernel: Guest personality initialized and is inactive
Jan 14 00:55:34.571769 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 14 00:55:34.571787 kernel: Initialized host personality
Jan 14 00:55:34.571804 kernel: NET: Registered PF_VSOCK protocol family
Jan 14 00:55:34.571928 systemd[1]: Populated /etc with preset unit settings.
Jan 14 00:55:34.571950 kernel: audit: type=1334 audit(1768352131.890:87): prog-id=12 op=LOAD
Jan 14 00:55:34.571967 kernel: audit: type=1334 audit(1768352131.892:88): prog-id=3 op=UNLOAD
Jan 14 00:55:34.571983 kernel: audit: type=1334 audit(1768352131.892:89): prog-id=13 op=LOAD
Jan 14 00:55:34.572000 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 14 00:55:34.572019 kernel: audit: type=1334 audit(1768352131.893:90): prog-id=14 op=LOAD
Jan 14 00:55:34.572036 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 14 00:55:34.574084 kernel: audit: type=1334 audit(1768352131.894:91): prog-id=4 op=UNLOAD
Jan 14 00:55:34.574111 kernel: audit: type=1334 audit(1768352131.894:92): prog-id=5 op=UNLOAD
Jan 14 00:55:34.574133 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 14 00:55:34.574226 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 14 00:55:34.574348 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 14 00:55:34.574378 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 14 00:55:34.574400 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 14 00:55:34.574487 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 14 00:55:34.574560 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 14 00:55:34.574580 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 14 00:55:34.574600 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 14 00:55:34.574621 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 00:55:34.574643 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 00:55:34.574780 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 14 00:55:34.574808 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 14 00:55:34.574827 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 14 00:55:34.574847 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 00:55:34.574867 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 14 00:55:34.574951 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 00:55:34.574974 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 00:55:34.575060 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 14 00:55:34.575086 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 14 00:55:34.575111 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 14 00:55:34.575132 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 14 00:55:34.575155 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 00:55:34.575177 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 00:55:34.575203 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Jan 14 00:55:34.575351 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 00:55:34.575379 systemd[1]: Reached target swap.target - Swaps.
Jan 14 00:55:34.575401 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 14 00:55:34.575420 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 14 00:55:34.575437 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 14 00:55:34.575457 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 00:55:34.575479 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Jan 14 00:55:34.575560 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 00:55:34.575588 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Jan 14 00:55:34.575611 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Jan 14 00:55:34.575634 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 00:55:34.575656 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 00:55:34.575676 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 14 00:55:34.575693 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 14 00:55:34.575710 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 14 00:55:34.575786 systemd[1]: Mounting media.mount - External Media Directory...
Jan 14 00:55:34.575805 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 00:55:34.575821 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 14 00:55:34.575838 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 14 00:55:34.575855 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 14 00:55:34.575937 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 14 00:55:34.576006 systemd[1]: Reached target machines.target - Containers.
Jan 14 00:55:34.576024 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 14 00:55:34.576041 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 00:55:34.576058 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 00:55:34.576075 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 14 00:55:34.576091 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 00:55:34.576108 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 00:55:34.576319 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 00:55:34.576388 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 14 00:55:34.576406 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 00:55:34.576424 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 14 00:55:34.576489 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 14 00:55:34.576508 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 14 00:55:34.576525 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 14 00:55:34.576541 kernel: kauditd_printk_skb: 5 callbacks suppressed
Jan 14 00:55:34.576596 kernel: audit: type=1131 audit(1768352133.677:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:34.576613 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 14 00:55:34.576737 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 14 00:55:34.576768 kernel: audit: type=1131 audit(1768352133.717:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:34.576791 kernel: audit: type=1334 audit(1768352133.735:100): prog-id=14 op=UNLOAD
Jan 14 00:55:34.579010 kernel: audit: type=1334 audit(1768352133.735:101): prog-id=13 op=UNLOAD
Jan 14 00:55:34.579042 kernel: audit: type=1334 audit(1768352133.755:102): prog-id=15 op=LOAD
Jan 14 00:55:34.579064 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 00:55:34.579086 kernel: audit: type=1334 audit(1768352133.768:103): prog-id=16 op=LOAD
Jan 14 00:55:34.579106 kernel: audit: type=1334 audit(1768352133.775:104): prog-id=17 op=LOAD
Jan 14 00:55:34.579124 kernel: ACPI: bus type drm_connector registered
Jan 14 00:55:34.579144 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 00:55:34.579164 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 00:55:34.579335 kernel: fuse: init (API version 7.41)
Jan 14 00:55:34.579357 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 14 00:55:34.579421 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 14 00:55:34.579442 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 00:55:34.579460 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 00:55:34.579478 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 14 00:55:34.579496 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 14 00:55:34.579579 systemd[1]: Mounted media.mount - External Media Directory.
Jan 14 00:55:34.579599 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 14 00:55:34.579671 systemd-journald[1225]: Collecting audit messages is enabled.
Jan 14 00:55:34.579868 kernel: audit: type=1305 audit(1768352134.561:105): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 14 00:55:34.579954 kernel: audit: type=1300 audit(1768352134.561:105): arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffc2fa8cbc0 a2=4000 a3=0 items=0 ppid=1 pid=1225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:55:34.579980 systemd-journald[1225]: Journal started
Jan 14 00:55:34.580025 systemd-journald[1225]: Runtime Journal (/run/log/journal/21de0f8f8ebc4c6f9e8e9c448d5ad24c) is 6M, max 48.2M, 42.1M free.
Jan 14 00:55:32.770000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jan 14 00:55:33.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:33.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:33.735000 audit: BPF prog-id=14 op=UNLOAD
Jan 14 00:55:33.735000 audit: BPF prog-id=13 op=UNLOAD
Jan 14 00:55:33.755000 audit: BPF prog-id=15 op=LOAD
Jan 14 00:55:33.768000 audit: BPF prog-id=16 op=LOAD
Jan 14 00:55:33.775000 audit: BPF prog-id=17 op=LOAD
Jan 14 00:55:34.561000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 14 00:55:34.561000 audit[1225]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffc2fa8cbc0 a2=4000 a3=0 items=0 ppid=1 pid=1225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:55:31.823015 systemd[1]: Queued start job for default target multi-user.target.
Jan 14 00:55:31.902192 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 14 00:55:31.904164 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 14 00:55:31.907454 systemd[1]: systemd-journald.service: Consumed 8.125s CPU time.
Jan 14 00:55:34.561000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jan 14 00:55:34.630390 kernel: audit: type=1327 audit(1768352134.561:105): proctitle="/usr/lib/systemd/systemd-journald"
Jan 14 00:55:34.668529 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 00:55:34.678176 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 14 00:55:34.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:34.686454 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 14 00:55:34.703481 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 14 00:55:34.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:34.718060 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 00:55:34.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:34.736825 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 14 00:55:34.738025 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 14 00:55:34.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:34.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:34.816007 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 00:55:34.816454 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 00:55:34.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:34.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:34.835036 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 00:55:34.836153 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 00:55:34.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:34.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:34.881492 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 00:55:34.882936 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 00:55:34.907772 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 14 00:55:34.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:34.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:34.908840 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 14 00:55:34.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:34.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:34.920218 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 00:55:34.920736 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 00:55:34.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:34.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:34.931840 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 00:55:34.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:34.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:34.957058 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 00:55:34.974960 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 14 00:55:34.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:34.999130 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 14 00:55:35.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:55:35.109235 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 00:55:35.127858 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 14 00:55:35.187452 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 14 00:55:35.212592 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 14 00:55:35.222854 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 14 00:55:35.225194 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 00:55:35.237017 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 14 00:55:35.270745 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 00:55:35.272342 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 00:55:35.280403 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 00:55:35.303813 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 14 00:55:35.315762 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 00:55:35.332783 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 14 00:55:35.352159 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 00:55:35.367174 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 00:55:35.386628 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 00:55:35.403753 systemd-journald[1225]: Time spent on flushing to /var/log/journal/21de0f8f8ebc4c6f9e8e9c448d5ad24c is 66.677ms for 1161 entries. Jan 14 00:55:35.403753 systemd-journald[1225]: System Journal (/var/log/journal/21de0f8f8ebc4c6f9e8e9c448d5ad24c) is 8M, max 163.5M, 155.5M free. Jan 14 00:55:35.594436 systemd-journald[1225]: Received client request to flush runtime journal. 
Jan 14 00:55:35.594513 kernel: loop1: detected capacity change from 0 to 111560 Jan 14 00:55:35.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:35.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:35.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:35.403826 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 00:55:35.466670 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 00:55:35.474950 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 00:55:35.483163 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 00:55:35.511124 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 00:55:35.525484 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 00:55:35.558577 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 14 00:55:35.572838 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 00:55:35.602424 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 00:55:35.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:55:35.689347 kernel: loop2: detected capacity change from 0 to 229808 Jan 14 00:55:35.710711 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 14 00:55:35.751370 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 00:55:35.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:35.771947 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 14 00:55:35.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:35.799000 audit: BPF prog-id=18 op=LOAD Jan 14 00:55:35.799000 audit: BPF prog-id=19 op=LOAD Jan 14 00:55:35.799000 audit: BPF prog-id=20 op=LOAD Jan 14 00:55:35.806562 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 14 00:55:35.814000 audit: BPF prog-id=21 op=LOAD Jan 14 00:55:35.825609 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 00:55:35.839524 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 00:55:35.864000 audit: BPF prog-id=22 op=LOAD Jan 14 00:55:35.876088 kernel: loop3: detected capacity change from 0 to 50784 Jan 14 00:55:35.864000 audit: BPF prog-id=23 op=LOAD Jan 14 00:55:35.864000 audit: BPF prog-id=24 op=LOAD Jan 14 00:55:35.875602 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... 
Jan 14 00:55:35.889000 audit: BPF prog-id=25 op=LOAD Jan 14 00:55:35.889000 audit: BPF prog-id=26 op=LOAD Jan 14 00:55:35.889000 audit: BPF prog-id=27 op=LOAD Jan 14 00:55:35.894117 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 14 00:55:35.972196 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Jan 14 00:55:35.973731 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Jan 14 00:55:35.993527 kernel: loop4: detected capacity change from 0 to 111560 Jan 14 00:55:36.016139 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 00:55:36.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:36.066405 kernel: loop5: detected capacity change from 0 to 229808 Jan 14 00:55:36.095645 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 00:55:36.104192 systemd-nsresourced[1286]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 14 00:55:36.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:36.119355 kernel: loop6: detected capacity change from 0 to 50784 Jan 14 00:55:36.132721 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 14 00:55:36.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:36.172427 (sd-merge)[1291]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. 
Jan 14 00:55:36.188094 (sd-merge)[1291]: Merged extensions into '/usr'. Jan 14 00:55:36.206148 systemd[1]: Reload requested from client PID 1264 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 00:55:36.206171 systemd[1]: Reloading... Jan 14 00:55:36.379376 zram_generator::config[1339]: No configuration found. Jan 14 00:55:36.381952 systemd-resolved[1284]: Positive Trust Anchors: Jan 14 00:55:36.382026 systemd-resolved[1284]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 00:55:36.382036 systemd-resolved[1284]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 00:55:36.382081 systemd-resolved[1284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 00:55:36.385632 systemd-oomd[1283]: No swap; memory pressure usage will be degraded Jan 14 00:55:36.399163 systemd-resolved[1284]: Defaulting to hostname 'linux'. Jan 14 00:55:36.782053 systemd[1]: Reloading finished in 574 ms. Jan 14 00:55:36.856118 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 14 00:55:36.870509 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 00:55:36.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:55:36.882526 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 00:55:36.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:36.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:36.903146 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 00:55:36.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:36.935737 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 00:55:36.976046 systemd[1]: Starting ensure-sysext.service... Jan 14 00:55:36.991470 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 00:55:37.001000 audit: BPF prog-id=8 op=UNLOAD Jan 14 00:55:37.001000 audit: BPF prog-id=7 op=UNLOAD Jan 14 00:55:37.003000 audit: BPF prog-id=28 op=LOAD Jan 14 00:55:37.003000 audit: BPF prog-id=29 op=LOAD Jan 14 00:55:37.029554 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 14 00:55:37.057000 audit: BPF prog-id=30 op=LOAD Jan 14 00:55:37.057000 audit: BPF prog-id=21 op=UNLOAD Jan 14 00:55:37.059000 audit: BPF prog-id=31 op=LOAD Jan 14 00:55:37.059000 audit: BPF prog-id=18 op=UNLOAD Jan 14 00:55:37.059000 audit: BPF prog-id=32 op=LOAD Jan 14 00:55:37.059000 audit: BPF prog-id=33 op=LOAD Jan 14 00:55:37.059000 audit: BPF prog-id=19 op=UNLOAD Jan 14 00:55:37.059000 audit: BPF prog-id=20 op=UNLOAD Jan 14 00:55:37.060000 audit: BPF prog-id=34 op=LOAD Jan 14 00:55:37.060000 audit: BPF prog-id=25 op=UNLOAD Jan 14 00:55:37.062000 audit: BPF prog-id=35 op=LOAD Jan 14 00:55:37.062000 audit: BPF prog-id=36 op=LOAD Jan 14 00:55:37.062000 audit: BPF prog-id=26 op=UNLOAD Jan 14 00:55:37.062000 audit: BPF prog-id=27 op=UNLOAD Jan 14 00:55:37.062000 audit: BPF prog-id=37 op=LOAD Jan 14 00:55:37.062000 audit: BPF prog-id=15 op=UNLOAD Jan 14 00:55:37.062000 audit: BPF prog-id=38 op=LOAD Jan 14 00:55:37.062000 audit: BPF prog-id=39 op=LOAD Jan 14 00:55:37.062000 audit: BPF prog-id=16 op=UNLOAD Jan 14 00:55:37.062000 audit: BPF prog-id=17 op=UNLOAD Jan 14 00:55:37.067000 audit: BPF prog-id=40 op=LOAD Jan 14 00:55:37.067000 audit: BPF prog-id=22 op=UNLOAD Jan 14 00:55:37.067000 audit: BPF prog-id=41 op=LOAD Jan 14 00:55:37.067000 audit: BPF prog-id=42 op=LOAD Jan 14 00:55:37.067000 audit: BPF prog-id=23 op=UNLOAD Jan 14 00:55:37.067000 audit: BPF prog-id=24 op=UNLOAD Jan 14 00:55:37.078710 systemd[1]: Reload requested from client PID 1373 ('systemctl') (unit ensure-sysext.service)... Jan 14 00:55:37.078789 systemd[1]: Reloading... Jan 14 00:55:37.095228 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 14 00:55:37.095412 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 14 00:55:37.095825 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jan 14 00:55:37.099414 systemd-tmpfiles[1374]: ACLs are not supported, ignoring. Jan 14 00:55:37.099529 systemd-tmpfiles[1374]: ACLs are not supported, ignoring. Jan 14 00:55:37.118686 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 00:55:37.118987 systemd-tmpfiles[1374]: Skipping /boot Jan 14 00:55:37.154520 systemd-udevd[1375]: Using default interface naming scheme 'v257'. Jan 14 00:55:37.173757 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 00:55:37.173836 systemd-tmpfiles[1374]: Skipping /boot Jan 14 00:55:37.256375 zram_generator::config[1410]: No configuration found. Jan 14 00:55:37.516345 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 00:55:37.532397 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 14 00:55:37.556122 kernel: ACPI: button: Power Button [PWRF] Jan 14 00:55:37.623528 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 14 00:55:37.624056 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 14 00:55:37.718661 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 14 00:55:37.719096 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 14 00:55:37.732628 systemd[1]: Reloading finished in 653 ms. Jan 14 00:55:37.749948 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 00:55:37.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:55:37.764000 audit: BPF prog-id=43 op=LOAD Jan 14 00:55:37.764000 audit: BPF prog-id=30 op=UNLOAD Jan 14 00:55:37.765000 audit: BPF prog-id=44 op=LOAD Jan 14 00:55:37.765000 audit: BPF prog-id=40 op=UNLOAD Jan 14 00:55:37.765000 audit: BPF prog-id=45 op=LOAD Jan 14 00:55:37.765000 audit: BPF prog-id=46 op=LOAD Jan 14 00:55:37.765000 audit: BPF prog-id=41 op=UNLOAD Jan 14 00:55:37.765000 audit: BPF prog-id=42 op=UNLOAD Jan 14 00:55:37.767000 audit: BPF prog-id=47 op=LOAD Jan 14 00:55:37.767000 audit: BPF prog-id=34 op=UNLOAD Jan 14 00:55:37.767000 audit: BPF prog-id=48 op=LOAD Jan 14 00:55:37.767000 audit: BPF prog-id=49 op=LOAD Jan 14 00:55:37.767000 audit: BPF prog-id=35 op=UNLOAD Jan 14 00:55:37.767000 audit: BPF prog-id=36 op=UNLOAD Jan 14 00:55:37.769000 audit: BPF prog-id=50 op=LOAD Jan 14 00:55:37.769000 audit: BPF prog-id=37 op=UNLOAD Jan 14 00:55:37.769000 audit: BPF prog-id=51 op=LOAD Jan 14 00:55:37.770000 audit: BPF prog-id=52 op=LOAD Jan 14 00:55:37.770000 audit: BPF prog-id=38 op=UNLOAD Jan 14 00:55:37.770000 audit: BPF prog-id=39 op=UNLOAD Jan 14 00:55:37.778000 audit: BPF prog-id=53 op=LOAD Jan 14 00:55:37.778000 audit: BPF prog-id=31 op=UNLOAD Jan 14 00:55:37.778000 audit: BPF prog-id=54 op=LOAD Jan 14 00:55:37.778000 audit: BPF prog-id=55 op=LOAD Jan 14 00:55:37.778000 audit: BPF prog-id=32 op=UNLOAD Jan 14 00:55:37.778000 audit: BPF prog-id=33 op=UNLOAD Jan 14 00:55:37.779000 audit: BPF prog-id=56 op=LOAD Jan 14 00:55:37.779000 audit: BPF prog-id=57 op=LOAD Jan 14 00:55:37.779000 audit: BPF prog-id=28 op=UNLOAD Jan 14 00:55:37.779000 audit: BPF prog-id=29 op=UNLOAD Jan 14 00:55:37.788592 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 00:55:37.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:55:37.858758 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 00:55:37.863736 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 00:55:37.878489 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 00:55:37.899450 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 00:55:37.921527 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 00:55:37.945437 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 00:55:37.975092 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 00:55:37.987713 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 00:55:37.988163 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 00:55:37.993740 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 00:55:38.021958 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 00:55:38.041562 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 00:55:38.068682 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 00:55:38.095000 audit: BPF prog-id=58 op=LOAD Jan 14 00:55:38.108157 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 00:55:38.137794 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jan 14 00:55:38.165523 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 00:55:38.180147 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 00:55:38.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:38.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:38.180718 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 00:55:38.194778 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 00:55:38.195689 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 00:55:38.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:38.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:38.217959 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 00:55:38.265102 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 00:55:38.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:55:38.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:55:38.281000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 00:55:38.281000 audit[1517]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd81855e30 a2=420 a3=0 items=0 ppid=1487 pid=1517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:55:38.281000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 00:55:38.282204 augenrules[1517]: No rules Jan 14 00:55:38.285706 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 00:55:38.301411 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 00:55:38.302721 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 00:55:38.313755 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 00:55:38.385121 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 00:55:38.413672 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 14 00:55:38.423859 systemd[1]: Finished ensure-sysext.service. Jan 14 00:55:38.461521 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 00:55:38.467725 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 00:55:38.479078 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 14 00:55:38.485696 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 00:55:38.500524 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 00:55:38.510528 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 00:55:38.521437 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 00:55:38.530518 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 00:55:38.530746 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 00:55:38.530800 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 00:55:38.582119 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 14 00:55:38.596882 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 00:55:38.608700 augenrules[1533]: /sbin/augenrules: No change Jan 14 00:55:38.610839 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 00:55:38.611009 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 00:55:38.613164 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 00:55:38.614427 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 00:55:38.623845 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 14 00:55:38.624490 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 00:55:38.637463 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 00:55:38.650000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 00:55:38.650000 audit[1557]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd2e867090 a2=420 a3=0 items=0 ppid=1533 pid=1557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:55:38.650000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 00:55:38.652000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 00:55:38.652000 audit[1557]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd2e869520 a2=420 a3=0 items=0 ppid=1533 pid=1557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:55:38.652000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 00:55:38.665447 augenrules[1557]: No rules Jan 14 00:55:38.654423 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 00:55:38.666492 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 00:55:38.667081 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 00:55:38.676512 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 00:55:38.677011 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 14 00:55:38.678124 systemd-networkd[1510]: lo: Link UP Jan 14 00:55:38.678133 systemd-networkd[1510]: lo: Gained carrier Jan 14 00:55:38.684834 systemd-networkd[1510]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:55:38.684945 systemd-networkd[1510]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 00:55:38.686559 systemd-networkd[1510]: eth0: Link UP Jan 14 00:55:38.689113 systemd-networkd[1510]: eth0: Gained carrier Jan 14 00:55:38.689182 systemd-networkd[1510]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:55:38.690770 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 00:55:38.721467 systemd[1]: Reached target network.target - Network. Jan 14 00:55:38.740547 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 00:55:38.760496 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 14 00:55:38.761654 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 00:55:38.761733 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 00:55:38.767707 systemd-networkd[1510]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 14 00:55:38.882439 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 00:55:39.046404 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 14 00:55:39.727788 systemd-timesyncd[1548]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Jan 14 00:55:39.727863 systemd-timesyncd[1548]: Initial clock synchronization to Wed 2026-01-14 00:55:39.727611 UTC. Jan 14 00:55:39.728088 systemd-resolved[1284]: Clock change detected. Flushing caches. Jan 14 00:55:39.860593 kernel: kvm_amd: TSC scaling supported Jan 14 00:55:39.860724 kernel: kvm_amd: Nested Virtualization enabled Jan 14 00:55:39.860754 kernel: kvm_amd: Nested Paging enabled Jan 14 00:55:39.860776 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 14 00:55:39.860801 kernel: kvm_amd: PMU virtualization is disabled Jan 14 00:55:40.211763 kernel: EDAC MC: Ver: 3.0.0 Jan 14 00:55:40.223897 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:55:40.237941 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 00:55:40.460245 ldconfig[1497]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 00:55:40.473304 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 00:55:40.484834 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 00:55:40.544587 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 00:55:40.560645 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 00:55:40.570989 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 00:55:40.581872 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 00:55:40.589729 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 14 00:55:40.600043 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 00:55:40.607078 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 00:55:40.619206 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. 
Jan 14 00:55:40.626400 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 00:55:40.633380 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 00:55:40.641242 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 00:55:40.641324 systemd[1]: Reached target paths.target - Path Units. Jan 14 00:55:40.647378 systemd[1]: Reached target timers.target - Timer Units. Jan 14 00:55:40.654616 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 00:55:40.664096 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 00:55:40.674610 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 00:55:40.682685 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 00:55:40.690638 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 00:55:40.718313 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 00:55:40.725317 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 14 00:55:40.732865 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 00:55:40.739953 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 00:55:40.745836 systemd[1]: Reached target basic.target - Basic System. Jan 14 00:55:40.751096 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 00:55:40.751262 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 00:55:40.753665 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 00:55:40.763432 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jan 14 00:55:40.789634 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 14 00:55:40.798306 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 14 00:55:40.806590 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 14 00:55:40.812581 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 14 00:55:40.823736 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 14 00:55:40.829262 jq[1586]: false
Jan 14 00:55:40.830745 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 14 00:55:40.841767 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 14 00:55:40.842985 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Refreshing passwd entry cache
Jan 14 00:55:40.843419 oslogin_cache_refresh[1588]: Refreshing passwd entry cache
Jan 14 00:55:40.847041 extend-filesystems[1587]: Found /dev/vda6
Jan 14 00:55:40.854355 extend-filesystems[1587]: Found /dev/vda9
Jan 14 00:55:40.858932 extend-filesystems[1587]: Checking size of /dev/vda9
Jan 14 00:55:40.871877 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 14 00:55:40.879593 extend-filesystems[1587]: Resized partition /dev/vda9
Jan 14 00:55:40.899734 extend-filesystems[1601]: resize2fs 1.47.3 (8-Jul-2025)
Jan 14 00:55:40.933776 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Jan 14 00:55:40.888803 oslogin_cache_refresh[1588]: Failure getting users, quitting
Jan 14 00:55:40.894799 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 14 00:55:40.934190 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Failure getting users, quitting
Jan 14 00:55:40.934190 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 14 00:55:40.934190 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Refreshing group entry cache
Jan 14 00:55:40.888835 oslogin_cache_refresh[1588]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 14 00:55:40.888888 oslogin_cache_refresh[1588]: Refreshing group entry cache
Jan 14 00:55:40.944067 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 14 00:55:42.585855 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Jan 14 00:55:40.953778 oslogin_cache_refresh[1588]: Failure getting groups, quitting
Jan 14 00:55:42.623206 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Failure getting groups, quitting
Jan 14 00:55:42.623206 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 14 00:55:40.957894 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 14 00:55:40.953804 oslogin_cache_refresh[1588]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 14 00:55:42.639767 extend-filesystems[1601]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 14 00:55:42.639767 extend-filesystems[1601]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 14 00:55:42.639767 extend-filesystems[1601]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Jan 14 00:55:41.247861 systemd-networkd[1510]: eth0: Gained IPv6LL
Jan 14 00:55:42.675068 extend-filesystems[1587]: Resized filesystem in /dev/vda9
Jan 14 00:55:42.650311 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 14 00:55:42.656026 systemd[1]: Starting update-engine.service - Update Engine...
Jan 14 00:55:42.674960 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 14 00:55:42.687081 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 14 00:55:42.710458 jq[1616]: true
Jan 14 00:55:42.780430 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 14 00:55:42.789718 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 14 00:55:42.790212 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 14 00:55:42.790905 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 14 00:55:42.791618 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 14 00:55:42.799295 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 14 00:55:42.799829 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 14 00:55:42.808282 systemd[1]: motdgen.service: Deactivated successfully.
Jan 14 00:55:42.808706 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 14 00:55:42.820807 systemd[1]: motdgen.service: Consumed 1.150s CPU time, 2.3M memory peak.
Jan 14 00:55:42.823722 update_engine[1614]: I20260114 00:55:42.823632 1614 main.cc:92] Flatcar Update Engine starting
Jan 14 00:55:42.825389 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 14 00:55:42.826051 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 14 00:55:42.876618 jq[1624]: true
Jan 14 00:55:43.603814 systemd[1]: Reached target network-online.target - Network is Online.
Jan 14 00:55:43.704090 tar[1623]: linux-amd64/LICENSE
Jan 14 00:55:43.704813 tar[1623]: linux-amd64/helm
Jan 14 00:55:43.897676 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 14 00:55:43.914428 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 00:55:43.925092 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 14 00:55:44.005952 sshd_keygen[1620]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 14 00:55:44.039120 dbus-daemon[1584]: [system] SELinux support is enabled
Jan 14 00:55:44.045070 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 14 00:55:44.065262 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 14 00:55:44.065447 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 14 00:55:44.075762 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 14 00:55:44.075892 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 14 00:55:44.078602 systemd-logind[1605]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 14 00:55:44.078644 systemd-logind[1605]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 14 00:55:44.088036 systemd-logind[1605]: New seat seat0.
Jan 14 00:55:44.094366 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 14 00:55:44.103865 update_engine[1614]: I20260114 00:55:44.096943 1614 update_check_scheduler.cc:74] Next update check in 2m48s
Jan 14 00:55:44.105446 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 14 00:55:44.162720 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 14 00:55:44.209269 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 14 00:55:44.228959 dbus-daemon[1584]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 14 00:55:44.247397 bash[1668]: Updated "/home/core/.ssh/authorized_keys"
Jan 14 00:55:44.264808 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 14 00:55:44.294366 systemd[1]: Started update-engine.service - Update Engine.
Jan 14 00:55:44.333929 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 14 00:55:44.348809 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 14 00:55:44.363947 systemd[1]: issuegen.service: Deactivated successfully.
Jan 14 00:55:44.369370 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 14 00:55:44.434225 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 14 00:55:44.669201 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 14 00:55:44.670226 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 14 00:55:44.695698 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 14 00:55:44.791899 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 14 00:55:44.806357 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 14 00:55:44.984690 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 14 00:55:44.995928 systemd[1]: Reached target getty.target - Login Prompts.
Jan 14 00:55:45.249762 locksmithd[1683]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 14 00:55:46.664613 containerd[1637]: time="2026-01-14T00:55:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 14 00:55:46.698023 containerd[1637]: time="2026-01-14T00:55:46.695428761Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Jan 14 00:55:46.900638 containerd[1637]: time="2026-01-14T00:55:46.899985614Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="545.429µs"
Jan 14 00:55:46.900638 containerd[1637]: time="2026-01-14T00:55:46.900316532Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 14 00:55:46.901940 containerd[1637]: time="2026-01-14T00:55:46.901574901Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 14 00:55:46.901940 containerd[1637]: time="2026-01-14T00:55:46.901752352Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 14 00:55:46.903673 containerd[1637]: time="2026-01-14T00:55:46.902838761Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 14 00:55:46.903673 containerd[1637]: time="2026-01-14T00:55:46.902872414Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 14 00:55:46.903673 containerd[1637]: time="2026-01-14T00:55:46.903073790Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 14 00:55:46.903673 containerd[1637]: time="2026-01-14T00:55:46.903095751Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 14 00:55:46.905315 containerd[1637]: time="2026-01-14T00:55:46.903783464Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 14 00:55:46.905315 containerd[1637]: time="2026-01-14T00:55:46.903806367Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 14 00:55:46.905315 containerd[1637]: time="2026-01-14T00:55:46.903823158Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 14 00:55:46.905315 containerd[1637]: time="2026-01-14T00:55:46.903835011Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 14 00:55:46.905315 containerd[1637]: time="2026-01-14T00:55:46.904250717Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 14 00:55:46.905315 containerd[1637]: time="2026-01-14T00:55:46.904270624Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 14 00:55:46.905315 containerd[1637]: time="2026-01-14T00:55:46.904831220Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 14 00:55:46.909443 containerd[1637]: time="2026-01-14T00:55:46.905603252Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 14 00:55:46.909443 containerd[1637]: time="2026-01-14T00:55:46.905653496Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 14 00:55:46.909443 containerd[1637]: time="2026-01-14T00:55:46.905669746Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 14 00:55:46.909443 containerd[1637]: time="2026-01-14T00:55:46.906429785Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 14 00:55:46.909443 containerd[1637]: time="2026-01-14T00:55:46.908228473Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 14 00:55:46.909443 containerd[1637]: time="2026-01-14T00:55:46.908402287Z" level=info msg="metadata content store policy set" policy=shared
Jan 14 00:55:46.937611 containerd[1637]: time="2026-01-14T00:55:46.936898718Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 14 00:55:46.937611 containerd[1637]: time="2026-01-14T00:55:46.937311872Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 14 00:55:46.938653 containerd[1637]: time="2026-01-14T00:55:46.938448073Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 14 00:55:46.938653 containerd[1637]: time="2026-01-14T00:55:46.938585469Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 14 00:55:46.938653 containerd[1637]: time="2026-01-14T00:55:46.938604395Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 14 00:55:46.938653 containerd[1637]: time="2026-01-14T00:55:46.938618190Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 14 00:55:46.938653 containerd[1637]: time="2026-01-14T00:55:46.938629181Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 14 00:55:46.938653 containerd[1637]: time="2026-01-14T00:55:46.938638087Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 14 00:55:46.939061 containerd[1637]: time="2026-01-14T00:55:46.938959257Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 14 00:55:46.939061 containerd[1637]: time="2026-01-14T00:55:46.938983753Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 14 00:55:46.939061 containerd[1637]: time="2026-01-14T00:55:46.938997739Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 14 00:55:46.939061 containerd[1637]: time="2026-01-14T00:55:46.939007647Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 14 00:55:46.939061 containerd[1637]: time="2026-01-14T00:55:46.939019219Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 14 00:55:46.939367 containerd[1637]: time="2026-01-14T00:55:46.939105921Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 14 00:55:46.939526 containerd[1637]: time="2026-01-14T00:55:46.939407985Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 14 00:55:46.939526 containerd[1637]: time="2026-01-14T00:55:46.939430807Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 14 00:55:46.939526 containerd[1637]: time="2026-01-14T00:55:46.939445585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 14 00:55:46.939652 containerd[1637]: time="2026-01-14T00:55:46.939616324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 14 00:55:46.939652 containerd[1637]: time="2026-01-14T00:55:46.939632494Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 14 00:55:46.939835 containerd[1637]: time="2026-01-14T00:55:46.939767095Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 14 00:55:46.939835 containerd[1637]: time="2026-01-14T00:55:46.939781742Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 14 00:55:46.939981 containerd[1637]: time="2026-01-14T00:55:46.939967960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 14 00:55:46.940445 containerd[1637]: time="2026-01-14T00:55:46.940298317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 14 00:55:46.940616 containerd[1637]: time="2026-01-14T00:55:46.940580634Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 14 00:55:46.941053 containerd[1637]: time="2026-01-14T00:55:46.940826664Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 14 00:55:46.941635 containerd[1637]: time="2026-01-14T00:55:46.941139778Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 14 00:55:46.943735 containerd[1637]: time="2026-01-14T00:55:46.943300351Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 14 00:55:46.943806 containerd[1637]: time="2026-01-14T00:55:46.943793202Z" level=info msg="Start snapshots syncer"
Jan 14 00:55:46.944441 containerd[1637]: time="2026-01-14T00:55:46.944344401Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 14 00:55:46.946425 containerd[1637]: time="2026-01-14T00:55:46.945959927Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 14 00:55:46.947783 containerd[1637]: time="2026-01-14T00:55:46.946585916Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 14 00:55:46.947783 containerd[1637]: time="2026-01-14T00:55:46.946780889Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 14 00:55:46.947783 containerd[1637]: time="2026-01-14T00:55:46.947130953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 14 00:55:46.947783 containerd[1637]: time="2026-01-14T00:55:46.947216503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 14 00:55:46.947783 containerd[1637]: time="2026-01-14T00:55:46.947228626Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 14 00:55:46.947783 containerd[1637]: time="2026-01-14T00:55:46.947238454Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 14 00:55:46.947783 containerd[1637]: time="2026-01-14T00:55:46.947248493Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 14 00:55:46.947783 containerd[1637]: time="2026-01-14T00:55:46.947260205Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 14 00:55:46.947783 containerd[1637]: time="2026-01-14T00:55:46.947329805Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 14 00:55:46.947783 containerd[1637]: time="2026-01-14T00:55:46.947343350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 14 00:55:46.947783 containerd[1637]: time="2026-01-14T00:55:46.947353940Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 14 00:55:46.947783 containerd[1637]: time="2026-01-14T00:55:46.947621469Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 14 00:55:46.947783 containerd[1637]: time="2026-01-14T00:55:46.947776348Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 14 00:55:46.947783 containerd[1637]: time="2026-01-14T00:55:46.947787920Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 14 00:55:46.948075 containerd[1637]: time="2026-01-14T00:55:46.947797087Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 14 00:55:46.948075 containerd[1637]: time="2026-01-14T00:55:46.947804200Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 14 00:55:46.948075 containerd[1637]: time="2026-01-14T00:55:46.947813998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 14 00:55:46.948075 containerd[1637]: time="2026-01-14T00:55:46.947824317Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 14 00:55:46.948075 containerd[1637]: time="2026-01-14T00:55:46.947840167Z" level=info msg="runtime interface created"
Jan 14 00:55:46.948075 containerd[1637]: time="2026-01-14T00:55:46.947845297Z" level=info msg="created NRI interface"
Jan 14 00:55:46.948075 containerd[1637]: time="2026-01-14T00:55:46.947917221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 14 00:55:46.948075 containerd[1637]: time="2026-01-14T00:55:46.947931918Z" level=info msg="Connect containerd service"
Jan 14 00:55:46.948075 containerd[1637]: time="2026-01-14T00:55:46.947951886Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 14 00:55:46.951621 containerd[1637]: time="2026-01-14T00:55:46.951342105Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 14 00:55:48.165300 tar[1623]: linux-amd64/README.md
Jan 14 00:55:48.205327 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 14 00:55:48.476863 containerd[1637]: time="2026-01-14T00:55:48.475592250Z" level=info msg="Start subscribing containerd event"
Jan 14 00:55:48.476863 containerd[1637]: time="2026-01-14T00:55:48.476102462Z" level=info msg="Start recovering state"
Jan 14 00:55:48.476863 containerd[1637]: time="2026-01-14T00:55:48.476345874Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 14 00:55:48.476863 containerd[1637]: time="2026-01-14T00:55:48.476434960Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 14 00:55:48.476863 containerd[1637]: time="2026-01-14T00:55:48.476768766Z" level=info msg="Start event monitor"
Jan 14 00:55:48.476863 containerd[1637]: time="2026-01-14T00:55:48.476852182Z" level=info msg="Start cni network conf syncer for default"
Jan 14 00:55:48.476863 containerd[1637]: time="2026-01-14T00:55:48.476869194Z" level=info msg="Start streaming server"
Jan 14 00:55:48.478279 containerd[1637]: time="2026-01-14T00:55:48.476978678Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 14 00:55:48.478279 containerd[1637]: time="2026-01-14T00:55:48.476990750Z" level=info msg="runtime interface starting up..."
Jan 14 00:55:48.478279 containerd[1637]: time="2026-01-14T00:55:48.476999236Z" level=info msg="starting plugins..."
Jan 14 00:55:48.478279 containerd[1637]: time="2026-01-14T00:55:48.477078895Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 14 00:55:48.478386 systemd[1]: Started containerd.service - containerd container runtime.
Jan 14 00:55:48.486003 containerd[1637]: time="2026-01-14T00:55:48.485772984Z" level=info msg="containerd successfully booted in 1.830074s"
Jan 14 00:55:49.999998 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 14 00:55:50.016349 systemd[1]: Started sshd@0-10.0.0.53:22-10.0.0.1:59968.service - OpenSSH per-connection server daemon (10.0.0.1:59968).
Jan 14 00:55:50.346392 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 59968 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:55:50.353143 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:55:50.377732 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 14 00:55:50.383311 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 14 00:55:50.397726 systemd-logind[1605]: New session 1 of user core.
Jan 14 00:55:50.491952 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 14 00:55:50.497869 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 14 00:55:50.543932 (systemd)[1729]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:55:50.552609 systemd-logind[1605]: New session 2 of user core.
Jan 14 00:55:50.868103 systemd[1729]: Queued start job for default target default.target.
Jan 14 00:55:50.890367 systemd[1729]: Created slice app.slice - User Application Slice.
Jan 14 00:55:50.890449 systemd[1729]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Jan 14 00:55:50.890616 systemd[1729]: Reached target paths.target - Paths.
Jan 14 00:55:50.890769 systemd[1729]: Reached target timers.target - Timers.
Jan 14 00:55:50.893880 systemd[1729]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 14 00:55:50.903340 systemd[1729]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Jan 14 00:55:50.937875 systemd[1729]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 14 00:55:50.939761 systemd[1729]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Jan 14 00:55:50.944640 systemd[1729]: Reached target sockets.target - Sockets.
Jan 14 00:55:50.944791 systemd[1729]: Reached target basic.target - Basic System.
Jan 14 00:55:50.944875 systemd[1729]: Reached target default.target - Main User Target.
Jan 14 00:55:50.944933 systemd[1729]: Startup finished in 379ms.
Jan 14 00:55:50.947681 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 14 00:55:50.971681 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 14 00:55:51.019102 systemd[1]: Started sshd@1-10.0.0.53:22-10.0.0.1:59980.service - OpenSSH per-connection server daemon (10.0.0.1:59980).
Jan 14 00:55:51.145608 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 59980 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:55:51.150019 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:55:51.172741 systemd-logind[1605]: New session 3 of user core.
Jan 14 00:55:51.191950 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 14 00:55:51.241292 sshd[1751]: Connection closed by 10.0.0.1 port 59980
Jan 14 00:55:51.245761 sshd-session[1745]: pam_unix(sshd:session): session closed for user core
Jan 14 00:55:51.257302 systemd[1]: sshd@1-10.0.0.53:22-10.0.0.1:59980.service: Deactivated successfully.
Jan 14 00:55:51.263430 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 00:55:51.264840 systemd[1]: session-3.scope: Deactivated successfully.
Jan 14 00:55:51.268748 systemd-logind[1605]: Session 3 logged out. Waiting for processes to exit.
Jan 14 00:55:51.271436 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 14 00:55:51.283876 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 00:55:51.288872 systemd[1]: Started sshd@2-10.0.0.53:22-10.0.0.1:59990.service - OpenSSH per-connection server daemon (10.0.0.1:59990).
Jan 14 00:55:51.289885 systemd[1]: Startup finished in 24.362s (kernel) + 58.786s (initrd) + 23.725s (userspace) = 1min 46.874s.
Jan 14 00:55:51.293261 systemd-logind[1605]: Removed session 3.
Jan 14 00:55:51.425048 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 59990 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:55:51.427675 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:55:51.447446 systemd-logind[1605]: New session 4 of user core.
Jan 14 00:55:51.457755 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 14 00:55:51.497462 sshd[1766]: Connection closed by 10.0.0.1 port 59990
Jan 14 00:55:51.499866 sshd-session[1762]: pam_unix(sshd:session): session closed for user core
Jan 14 00:55:51.507435 systemd[1]: sshd@2-10.0.0.53:22-10.0.0.1:59990.service: Deactivated successfully.
Jan 14 00:55:51.510112 systemd[1]: session-4.scope: Deactivated successfully.
Jan 14 00:55:51.514616 systemd-logind[1605]: Session 4 logged out. Waiting for processes to exit.
Jan 14 00:55:51.518068 systemd-logind[1605]: Removed session 4.
Jan 14 00:55:54.439650 kubelet[1757]: E0114 00:55:54.438713 1757 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 00:55:54.446841 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 00:55:54.447248 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 00:55:54.449885 systemd[1]: kubelet.service: Consumed 7.217s CPU time, 271M memory peak.
Jan 14 00:56:01.541811 systemd[1]: Started sshd@3-10.0.0.53:22-10.0.0.1:42332.service - OpenSSH per-connection server daemon (10.0.0.1:42332).
Jan 14 00:56:02.064061 sshd[1779]: Accepted publickey for core from 10.0.0.1 port 42332 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:56:02.075867 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:56:02.158991 systemd-logind[1605]: New session 5 of user core.
Jan 14 00:56:02.170095 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 14 00:56:02.250027 sshd[1783]: Connection closed by 10.0.0.1 port 42332
Jan 14 00:56:02.250926 sshd-session[1779]: pam_unix(sshd:session): session closed for user core
Jan 14 00:56:02.274008 systemd[1]: sshd@3-10.0.0.53:22-10.0.0.1:42332.service: Deactivated successfully.
Jan 14 00:56:02.278996 systemd[1]: session-5.scope: Deactivated successfully.
Jan 14 00:56:02.281995 systemd-logind[1605]: Session 5 logged out. Waiting for processes to exit.
Jan 14 00:56:02.288803 systemd[1]: Started sshd@4-10.0.0.53:22-10.0.0.1:42344.service - OpenSSH per-connection server daemon (10.0.0.1:42344).
Jan 14 00:56:02.292811 systemd-logind[1605]: Removed session 5.
Jan 14 00:56:04.258777 sshd[1789]: Accepted publickey for core from 10.0.0.1 port 42344 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:56:04.268737 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:56:04.301960 systemd-logind[1605]: New session 6 of user core.
Jan 14 00:56:04.320315 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 14 00:56:04.361968 sshd[1793]: Connection closed by 10.0.0.1 port 42344
Jan 14 00:56:04.362455 sshd-session[1789]: pam_unix(sshd:session): session closed for user core
Jan 14 00:56:04.381114 systemd[1]: sshd@4-10.0.0.53:22-10.0.0.1:42344.service: Deactivated successfully.
Jan 14 00:56:04.381927 systemd[1]: sshd@4-10.0.0.53:22-10.0.0.1:42344.service: Consumed 1.830s CPU time, 4.2M memory peak.
Jan 14 00:56:04.387721 systemd[1]: session-6.scope: Deactivated successfully.
Jan 14 00:56:04.391454 systemd-logind[1605]: Session 6 logged out. Waiting for processes to exit.
Jan 14 00:56:04.400641 systemd[1]: Started sshd@5-10.0.0.53:22-10.0.0.1:59250.service - OpenSSH per-connection server daemon (10.0.0.1:59250).
Jan 14 00:56:04.402832 systemd-logind[1605]: Removed session 6.
Jan 14 00:56:04.509919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 14 00:56:04.517979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 00:56:04.549388 sshd[1799]: Accepted publickey for core from 10.0.0.1 port 59250 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:56:04.551748 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:56:04.597915 systemd-logind[1605]: New session 7 of user core.
Jan 14 00:56:04.612409 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 14 00:56:04.664640 sshd[1806]: Connection closed by 10.0.0.1 port 59250
Jan 14 00:56:04.666797 sshd-session[1799]: pam_unix(sshd:session): session closed for user core
Jan 14 00:56:04.725856 systemd[1]: sshd@5-10.0.0.53:22-10.0.0.1:59250.service: Deactivated successfully.
Jan 14 00:56:04.730024 systemd[1]: session-7.scope: Deactivated successfully.
Jan 14 00:56:04.735987 systemd-logind[1605]: Session 7 logged out. Waiting for processes to exit.
Jan 14 00:56:04.743062 systemd[1]: Started sshd@6-10.0.0.53:22-10.0.0.1:59266.service - OpenSSH per-connection server daemon (10.0.0.1:59266).
Jan 14 00:56:04.745135 systemd-logind[1605]: Removed session 7.
Jan 14 00:56:04.891334 sshd[1812]: Accepted publickey for core from 10.0.0.1 port 59266 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 00:56:04.895140 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 00:56:04.906456 systemd-logind[1605]: New session 8 of user core.
Jan 14 00:56:04.919974 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 14 00:56:05.030044 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 14 00:56:05.030947 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 00:56:05.869200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 00:56:05.888837 (kubelet)[1833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 00:56:06.461695 kubelet[1833]: E0114 00:56:06.460277 1833 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 00:56:06.470692 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 00:56:06.470984 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 00:56:06.472446 systemd[1]: kubelet.service: Consumed 1.568s CPU time, 110.9M memory peak.
Jan 14 00:56:16.647082 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 14 00:56:16.690411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 00:56:17.446664 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 14 00:56:17.689035 (dockerd)[1856]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 14 00:56:18.836154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 00:56:18.855917 (kubelet)[1862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 00:56:19.620962 kubelet[1862]: E0114 00:56:19.611430 1862 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 00:56:19.627648 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 00:56:19.627942 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 00:56:19.629101 systemd[1]: kubelet.service: Consumed 2.496s CPU time, 110.5M memory peak.
Jan 14 00:56:25.942769 dockerd[1856]: time="2026-01-14T00:56:25.940063891Z" level=info msg="Starting up"
Jan 14 00:56:25.951163 dockerd[1856]: time="2026-01-14T00:56:25.951026793Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 14 00:56:26.170590 dockerd[1856]: time="2026-01-14T00:56:26.169888610Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 14 00:56:26.407689 systemd[1]: var-lib-docker-metacopy\x2dcheck953839784-merged.mount: Deactivated successfully.
Jan 14 00:56:26.495800 dockerd[1856]: time="2026-01-14T00:56:26.495329273Z" level=info msg="Loading containers: start."
Jan 14 00:56:26.567192 kernel: Initializing XFRM netlink socket
Jan 14 00:56:28.143880 systemd-networkd[1510]: docker0: Link UP
Jan 14 00:56:28.169673 dockerd[1856]: time="2026-01-14T00:56:28.169420278Z" level=info msg="Loading containers: done."
Jan 14 00:56:28.234758 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2415849281-merged.mount: Deactivated successfully.
Jan 14 00:56:28.248653 dockerd[1856]: time="2026-01-14T00:56:28.247117972Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 14 00:56:28.248653 dockerd[1856]: time="2026-01-14T00:56:28.247994587Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 14 00:56:28.250191 dockerd[1856]: time="2026-01-14T00:56:28.250030485Z" level=info msg="Initializing buildkit"
Jan 14 00:56:28.402927 dockerd[1856]: time="2026-01-14T00:56:28.401909938Z" level=info msg="Completed buildkit initialization"
Jan 14 00:56:28.426242 dockerd[1856]: time="2026-01-14T00:56:28.424750193Z" level=info msg="Daemon has completed initialization"
Jan 14 00:56:28.426242 dockerd[1856]: time="2026-01-14T00:56:28.424929678Z" level=info msg="API listen on /run/docker.sock"
Jan 14 00:56:28.425782 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 14 00:56:29.452194 update_engine[1614]: I20260114 00:56:29.449798 1614 update_attempter.cc:509] Updating boot flags...
Jan 14 00:56:29.863011 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 14 00:56:29.969747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 00:56:31.577390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 00:56:31.606376 (kubelet)[2107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 00:56:32.140927 kubelet[2107]: E0114 00:56:32.140045 2107 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 00:56:32.164102 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 00:56:32.173677 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 00:56:32.177017 systemd[1]: kubelet.service: Consumed 1.065s CPU time, 109M memory peak.
Jan 14 00:56:33.229752 containerd[1637]: time="2026-01-14T00:56:33.228153397Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Jan 14 00:56:35.695062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount626333602.mount: Deactivated successfully.
Jan 14 00:56:42.389921 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 14 00:56:42.441639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 00:56:45.242759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 00:56:45.297996 (kubelet)[2185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 00:56:47.889924 kubelet[2185]: E0114 00:56:47.889023 2185 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 00:56:47.938973 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 00:56:47.941016 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 00:56:47.976036 systemd[1]: kubelet.service: Consumed 2.548s CPU time, 112.7M memory peak.
Jan 14 00:56:52.863833 containerd[1637]: time="2026-01-14T00:56:52.861234172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:56:52.881219 containerd[1637]: time="2026-01-14T00:56:52.881158674Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=29139550"
Jan 14 00:56:52.892284 containerd[1637]: time="2026-01-14T00:56:52.892176717Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:56:52.945907 containerd[1637]: time="2026-01-14T00:56:52.945072408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:56:52.986301 containerd[1637]: time="2026-01-14T00:56:52.984440871Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 19.755903168s"
Jan 14 00:56:52.986301 containerd[1637]: time="2026-01-14T00:56:52.984860925Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\""
Jan 14 00:56:52.996105 containerd[1637]: time="2026-01-14T00:56:52.996067100Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Jan 14 00:56:58.113841 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 14 00:56:58.121850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 00:56:59.964106 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 00:57:00.005307 (kubelet)[2205]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 00:57:00.345799 kubelet[2205]: E0114 00:57:00.345050 2205 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 00:57:00.354146 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 00:57:00.355212 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 00:57:00.361131 systemd[1]: kubelet.service: Consumed 1.811s CPU time, 110.2M memory peak.
Jan 14 00:57:02.969778 containerd[1637]: time="2026-01-14T00:57:02.969073968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:57:02.973265 containerd[1637]: time="2026-01-14T00:57:02.973230555Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626"
Jan 14 00:57:02.982872 containerd[1637]: time="2026-01-14T00:57:02.981137862Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:57:03.009314 containerd[1637]: time="2026-01-14T00:57:03.009032339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:57:03.018842 containerd[1637]: time="2026-01-14T00:57:03.015312002Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 10.019051049s"
Jan 14 00:57:03.018842 containerd[1637]: time="2026-01-14T00:57:03.017904151Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\""
Jan 14 00:57:03.025789 containerd[1637]: time="2026-01-14T00:57:03.025321908Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Jan 14 00:57:10.368293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 14 00:57:10.393742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 00:57:12.414114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 00:57:12.495687 (kubelet)[2226]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 00:57:14.664984 kubelet[2226]: E0114 00:57:14.662943 2226 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 00:57:14.674138 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 00:57:14.674928 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 00:57:14.682910 systemd[1]: kubelet.service: Consumed 2.535s CPU time, 110.3M memory peak.
Jan 14 00:57:16.192324 containerd[1637]: time="2026-01-14T00:57:16.184237742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:57:16.206979 containerd[1637]: time="2026-01-14T00:57:16.206776352Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965"
Jan 14 00:57:16.233948 containerd[1637]: time="2026-01-14T00:57:16.232317196Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:57:16.590180 containerd[1637]: time="2026-01-14T00:57:16.585093317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:57:16.594225 containerd[1637]: time="2026-01-14T00:57:16.593280567Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 13.567481049s"
Jan 14 00:57:16.595320 containerd[1637]: time="2026-01-14T00:57:16.593330149Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\""
Jan 14 00:57:16.604917 containerd[1637]: time="2026-01-14T00:57:16.604879672Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Jan 14 00:57:24.893995 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jan 14 00:57:24.945684 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 00:57:28.914049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 00:57:29.132093 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 00:57:30.733363 kubelet[2247]: E0114 00:57:30.733080 2247 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 00:57:30.761928 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 00:57:30.768060 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 00:57:30.775455 systemd[1]: kubelet.service: Consumed 2.403s CPU time, 110.9M memory peak.
Jan 14 00:57:32.203284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3997221803.mount: Deactivated successfully.
Jan 14 00:57:41.030635 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Jan 14 00:57:41.205845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 00:57:43.748252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 00:57:43.840808 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 00:57:44.220698 containerd[1637]: time="2026-01-14T00:57:44.220310920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:57:44.229016 containerd[1637]: time="2026-01-14T00:57:44.226827317Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31927016"
Jan 14 00:57:44.235289 containerd[1637]: time="2026-01-14T00:57:44.235229297Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:57:44.256664 containerd[1637]: time="2026-01-14T00:57:44.254790303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:57:44.256664 containerd[1637]: time="2026-01-14T00:57:44.256172427Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 27.648264818s"
Jan 14 00:57:44.256664 containerd[1637]: time="2026-01-14T00:57:44.256297958Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\""
Jan 14 00:57:44.264800 containerd[1637]: time="2026-01-14T00:57:44.264770656Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jan 14 00:57:44.450969 kubelet[2267]: E0114 00:57:44.449068 2267 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 00:57:44.469214 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 00:57:44.469879 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 00:57:44.471104 systemd[1]: kubelet.service: Consumed 1.851s CPU time, 110.7M memory peak.
Jan 14 00:57:47.209460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1008377931.mount: Deactivated successfully.
Jan 14 00:57:54.618941 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Jan 14 00:57:54.633819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 00:57:55.575054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 00:57:55.616385 (kubelet)[2336]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 00:57:56.306446 kubelet[2336]: E0114 00:57:56.305685 2336 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 00:57:56.341368 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 00:57:56.343072 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 00:57:56.348995 systemd[1]: kubelet.service: Consumed 882ms CPU time, 110M memory peak.
Jan 14 00:58:01.562432 containerd[1637]: time="2026-01-14T00:58:01.561453042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:58:01.571974 containerd[1637]: time="2026-01-14T00:58:01.569914655Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20931344"
Jan 14 00:58:01.576425 containerd[1637]: time="2026-01-14T00:58:01.576343787Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:58:01.589923 containerd[1637]: time="2026-01-14T00:58:01.589815418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:58:01.592798 containerd[1637]: time="2026-01-14T00:58:01.592752136Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 17.325992264s"
Jan 14 00:58:01.593153 containerd[1637]: time="2026-01-14T00:58:01.593027045Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jan 14 00:58:01.601930 containerd[1637]: time="2026-01-14T00:58:01.601296581Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 14 00:58:02.818283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1188306925.mount: Deactivated successfully.
Jan 14 00:58:02.854424 containerd[1637]: time="2026-01-14T00:58:02.853150900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 00:58:02.863782 containerd[1637]: time="2026-01-14T00:58:02.862792202Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Jan 14 00:58:02.868843 containerd[1637]: time="2026-01-14T00:58:02.868795825Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 00:58:02.883401 containerd[1637]: time="2026-01-14T00:58:02.883339167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 00:58:02.897984 containerd[1637]: time="2026-01-14T00:58:02.889212037Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.28786353s"
Jan 14 00:58:02.897984 containerd[1637]: time="2026-01-14T00:58:02.890331646Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 14 00:58:02.897984 containerd[1637]: time="2026-01-14T00:58:02.895904646Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jan 14 00:58:04.128433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3909099133.mount: Deactivated successfully.
Jan 14 00:58:06.360406 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Jan 14 00:58:06.369364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 00:58:06.916935 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 00:58:06.942283 (kubelet)[2409]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 00:58:07.223225 kubelet[2409]: E0114 00:58:07.222117 2409 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 00:58:07.230719 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 00:58:07.231283 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 00:58:07.232410 systemd[1]: kubelet.service: Consumed 663ms CPU time, 109.4M memory peak.
Jan 14 00:58:11.272766 containerd[1637]: time="2026-01-14T00:58:11.272238206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:58:11.279262 containerd[1637]: time="2026-01-14T00:58:11.279190304Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=57629071"
Jan 14 00:58:11.283760 containerd[1637]: time="2026-01-14T00:58:11.283250483Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:58:11.290834 containerd[1637]: time="2026-01-14T00:58:11.289829953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:58:11.294036 containerd[1637]: time="2026-01-14T00:58:11.293456316Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 8.397509272s"
Jan 14 00:58:11.294036 containerd[1637]: time="2026-01-14T00:58:11.293834095Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jan 14 00:58:17.383373 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Jan 14 00:58:17.402295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 00:58:19.744302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 00:58:19.801153 (kubelet)[2453]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:58:20.858760 kubelet[2453]: E0114 00:58:20.857745 2453 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:58:20.968320 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:58:20.969122 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:58:21.001062 systemd[1]: kubelet.service: Consumed 1.810s CPU time, 110.2M memory peak. Jan 14 00:58:32.483999 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 14 00:58:32.800837 update_engine[1614]: I20260114 00:58:32.531836 1614 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 14 00:58:32.800837 update_engine[1614]: I20260114 00:58:32.532326 1614 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 14 00:58:32.800837 update_engine[1614]: I20260114 00:58:32.535147 1614 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 14 00:58:32.800837 update_engine[1614]: I20260114 00:58:32.768247 1614 omaha_request_params.cc:62] Current group set to alpha Jan 14 00:58:32.800837 update_engine[1614]: I20260114 00:58:32.769272 1614 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 14 00:58:32.800837 update_engine[1614]: I20260114 00:58:32.769289 1614 update_attempter.cc:643] Scheduling an action processor start. 
Jan 14 00:58:32.800837 update_engine[1614]: I20260114 00:58:32.769315 1614 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 14 00:58:32.800837 update_engine[1614]: I20260114 00:58:32.770061 1614 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 14 00:58:32.800837 update_engine[1614]: I20260114 00:58:32.770246 1614 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 14 00:58:32.800837 update_engine[1614]: I20260114 00:58:32.770262 1614 omaha_request_action.cc:272] Request: Jan 14 00:58:32.800837 update_engine[1614]: Jan 14 00:58:32.800837 update_engine[1614]: Jan 14 00:58:32.800837 update_engine[1614]: Jan 14 00:58:32.800837 update_engine[1614]: Jan 14 00:58:32.800837 update_engine[1614]: Jan 14 00:58:32.800837 update_engine[1614]: Jan 14 00:58:32.800837 update_engine[1614]: Jan 14 00:58:32.800837 update_engine[1614]: Jan 14 00:58:32.800837 update_engine[1614]: I20260114 00:58:32.770272 1614 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 00:58:32.800837 update_engine[1614]: I20260114 00:58:32.791196 1614 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 00:58:32.851278 update_engine[1614]: I20260114 00:58:32.840368 1614 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 14 00:58:32.932072 update_engine[1614]: E20260114 00:58:32.871327 1614 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 00:58:32.932072 update_engine[1614]: I20260114 00:58:32.872200 1614 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 14 00:58:32.951998 locksmithd[1683]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 14 00:58:33.675356 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:58:35.550929 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
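The `Could not resolve host: disabled` errors from update_engine are expected rather than a network fault: on Flatcar, pointing the Omaha update server at the literal string `disabled` is a common way to switch off update checks, and update_engine then tries (and fails) to resolve it as a hostname on every attempt. A sketch of the likely drop-in (my assumption about how this host was configured; the actual file is not shown in the log):

```ini
# /etc/flatcar/update.conf (assumed) -- a non-resolvable SERVER value such
# as "disabled" makes every Omaha check fail harmlessly with a DNS error.
GROUP=alpha
SERVER=disabled
```

`GROUP=alpha` matches the "Current group set to alpha" line above; `SERVER=disabled` matches the "Posting an Omaha request to disabled" line.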
Jan 14 00:58:35.629417 (kubelet)[2470]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:58:36.185183 kubelet[2470]: E0114 00:58:36.184191 2470 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:58:36.203926 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:58:36.204812 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:58:36.206027 systemd[1]: kubelet.service: Consumed 1.344s CPU time, 110.1M memory peak. Jan 14 00:58:36.782872 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:58:36.783105 systemd[1]: kubelet.service: Consumed 1.344s CPU time, 110.1M memory peak. Jan 14 00:58:36.810171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:58:36.976235 systemd[1]: Reload requested from client PID 2485 ('systemctl') (unit session-8.scope)... Jan 14 00:58:36.976820 systemd[1]: Reloading... Jan 14 00:58:37.231690 zram_generator::config[2531]: No configuration found. Jan 14 00:58:37.833359 systemd[1]: Reloading finished in 855 ms. Jan 14 00:58:38.078110 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 00:58:38.078978 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 00:58:38.081190 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:58:38.081876 systemd[1]: kubelet.service: Consumed 280ms CPU time, 98.4M memory peak. Jan 14 00:58:38.090267 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 00:58:38.685329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:58:38.729090 (kubelet)[2579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 00:58:39.241332 kubelet[2579]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 00:58:39.241332 kubelet[2579]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 00:58:39.243826 kubelet[2579]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 00:58:39.244066 kubelet[2579]: I0114 00:58:39.243900 2579 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 00:58:41.359405 kubelet[2579]: I0114 00:58:41.358445 2579 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 00:58:41.359405 kubelet[2579]: I0114 00:58:41.359167 2579 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 00:58:41.371272 kubelet[2579]: I0114 00:58:41.369008 2579 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 00:58:41.723149 kubelet[2579]: I0114 00:58:41.718923 2579 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 00:58:41.729042 kubelet[2579]: E0114 00:58:41.729002 2579 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post 
\"https://10.0.0.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 00:58:41.812923 kubelet[2579]: I0114 00:58:41.812886 2579 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 00:58:41.864309 kubelet[2579]: I0114 00:58:41.861867 2579 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 14 00:58:41.868041 kubelet[2579]: I0114 00:58:41.865255 2579 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 00:58:41.870435 kubelet[2579]: I0114 00:58:41.867619 2579 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},
"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 00:58:41.874434 kubelet[2579]: I0114 00:58:41.870931 2579 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 00:58:41.874434 kubelet[2579]: I0114 00:58:41.870955 2579 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 00:58:41.874434 kubelet[2579]: I0114 00:58:41.872276 2579 state_mem.go:36] "Initialized new in-memory state store" Jan 14 00:58:41.965412 kubelet[2579]: I0114 00:58:41.963385 2579 kubelet.go:480] "Attempting to sync node with API server" Jan 14 00:58:41.965412 kubelet[2579]: I0114 00:58:41.963459 2579 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 00:58:41.965412 kubelet[2579]: I0114 00:58:41.963886 2579 kubelet.go:386] "Adding apiserver pod source" Jan 14 00:58:41.965412 kubelet[2579]: I0114 00:58:41.964027 2579 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 00:58:41.999042 kubelet[2579]: E0114 00:58:41.996447 2579 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 00:58:41.999042 kubelet[2579]: E0114 00:58:41.997886 2579 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 00:58:42.021068 kubelet[2579]: I0114 00:58:42.020266 2579 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 00:58:42.022273 kubelet[2579]: I0114 00:58:42.022248 2579 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 00:58:42.030949 kubelet[2579]: W0114 00:58:42.030926 2579 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 14 00:58:42.062918 kubelet[2579]: I0114 00:58:42.062881 2579 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 00:58:42.068823 kubelet[2579]: I0114 00:58:42.066374 2579 server.go:1289] "Started kubelet" Jan 14 00:58:42.094834 kubelet[2579]: I0114 00:58:42.088243 2579 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 00:58:42.106093 kubelet[2579]: I0114 00:58:42.103913 2579 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 00:58:42.140261 kubelet[2579]: I0114 00:58:42.140216 2579 server.go:317] "Adding debug handlers to kubelet server" Jan 14 00:58:42.142337 kubelet[2579]: I0114 00:58:42.142310 2579 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 00:58:42.149271 kubelet[2579]: I0114 00:58:42.145214 2579 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 00:58:42.191986 kubelet[2579]: I0114 00:58:42.191956 2579 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 00:58:42.193355 kubelet[2579]: I0114 00:58:42.193334 2579 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 00:58:42.200946 kubelet[2579]: E0114 00:58:42.194325 2579 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"localhost\" not found" Jan 14 00:58:42.220387 kubelet[2579]: I0114 00:58:42.219350 2579 reconciler.go:26] "Reconciler: start to sync state" Jan 14 00:58:42.234030 kubelet[2579]: E0114 00:58:42.170140 2579 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.53:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.53:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188a73115a4e8938 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-14 00:58:42.06307564 +0000 UTC m=+3.244827029,LastTimestamp:2026-01-14 00:58:42.06307564 +0000 UTC m=+3.244827029,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 14 00:58:42.257386 kubelet[2579]: I0114 00:58:42.254965 2579 factory.go:223] Registration of the systemd container factory successfully Jan 14 00:58:42.257386 kubelet[2579]: I0114 00:58:42.255145 2579 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 00:58:42.257386 kubelet[2579]: I0114 00:58:42.256220 2579 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 00:58:42.257386 kubelet[2579]: E0114 00:58:42.257209 2579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="200ms" Jan 14 
00:58:42.321826 kubelet[2579]: E0114 00:58:42.321135 2579 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 00:58:42.344373 kubelet[2579]: E0114 00:58:42.341406 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:58:42.351024 kubelet[2579]: E0114 00:58:42.349451 2579 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 00:58:42.424142 kubelet[2579]: I0114 00:58:42.423056 2579 factory.go:223] Registration of the containerd container factory successfully Jan 14 00:58:42.441942 kubelet[2579]: E0114 00:58:42.441619 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:58:42.460435 kubelet[2579]: E0114 00:58:42.460134 2579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="400ms" Jan 14 00:58:42.550329 kubelet[2579]: E0114 00:58:42.545455 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:58:42.569237 kubelet[2579]: I0114 00:58:42.569212 2579 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 00:58:42.569370 kubelet[2579]: I0114 00:58:42.569359 2579 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 00:58:42.571240 kubelet[2579]: I0114 00:58:42.571210 2579 state_mem.go:36] "Initialized new in-memory state store" Jan 14 00:58:42.678307 kubelet[2579]: E0114 00:58:42.654677 
2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:58:42.681967 kubelet[2579]: I0114 00:58:42.681247 2579 policy_none.go:49] "None policy: Start" Jan 14 00:58:42.682431 kubelet[2579]: I0114 00:58:42.682410 2579 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 00:58:42.682893 kubelet[2579]: I0114 00:58:42.682878 2579 state_mem.go:35] "Initializing new in-memory state store" Jan 14 00:58:42.683274 kubelet[2579]: I0114 00:58:42.682997 2579 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 14 00:58:42.850833 kubelet[2579]: I0114 00:58:42.705329 2579 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 14 00:58:42.850833 kubelet[2579]: I0114 00:58:42.706296 2579 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 00:58:42.850833 kubelet[2579]: I0114 00:58:42.726088 2579 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
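The `HardEvictionThresholds` array in the NodeConfig dump above is the JSON form of the kubelet's default hard-eviction settings. Expressed as the equivalent `evictionHard` stanza of a KubeletConfiguration (reconstructed from the logged thresholds, not taken from an actual file on this host):

```yaml
# Equivalent of the logged HardEvictionThresholds (kubelet defaults):
# Percentage 0.15 -> "15%", Quantity "100Mi" -> "100Mi", and so on.
evictionHard:
  imagefs.available: "15%"
  imagefs.inodesFree: "5%"
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
```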
Jan 14 00:58:42.850833 kubelet[2579]: I0114 00:58:42.726432 2579 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 00:58:42.850833 kubelet[2579]: E0114 00:58:42.733391 2579 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 00:58:42.850833 kubelet[2579]: E0114 00:58:42.745120 2579 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 00:58:42.850833 kubelet[2579]: E0114 00:58:42.790133 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:58:42.850833 kubelet[2579]: E0114 00:58:42.833903 2579 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 00:58:42.867241 kubelet[2579]: E0114 00:58:42.865183 2579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="800ms" Jan 14 00:58:42.899414 kubelet[2579]: E0114 00:58:42.897193 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:58:42.934025 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 14 00:58:43.055804 kubelet[2579]: E0114 00:58:43.052396 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:58:43.055804 kubelet[2579]: E0114 00:58:43.055005 2579 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 14 00:58:43.079451 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 00:58:43.127809 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 14 00:58:43.157331 kubelet[2579]: E0114 00:58:43.154219 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:58:43.170981 kubelet[2579]: E0114 00:58:43.170945 2579 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 00:58:43.177383 kubelet[2579]: E0114 00:58:43.176214 2579 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 00:58:43.180177 kubelet[2579]: E0114 00:58:43.177869 2579 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 00:58:43.180287 kubelet[2579]: I0114 00:58:43.180269 2579 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 00:58:43.180402 kubelet[2579]: I0114 00:58:43.180365 2579 container_log_manager.go:189] "Initializing 
container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 00:58:43.187317 kubelet[2579]: I0114 00:58:43.187298 2579 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 00:58:43.198235 kubelet[2579]: E0114 00:58:43.195422 2579 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 00:58:43.198235 kubelet[2579]: E0114 00:58:43.197338 2579 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 14 00:58:43.331385 kubelet[2579]: I0114 00:58:43.330209 2579 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 00:58:43.341179 kubelet[2579]: E0114 00:58:43.337993 2579 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Jan 14 00:58:43.435872 update_engine[1614]: I20260114 00:58:43.432089 1614 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 00:58:43.435872 update_engine[1614]: I20260114 00:58:43.433337 1614 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 00:58:43.442438 update_engine[1614]: I20260114 00:58:43.438901 1614 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 00:58:43.472398 update_engine[1614]: E20260114 00:58:43.469842 1614 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 00:58:43.472398 update_engine[1614]: I20260114 00:58:43.471130 1614 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 14 00:58:43.545217 kubelet[2579]: I0114 00:58:43.544935 2579 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 00:58:43.549009 kubelet[2579]: E0114 00:58:43.547288 2579 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Jan 14 00:58:43.572113 kubelet[2579]: I0114 00:58:43.570810 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6061e99ba5868730e624ac4fc598fefe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6061e99ba5868730e624ac4fc598fefe\") " pod="kube-system/kube-apiserver-localhost" Jan 14 00:58:43.572113 kubelet[2579]: I0114 00:58:43.572045 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6061e99ba5868730e624ac4fc598fefe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6061e99ba5868730e624ac4fc598fefe\") " pod="kube-system/kube-apiserver-localhost" Jan 14 00:58:43.572113 kubelet[2579]: I0114 00:58:43.572082 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6061e99ba5868730e624ac4fc598fefe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6061e99ba5868730e624ac4fc598fefe\") " pod="kube-system/kube-apiserver-localhost" Jan 14 00:58:43.632395 systemd[1]: Created slice 
kubepods-burstable-pod6061e99ba5868730e624ac4fc598fefe.slice - libcontainer container kubepods-burstable-pod6061e99ba5868730e624ac4fc598fefe.slice. Jan 14 00:58:43.673373 kubelet[2579]: I0114 00:58:43.672195 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 00:58:43.673373 kubelet[2579]: I0114 00:58:43.672360 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 00:58:43.684262 kubelet[2579]: I0114 00:58:43.678974 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 00:58:43.684262 kubelet[2579]: I0114 00:58:43.679129 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 00:58:43.684262 kubelet[2579]: I0114 00:58:43.679154 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 14 00:58:43.684262 kubelet[2579]: I0114 00:58:43.679178 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 00:58:43.740070 kubelet[2579]: E0114 00:58:43.709336 2579 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:58:43.740070 kubelet[2579]: E0114 00:58:43.732455 2579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="1.6s" Jan 14 00:58:43.740070 kubelet[2579]: E0114 00:58:43.737188 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:58:43.742375 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. 
Jan 14 00:58:43.803886 containerd[1637]: time="2026-01-14T00:58:43.795393602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6061e99ba5868730e624ac4fc598fefe,Namespace:kube-system,Attempt:0,}" Jan 14 00:58:43.884260 kubelet[2579]: E0114 00:58:43.884102 2579 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 00:58:43.905016 kubelet[2579]: E0114 00:58:43.903394 2579 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:58:43.905261 kubelet[2579]: E0114 00:58:43.905149 2579 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 00:58:43.905261 kubelet[2579]: E0114 00:58:43.905223 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:58:43.915345 containerd[1637]: time="2026-01-14T00:58:43.915279657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 14 00:58:43.937115 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
Jan 14 00:58:43.962195 kubelet[2579]: E0114 00:58:43.961390 2579 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:58:43.967107 kubelet[2579]: E0114 00:58:43.966258 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:58:43.970047 kubelet[2579]: I0114 00:58:43.969114 2579 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 00:58:43.976901 kubelet[2579]: E0114 00:58:43.976446 2579 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Jan 14 00:58:43.979067 containerd[1637]: time="2026-01-14T00:58:43.978319472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 14 00:58:44.102075 kubelet[2579]: E0114 00:58:44.098317 2579 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.53:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.53:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188a73115a4e8938 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-14 00:58:42.06307564 +0000 UTC m=+3.244827029,LastTimestamp:2026-01-14 00:58:42.06307564 +0000 UTC m=+3.244827029,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 14 00:58:44.372107 kubelet[2579]: 
E0114 00:58:44.369807 2579 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 00:58:44.787385 kubelet[2579]: I0114 00:58:44.785890 2579 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 00:58:44.787385 kubelet[2579]: E0114 00:58:44.786404 2579 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Jan 14 00:58:45.091378 containerd[1637]: time="2026-01-14T00:58:45.086416413Z" level=info msg="connecting to shim 30210244187778a3ef62b4d76f5b637e566efdf8a3ca8c6d83566dd69f313e2a" address="unix:///run/containerd/s/f10b398e908ea42dea02587c27404607f788423892e7c6f43d4d1ac206cd5f1c" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:58:45.102850 containerd[1637]: time="2026-01-14T00:58:45.097201576Z" level=info msg="connecting to shim afea32f3a6b86e273791a9fdbc2ebd2572f8b963041b0c4d8179ea092ae610fa" address="unix:///run/containerd/s/47727cfc973f1bc7d5e871567c6ed4bd32a193f01a525b3d0e5f468a9aae472d" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:58:45.131260 containerd[1637]: time="2026-01-14T00:58:45.131200327Z" level=info msg="connecting to shim 3820962ec5d3e9d780ba9a34b6482a291a5bfd38d300d05d6593d4125f6f050f" address="unix:///run/containerd/s/58d1d4655d5ceeea67ad5c318916a436203bb2aaf4de6bdedd9849e1195ea123" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:58:45.145132 kubelet[2579]: E0114 00:58:45.145086 2579 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 00:58:45.335917 kubelet[2579]: E0114 00:58:45.335854 2579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="3.2s" Jan 14 00:58:45.366153 systemd[1]: Started cri-containerd-3820962ec5d3e9d780ba9a34b6482a291a5bfd38d300d05d6593d4125f6f050f.scope - libcontainer container 3820962ec5d3e9d780ba9a34b6482a291a5bfd38d300d05d6593d4125f6f050f. Jan 14 00:58:45.707848 systemd[1]: Started cri-containerd-afea32f3a6b86e273791a9fdbc2ebd2572f8b963041b0c4d8179ea092ae610fa.scope - libcontainer container afea32f3a6b86e273791a9fdbc2ebd2572f8b963041b0c4d8179ea092ae610fa. Jan 14 00:58:45.894407 kubelet[2579]: E0114 00:58:45.889211 2579 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 00:58:46.159443 kubelet[2579]: E0114 00:58:46.153365 2579 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 00:58:46.521146 kubelet[2579]: E0114 00:58:46.508049 2579 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 00:58:46.524118 kubelet[2579]: I0114 00:58:46.522346 2579 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 00:58:46.533316 kubelet[2579]: E0114 00:58:46.533156 2579 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Jan 14 00:58:46.729205 systemd[1]: Started cri-containerd-30210244187778a3ef62b4d76f5b637e566efdf8a3ca8c6d83566dd69f313e2a.scope - libcontainer container 30210244187778a3ef62b4d76f5b637e566efdf8a3ca8c6d83566dd69f313e2a. Jan 14 00:58:48.122361 kubelet[2579]: E0114 00:58:48.122161 2579 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 00:58:48.627400 kubelet[2579]: E0114 00:58:48.584279 2579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="6.4s" Jan 14 00:58:48.745751 containerd[1637]: time="2026-01-14T00:58:48.744073835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"3820962ec5d3e9d780ba9a34b6482a291a5bfd38d300d05d6593d4125f6f050f\"" Jan 14 00:58:48.895870 kubelet[2579]: E0114 00:58:48.892010 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 
14 00:58:48.939000 containerd[1637]: time="2026-01-14T00:58:48.937996373Z" level=info msg="CreateContainer within sandbox \"3820962ec5d3e9d780ba9a34b6482a291a5bfd38d300d05d6593d4125f6f050f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 00:58:49.041134 kubelet[2579]: E0114 00:58:49.039990 2579 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 00:58:49.069364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4134984023.mount: Deactivated successfully. Jan 14 00:58:49.081214 containerd[1637]: time="2026-01-14T00:58:49.080967609Z" level=info msg="Container 663e11ac0a3bda496ac26286b0a13f6fe99c9984ef5d5a3bee56a5dabdd58359: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:58:49.317450 containerd[1637]: time="2026-01-14T00:58:49.309450018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"afea32f3a6b86e273791a9fdbc2ebd2572f8b963041b0c4d8179ea092ae610fa\"" Jan 14 00:58:49.368424 kubelet[2579]: E0114 00:58:49.362137 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:58:49.394433 kubelet[2579]: E0114 00:58:49.385328 2579 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 00:58:49.562952 
containerd[1637]: time="2026-01-14T00:58:49.562361726Z" level=info msg="CreateContainer within sandbox \"3820962ec5d3e9d780ba9a34b6482a291a5bfd38d300d05d6593d4125f6f050f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"663e11ac0a3bda496ac26286b0a13f6fe99c9984ef5d5a3bee56a5dabdd58359\"" Jan 14 00:58:49.675037 containerd[1637]: time="2026-01-14T00:58:49.669301915Z" level=info msg="StartContainer for \"663e11ac0a3bda496ac26286b0a13f6fe99c9984ef5d5a3bee56a5dabdd58359\"" Jan 14 00:58:49.677101 containerd[1637]: time="2026-01-14T00:58:49.671339655Z" level=info msg="CreateContainer within sandbox \"afea32f3a6b86e273791a9fdbc2ebd2572f8b963041b0c4d8179ea092ae610fa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 00:58:49.688975 containerd[1637]: time="2026-01-14T00:58:49.688340234Z" level=info msg="connecting to shim 663e11ac0a3bda496ac26286b0a13f6fe99c9984ef5d5a3bee56a5dabdd58359" address="unix:///run/containerd/s/58d1d4655d5ceeea67ad5c318916a436203bb2aaf4de6bdedd9849e1195ea123" protocol=ttrpc version=3 Jan 14 00:58:49.743294 kubelet[2579]: I0114 00:58:49.742005 2579 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 00:58:49.743294 kubelet[2579]: E0114 00:58:49.742899 2579 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Jan 14 00:58:49.769859 containerd[1637]: time="2026-01-14T00:58:49.769804140Z" level=info msg="Container 4c6c25bf7b891d9f7f329bc89576921d53fb765628c87ef2dbbdc8f2f4275a42: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:58:49.839153 containerd[1637]: time="2026-01-14T00:58:49.839101606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6061e99ba5868730e624ac4fc598fefe,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"30210244187778a3ef62b4d76f5b637e566efdf8a3ca8c6d83566dd69f313e2a\"" Jan 14 00:58:49.854818 kubelet[2579]: E0114 00:58:49.854781 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:58:49.855378 containerd[1637]: time="2026-01-14T00:58:49.855345754Z" level=info msg="CreateContainer within sandbox \"afea32f3a6b86e273791a9fdbc2ebd2572f8b963041b0c4d8179ea092ae610fa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4c6c25bf7b891d9f7f329bc89576921d53fb765628c87ef2dbbdc8f2f4275a42\"" Jan 14 00:58:49.861377 containerd[1637]: time="2026-01-14T00:58:49.861344036Z" level=info msg="StartContainer for \"4c6c25bf7b891d9f7f329bc89576921d53fb765628c87ef2dbbdc8f2f4275a42\"" Jan 14 00:58:49.874360 containerd[1637]: time="2026-01-14T00:58:49.874318096Z" level=info msg="CreateContainer within sandbox \"30210244187778a3ef62b4d76f5b637e566efdf8a3ca8c6d83566dd69f313e2a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 00:58:49.892064 containerd[1637]: time="2026-01-14T00:58:49.890457619Z" level=info msg="connecting to shim 4c6c25bf7b891d9f7f329bc89576921d53fb765628c87ef2dbbdc8f2f4275a42" address="unix:///run/containerd/s/47727cfc973f1bc7d5e871567c6ed4bd32a193f01a525b3d0e5f468a9aae472d" protocol=ttrpc version=3 Jan 14 00:58:50.023016 containerd[1637]: time="2026-01-14T00:58:50.020240612Z" level=info msg="Container 187991b88724e28dc3552f6899c5a5ec44a99fe0c285fc9433c12a28951ab3b5: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:58:50.023146 systemd[1]: Started cri-containerd-663e11ac0a3bda496ac26286b0a13f6fe99c9984ef5d5a3bee56a5dabdd58359.scope - libcontainer container 663e11ac0a3bda496ac26286b0a13f6fe99c9984ef5d5a3bee56a5dabdd58359. 
Jan 14 00:58:50.090069 containerd[1637]: time="2026-01-14T00:58:50.090011516Z" level=info msg="CreateContainer within sandbox \"30210244187778a3ef62b4d76f5b637e566efdf8a3ca8c6d83566dd69f313e2a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"187991b88724e28dc3552f6899c5a5ec44a99fe0c285fc9433c12a28951ab3b5\"" Jan 14 00:58:50.099975 containerd[1637]: time="2026-01-14T00:58:50.099909606Z" level=info msg="StartContainer for \"187991b88724e28dc3552f6899c5a5ec44a99fe0c285fc9433c12a28951ab3b5\"" Jan 14 00:58:50.117980 containerd[1637]: time="2026-01-14T00:58:50.116261996Z" level=info msg="connecting to shim 187991b88724e28dc3552f6899c5a5ec44a99fe0c285fc9433c12a28951ab3b5" address="unix:///run/containerd/s/f10b398e908ea42dea02587c27404607f788423892e7c6f43d4d1ac206cd5f1c" protocol=ttrpc version=3 Jan 14 00:58:50.124065 systemd[1]: Started cri-containerd-4c6c25bf7b891d9f7f329bc89576921d53fb765628c87ef2dbbdc8f2f4275a42.scope - libcontainer container 4c6c25bf7b891d9f7f329bc89576921d53fb765628c87ef2dbbdc8f2f4275a42. Jan 14 00:58:50.578318 systemd[1]: Started cri-containerd-187991b88724e28dc3552f6899c5a5ec44a99fe0c285fc9433c12a28951ab3b5.scope - libcontainer container 187991b88724e28dc3552f6899c5a5ec44a99fe0c285fc9433c12a28951ab3b5. 
Jan 14 00:58:51.285168 kubelet[2579]: E0114 00:58:51.280143 2579 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 00:58:51.590890 containerd[1637]: time="2026-01-14T00:58:51.573979365Z" level=info msg="StartContainer for \"663e11ac0a3bda496ac26286b0a13f6fe99c9984ef5d5a3bee56a5dabdd58359\" returns successfully" Jan 14 00:58:52.088191 containerd[1637]: time="2026-01-14T00:58:52.081356646Z" level=info msg="StartContainer for \"4c6c25bf7b891d9f7f329bc89576921d53fb765628c87ef2dbbdc8f2f4275a42\" returns successfully" Jan 14 00:58:52.206102 containerd[1637]: time="2026-01-14T00:58:52.205440647Z" level=info msg="StartContainer for \"187991b88724e28dc3552f6899c5a5ec44a99fe0c285fc9433c12a28951ab3b5\" returns successfully" Jan 14 00:58:52.374959 kubelet[2579]: E0114 00:58:52.374925 2579 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:58:52.377279 kubelet[2579]: E0114 00:58:52.376000 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:58:52.377279 kubelet[2579]: E0114 00:58:52.377021 2579 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:58:52.377279 kubelet[2579]: E0114 00:58:52.377136 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:58:52.399951 kubelet[2579]: E0114 00:58:52.399424 2579 kubelet.go:3305] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:58:52.407331 kubelet[2579]: E0114 00:58:52.407275 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:58:52.490004 kubelet[2579]: E0114 00:58:52.489109 2579 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 00:58:53.198988 kubelet[2579]: E0114 00:58:53.198349 2579 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 14 00:58:53.407886 kubelet[2579]: E0114 00:58:53.405271 2579 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:58:53.414294 kubelet[2579]: E0114 00:58:53.409940 2579 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:58:53.414294 kubelet[2579]: E0114 00:58:53.412859 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:58:53.414294 kubelet[2579]: E0114 00:58:53.412948 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:58:53.423188 update_engine[1614]: I20260114 00:58:53.416859 1614 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 
00:58:53.423188 update_engine[1614]: I20260114 00:58:53.416960 1614 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 00:58:53.424302 kubelet[2579]: E0114 00:58:53.417936 2579 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:58:53.424302 kubelet[2579]: E0114 00:58:53.418969 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:58:53.427181 update_engine[1614]: I20260114 00:58:53.427138 1614 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 14 00:58:53.438866 update_engine[1614]: E20260114 00:58:53.437343 1614 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 00:58:53.439092 update_engine[1614]: I20260114 00:58:53.439033 1614 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 14 00:58:54.427265 kubelet[2579]: E0114 00:58:54.427107 2579 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:58:54.428445 kubelet[2579]: E0114 00:58:54.427432 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:58:54.442169 kubelet[2579]: E0114 00:58:54.442032 2579 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:58:54.442440 kubelet[2579]: E0114 00:58:54.442298 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:58:55.443235 kubelet[2579]: E0114 
00:58:55.442988 2579 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:58:55.443235 kubelet[2579]: E0114 00:58:55.443150 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:58:56.156059 kubelet[2579]: I0114 00:58:56.149145 2579 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 00:58:56.855219 kubelet[2579]: E0114 00:58:56.855154 2579 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 00:58:56.857028 kubelet[2579]: E0114 00:58:56.856404 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:58:58.677813 kubelet[2579]: E0114 00:58:58.677768 2579 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 14 00:58:58.789241 kubelet[2579]: E0114 00:58:58.788163 2579 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188a73115a4e8938 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-14 00:58:42.06307564 +0000 UTC m=+3.244827029,LastTimestamp:2026-01-14 00:58:42.06307564 +0000 UTC m=+3.244827029,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 14 
00:58:58.873892 kubelet[2579]: I0114 00:58:58.873850 2579 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 14 00:58:58.874334 kubelet[2579]: E0114 00:58:58.874056 2579 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 14 00:58:58.948197 kubelet[2579]: E0114 00:58:58.947205 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:58:59.050336 kubelet[2579]: E0114 00:58:59.050277 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:58:59.151444 kubelet[2579]: E0114 00:58:59.151142 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:58:59.254241 kubelet[2579]: E0114 00:58:59.254103 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:58:59.358082 kubelet[2579]: E0114 00:58:59.358021 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:58:59.459139 kubelet[2579]: E0114 00:58:59.459092 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:58:59.561736 kubelet[2579]: E0114 00:58:59.561008 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:58:59.661902 kubelet[2579]: E0114 00:58:59.661848 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:58:59.764124 kubelet[2579]: E0114 00:58:59.763064 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:58:59.869809 kubelet[2579]: E0114 00:58:59.869764 2579 kubelet_node_status.go:466] "Error 
getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:58:59.971036 kubelet[2579]: E0114 00:58:59.970997 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:59:00.072058 kubelet[2579]: E0114 00:59:00.072014 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:59:00.180450 kubelet[2579]: E0114 00:59:00.174029 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:59:00.277132 kubelet[2579]: E0114 00:59:00.276312 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:59:00.379409 kubelet[2579]: E0114 00:59:00.379345 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:59:00.480901 kubelet[2579]: E0114 00:59:00.480059 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:59:00.580848 kubelet[2579]: E0114 00:59:00.580796 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:59:00.681121 kubelet[2579]: E0114 00:59:00.681066 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:59:00.782320 kubelet[2579]: E0114 00:59:00.781233 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:59:00.884442 kubelet[2579]: E0114 00:59:00.883955 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:59:00.995442 kubelet[2579]: E0114 00:59:00.993323 2579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 00:59:01.117046 
kubelet[2579]: I0114 00:59:01.117016 2579 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 00:59:01.175343 kubelet[2579]: I0114 00:59:01.175298 2579 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 00:59:01.215398 kubelet[2579]: I0114 00:59:01.215358 2579 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 14 00:59:01.739213 kubelet[2579]: I0114 00:59:01.739180 2579 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 00:59:01.804095 kubelet[2579]: E0114 00:59:01.800398 2579 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 14 00:59:01.808008 kubelet[2579]: E0114 00:59:01.807061 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:59:02.007317 kubelet[2579]: I0114 00:59:02.005011 2579 apiserver.go:52] "Watching apiserver" Jan 14 00:59:02.035976 kubelet[2579]: E0114 00:59:02.035241 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:59:02.036966 kubelet[2579]: E0114 00:59:02.036451 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:59:02.098425 kubelet[2579]: I0114 00:59:02.098392 2579 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 00:59:02.549950 kubelet[2579]: E0114 00:59:02.549234 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:59:02.800426 kubelet[2579]: I0114 00:59:02.795454 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.795301297 podStartE2EDuration="1.795301297s" podCreationTimestamp="2026-01-14 00:59:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:59:02.793343421 +0000 UTC m=+23.975094820" watchObservedRunningTime="2026-01-14 00:59:02.795301297 +0000 UTC m=+23.977052697" Jan 14 00:59:02.883258 kubelet[2579]: I0114 00:59:02.883090 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.883069768 podStartE2EDuration="1.883069768s" podCreationTimestamp="2026-01-14 00:59:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:59:02.839077925 +0000 UTC m=+24.020829303" watchObservedRunningTime="2026-01-14 00:59:02.883069768 +0000 UTC m=+24.064821147" Jan 14 00:59:03.428177 update_engine[1614]: I20260114 00:59:03.427129 1614 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 00:59:03.428177 update_engine[1614]: I20260114 00:59:03.427214 1614 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 00:59:03.428177 update_engine[1614]: I20260114 00:59:03.428071 1614 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 00:59:03.447944 update_engine[1614]: E20260114 00:59:03.447157 1614 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 00:59:03.447944 update_engine[1614]: I20260114 00:59:03.447245 1614 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 14 00:59:03.447944 update_engine[1614]: I20260114 00:59:03.447256 1614 omaha_request_action.cc:617] Omaha request response: Jan 14 00:59:03.447944 update_engine[1614]: E20260114 00:59:03.447343 1614 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 14 00:59:03.450118 update_engine[1614]: I20260114 00:59:03.447999 1614 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 14 00:59:03.450118 update_engine[1614]: I20260114 00:59:03.448010 1614 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 00:59:03.450118 update_engine[1614]: I20260114 00:59:03.448016 1614 update_attempter.cc:306] Processing Done. Jan 14 00:59:03.450118 update_engine[1614]: E20260114 00:59:03.448030 1614 update_attempter.cc:619] Update failed. Jan 14 00:59:03.450118 update_engine[1614]: I20260114 00:59:03.448037 1614 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 14 00:59:03.450118 update_engine[1614]: I20260114 00:59:03.448043 1614 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 14 00:59:03.450118 update_engine[1614]: I20260114 00:59:03.448050 1614 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 14 00:59:03.450118 update_engine[1614]: I20260114 00:59:03.448242 1614 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 14 00:59:03.450118 update_engine[1614]: I20260114 00:59:03.448275 1614 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 14 00:59:03.450118 update_engine[1614]: I20260114 00:59:03.448287 1614 omaha_request_action.cc:272] Request: Jan 14 00:59:03.450118 update_engine[1614]: Jan 14 00:59:03.450118 update_engine[1614]: Jan 14 00:59:03.450118 update_engine[1614]: Jan 14 00:59:03.450118 update_engine[1614]: Jan 14 00:59:03.450118 update_engine[1614]: Jan 14 00:59:03.450118 update_engine[1614]: Jan 14 00:59:03.450118 update_engine[1614]: I20260114 00:59:03.448298 1614 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 00:59:03.450118 update_engine[1614]: I20260114 00:59:03.448329 1614 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 00:59:03.457355 update_engine[1614]: I20260114 00:59:03.455044 1614 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 00:59:03.464099 locksmithd[1683]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 14 00:59:03.473430 update_engine[1614]: E20260114 00:59:03.473045 1614 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 00:59:03.473430 update_engine[1614]: I20260114 00:59:03.473145 1614 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 14 00:59:03.473430 update_engine[1614]: I20260114 00:59:03.473161 1614 omaha_request_action.cc:617] Omaha request response: Jan 14 00:59:03.473430 update_engine[1614]: I20260114 00:59:03.473173 1614 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 00:59:03.473430 update_engine[1614]: I20260114 00:59:03.473181 1614 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 00:59:03.473430 update_engine[1614]: I20260114 00:59:03.473191 1614 update_attempter.cc:306] Processing Done. Jan 14 00:59:03.473430 update_engine[1614]: I20260114 00:59:03.473201 1614 update_attempter.cc:310] Error event sent. Jan 14 00:59:03.473430 update_engine[1614]: I20260114 00:59:03.473212 1614 update_check_scheduler.cc:74] Next update check in 40m33s Jan 14 00:59:03.476186 locksmithd[1683]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 14 00:59:04.990062 systemd[1]: Reload requested from client PID 2868 ('systemctl') (unit session-8.scope)... Jan 14 00:59:04.990236 systemd[1]: Reloading... 
Jan 14 00:59:05.016117 kubelet[2579]: E0114 00:59:05.015118 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:59:05.114975 kubelet[2579]: I0114 00:59:05.114374 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.114356401 podStartE2EDuration="4.114356401s" podCreationTimestamp="2026-01-14 00:59:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:59:02.887037443 +0000 UTC m=+24.068788842" watchObservedRunningTime="2026-01-14 00:59:05.114356401 +0000 UTC m=+26.296107780" Jan 14 00:59:05.490976 zram_generator::config[2914]: No configuration found. Jan 14 00:59:09.426383 systemd[1]: Reloading finished in 4420 ms. Jan 14 00:59:10.200341 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:59:10.349066 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 00:59:10.351106 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:59:10.351323 systemd[1]: kubelet.service: Consumed 8.575s CPU time, 133.8M memory peak. Jan 14 00:59:10.372002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:59:26.628063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:59:26.679285 (kubelet)[2960]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 00:59:27.386256 kubelet[2960]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 14 00:59:27.388090 kubelet[2960]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 00:59:27.388090 kubelet[2960]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 00:59:27.391076 kubelet[2960]: I0114 00:59:27.391032 2960 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 00:59:27.521053 kubelet[2960]: I0114 00:59:27.520084 2960 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 00:59:27.521053 kubelet[2960]: I0114 00:59:27.520267 2960 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 00:59:27.522015 kubelet[2960]: I0114 00:59:27.521291 2960 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 00:59:27.538216 kubelet[2960]: I0114 00:59:27.538187 2960 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 14 00:59:27.826063 kubelet[2960]: I0114 00:59:27.821405 2960 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 00:59:27.953136 kubelet[2960]: I0114 00:59:27.953094 2960 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 00:59:28.031368 kubelet[2960]: I0114 00:59:28.031330 2960 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 00:59:28.044409 kubelet[2960]: I0114 00:59:28.044369 2960 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 00:59:28.046005 kubelet[2960]: I0114 00:59:28.045045 2960 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 00:59:28.049177 kubelet[2960]: I0114 00:59:28.047892 2960 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 00:59:28.049177 
kubelet[2960]: I0114 00:59:28.047908 2960 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 00:59:28.049177 kubelet[2960]: I0114 00:59:28.048281 2960 state_mem.go:36] "Initialized new in-memory state store" Jan 14 00:59:28.059452 kubelet[2960]: I0114 00:59:28.059429 2960 kubelet.go:480] "Attempting to sync node with API server" Jan 14 00:59:28.060159 kubelet[2960]: I0114 00:59:28.060143 2960 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 00:59:28.063201 kubelet[2960]: I0114 00:59:28.063182 2960 kubelet.go:386] "Adding apiserver pod source" Jan 14 00:59:28.063325 kubelet[2960]: I0114 00:59:28.063309 2960 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 00:59:28.099045 kubelet[2960]: I0114 00:59:28.095393 2960 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 00:59:28.126053 kubelet[2960]: I0114 00:59:28.126017 2960 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 00:59:28.343406 kubelet[2960]: I0114 00:59:28.343366 2960 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 00:59:28.346221 kubelet[2960]: I0114 00:59:28.346204 2960 server.go:1289] "Started kubelet" Jan 14 00:59:28.387163 kubelet[2960]: I0114 00:59:28.350259 2960 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 00:59:28.402335 kubelet[2960]: I0114 00:59:28.381029 2960 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 00:59:28.402335 kubelet[2960]: I0114 00:59:28.402070 2960 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 00:59:28.406447 kubelet[2960]: I0114 00:59:28.404272 2960 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 00:59:28.415028 
kubelet[2960]: I0114 00:59:28.411289 2960 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 00:59:28.415028 kubelet[2960]: I0114 00:59:28.414184 2960 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 00:59:28.435048 kubelet[2960]: I0114 00:59:28.432104 2960 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 00:59:28.435240 kubelet[2960]: I0114 00:59:28.435224 2960 reconciler.go:26] "Reconciler: start to sync state" Jan 14 00:59:28.485229 kubelet[2960]: I0114 00:59:28.485191 2960 factory.go:223] Registration of the systemd container factory successfully Jan 14 00:59:28.536138 kubelet[2960]: I0114 00:59:28.491235 2960 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 00:59:28.550068 kubelet[2960]: I0114 00:59:28.547135 2960 server.go:317] "Adding debug handlers to kubelet server" Jan 14 00:59:28.550215 kubelet[2960]: E0114 00:59:28.550178 2960 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 00:59:28.562222 kubelet[2960]: I0114 00:59:28.561235 2960 factory.go:223] Registration of the containerd container factory successfully Jan 14 00:59:29.067199 kubelet[2960]: I0114 00:59:29.067172 2960 apiserver.go:52] "Watching apiserver" Jan 14 00:59:29.611296 kubelet[2960]: I0114 00:59:29.592400 2960 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 14 00:59:29.655081 kubelet[2960]: I0114 00:59:29.650397 2960 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 00:59:29.655081 kubelet[2960]: I0114 00:59:29.650423 2960 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 00:59:29.655081 kubelet[2960]: I0114 00:59:29.650446 2960 state_mem.go:36] "Initialized new in-memory state store" Jan 14 00:59:29.655081 kubelet[2960]: I0114 00:59:29.651181 2960 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 00:59:29.655081 kubelet[2960]: I0114 00:59:29.651196 2960 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 00:59:29.655081 kubelet[2960]: I0114 00:59:29.651222 2960 policy_none.go:49] "None policy: Start" Jan 14 00:59:29.655081 kubelet[2960]: I0114 00:59:29.652054 2960 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 00:59:29.655081 kubelet[2960]: I0114 00:59:29.652132 2960 state_mem.go:35] "Initializing new in-memory state store" Jan 14 00:59:29.655081 kubelet[2960]: I0114 00:59:29.652260 2960 state_mem.go:75] "Updated machine memory state" Jan 14 00:59:29.679175 kubelet[2960]: I0114 00:59:29.679137 2960 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 14 00:59:29.679388 kubelet[2960]: I0114 00:59:29.679373 2960 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 00:59:29.680083 kubelet[2960]: I0114 00:59:29.680059 2960 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 14 00:59:29.680321 kubelet[2960]: I0114 00:59:29.680304 2960 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 00:59:29.683271 kubelet[2960]: E0114 00:59:29.681194 2960 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 00:59:29.789101 kubelet[2960]: E0114 00:59:29.789065 2960 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 14 00:59:29.802858 kubelet[2960]: E0114 00:59:29.793922 2960 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 00:59:29.803360 kubelet[2960]: I0114 00:59:29.803338 2960 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 00:59:29.808182 kubelet[2960]: I0114 00:59:29.808127 2960 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 00:59:29.823292 kubelet[2960]: E0114 00:59:29.823264 2960 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 00:59:29.834035 kubelet[2960]: I0114 00:59:29.833273 2960 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 00:59:29.853995 kubelet[2960]: I0114 00:59:29.852357 2960 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 00:59:29.879118 containerd[1637]: time="2026-01-14T00:59:29.873847116Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 14 00:59:29.897425 kubelet[2960]: I0114 00:59:29.887145 2960 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 00:59:30.063003 kubelet[2960]: I0114 00:59:30.059050 2960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6061e99ba5868730e624ac4fc598fefe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6061e99ba5868730e624ac4fc598fefe\") " pod="kube-system/kube-apiserver-localhost" Jan 14 00:59:30.066186 kubelet[2960]: I0114 00:59:30.066156 2960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6061e99ba5868730e624ac4fc598fefe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6061e99ba5868730e624ac4fc598fefe\") " pod="kube-system/kube-apiserver-localhost" Jan 14 00:59:30.074014 kubelet[2960]: I0114 00:59:30.068081 2960 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 00:59:30.074014 kubelet[2960]: I0114 00:59:30.063205 2960 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 00:59:30.143308 kubelet[2960]: I0114 00:59:30.142325 2960 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 00:59:30.169434 kubelet[2960]: I0114 00:59:30.169392 2960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 14 00:59:30.182351 kubelet[2960]: I0114 00:59:30.182325 2960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/eeaba8c1-0b32-4c39-b720-05e5292fbbd0-kube-proxy\") pod \"kube-proxy-2z6gb\" (UID: \"eeaba8c1-0b32-4c39-b720-05e5292fbbd0\") " pod="kube-system/kube-proxy-2z6gb" Jan 14 00:59:30.183203 kubelet[2960]: I0114 00:59:30.183173 2960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 00:59:30.187402 kubelet[2960]: I0114 00:59:30.187373 2960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eeaba8c1-0b32-4c39-b720-05e5292fbbd0-xtables-lock\") pod \"kube-proxy-2z6gb\" (UID: \"eeaba8c1-0b32-4c39-b720-05e5292fbbd0\") " pod="kube-system/kube-proxy-2z6gb" Jan 14 00:59:30.194300 kubelet[2960]: I0114 00:59:30.193072 2960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eeaba8c1-0b32-4c39-b720-05e5292fbbd0-lib-modules\") pod \"kube-proxy-2z6gb\" (UID: \"eeaba8c1-0b32-4c39-b720-05e5292fbbd0\") " pod="kube-system/kube-proxy-2z6gb" Jan 14 00:59:30.194300 kubelet[2960]: I0114 00:59:30.193264 2960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w86f\" (UniqueName: \"kubernetes.io/projected/eeaba8c1-0b32-4c39-b720-05e5292fbbd0-kube-api-access-2w86f\") pod \"kube-proxy-2z6gb\" (UID: \"eeaba8c1-0b32-4c39-b720-05e5292fbbd0\") " pod="kube-system/kube-proxy-2z6gb" Jan 14 00:59:30.194300 kubelet[2960]: I0114 00:59:30.193293 2960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/6061e99ba5868730e624ac4fc598fefe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6061e99ba5868730e624ac4fc598fefe\") " pod="kube-system/kube-apiserver-localhost" Jan 14 00:59:30.194300 kubelet[2960]: I0114 00:59:30.193309 2960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 00:59:30.194300 kubelet[2960]: I0114 00:59:30.193326 2960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 00:59:30.197188 kubelet[2960]: I0114 00:59:30.193351 2960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 00:59:30.197188 kubelet[2960]: I0114 00:59:30.193373 2960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 00:59:30.205129 systemd[1]: Created slice kubepods-besteffort-podeeaba8c1_0b32_4c39_b720_05e5292fbbd0.slice - libcontainer container 
kubepods-besteffort-podeeaba8c1_0b32_4c39_b720_05e5292fbbd0.slice. Jan 14 00:59:30.440976 kubelet[2960]: E0114 00:59:30.428380 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:59:30.440976 kubelet[2960]: E0114 00:59:30.433131 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:59:30.508305 kubelet[2960]: E0114 00:59:30.508261 2960 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 14 00:59:30.516224 kubelet[2960]: E0114 00:59:30.516196 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:59:30.557260 kubelet[2960]: I0114 00:59:30.545451 2960 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 14 00:59:30.567040 kubelet[2960]: I0114 00:59:30.556454 2960 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 14 00:59:30.970952 kubelet[2960]: E0114 00:59:30.970244 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:59:31.003449 containerd[1637]: time="2026-01-14T00:59:31.003252741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2z6gb,Uid:eeaba8c1-0b32-4c39-b720-05e5292fbbd0,Namespace:kube-system,Attempt:0,}" Jan 14 00:59:31.134457 kubelet[2960]: E0114 00:59:31.131269 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:59:31.164368 
kubelet[2960]: E0114 00:59:31.157398 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:59:31.164368 kubelet[2960]: E0114 00:59:31.159178 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:59:31.664814 containerd[1637]: time="2026-01-14T00:59:31.663434371Z" level=info msg="connecting to shim 5e143a24e577a28edf78ca59f33caf2d56956aca7a528292a6c9d296543e8983" address="unix:///run/containerd/s/acc84f17b05eb9ed8202b17236825a64f565aef97374bb80f846a282d18c0899" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:59:31.984330 systemd[1]: Started cri-containerd-5e143a24e577a28edf78ca59f33caf2d56956aca7a528292a6c9d296543e8983.scope - libcontainer container 5e143a24e577a28edf78ca59f33caf2d56956aca7a528292a6c9d296543e8983. Jan 14 00:59:32.110717 kubelet[2960]: E0114 00:59:32.110132 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:59:32.114364 kubelet[2960]: E0114 00:59:32.114250 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:59:32.119716 kubelet[2960]: E0114 00:59:32.119076 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:59:32.187870 containerd[1637]: time="2026-01-14T00:59:32.187423614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2z6gb,Uid:eeaba8c1-0b32-4c39-b720-05e5292fbbd0,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"5e143a24e577a28edf78ca59f33caf2d56956aca7a528292a6c9d296543e8983\"" Jan 14 00:59:32.194071 kubelet[2960]: E0114 00:59:32.194039 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:59:32.225131 containerd[1637]: time="2026-01-14T00:59:32.225000047Z" level=info msg="CreateContainer within sandbox \"5e143a24e577a28edf78ca59f33caf2d56956aca7a528292a6c9d296543e8983\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 00:59:32.297301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1849400994.mount: Deactivated successfully. Jan 14 00:59:32.305803 containerd[1637]: time="2026-01-14T00:59:32.304110571Z" level=info msg="Container 9731d694b782d4025c62b459f44a074ee96ac3e86c45ed26795c06adb4618384: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:59:32.307899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1888118654.mount: Deactivated successfully. 
Jan 14 00:59:32.343424 containerd[1637]: time="2026-01-14T00:59:32.343371969Z" level=info msg="CreateContainer within sandbox \"5e143a24e577a28edf78ca59f33caf2d56956aca7a528292a6c9d296543e8983\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9731d694b782d4025c62b459f44a074ee96ac3e86c45ed26795c06adb4618384\"" Jan 14 00:59:32.346339 containerd[1637]: time="2026-01-14T00:59:32.346198718Z" level=info msg="StartContainer for \"9731d694b782d4025c62b459f44a074ee96ac3e86c45ed26795c06adb4618384\"" Jan 14 00:59:32.358749 containerd[1637]: time="2026-01-14T00:59:32.358440387Z" level=info msg="connecting to shim 9731d694b782d4025c62b459f44a074ee96ac3e86c45ed26795c06adb4618384" address="unix:///run/containerd/s/acc84f17b05eb9ed8202b17236825a64f565aef97374bb80f846a282d18c0899" protocol=ttrpc version=3 Jan 14 00:59:32.456178 systemd[1]: Started cri-containerd-9731d694b782d4025c62b459f44a074ee96ac3e86c45ed26795c06adb4618384.scope - libcontainer container 9731d694b782d4025c62b459f44a074ee96ac3e86c45ed26795c06adb4618384. Jan 14 00:59:32.750300 containerd[1637]: time="2026-01-14T00:59:32.750156338Z" level=info msg="StartContainer for \"9731d694b782d4025c62b459f44a074ee96ac3e86c45ed26795c06adb4618384\" returns successfully" Jan 14 00:59:33.015880 systemd[1]: Created slice kubepods-burstable-pod71d773c5_7cc7_42d1_beda_a0291d107216.slice - libcontainer container kubepods-burstable-pod71d773c5_7cc7_42d1_beda_a0291d107216.slice. 
Jan 14 00:59:33.107713 kubelet[2960]: I0114 00:59:33.107396 2960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71d773c5-7cc7-42d1-beda-a0291d107216-xtables-lock\") pod \"kube-flannel-ds-hqvns\" (UID: \"71d773c5-7cc7-42d1-beda-a0291d107216\") " pod="kube-flannel/kube-flannel-ds-hqvns" Jan 14 00:59:33.109968 kubelet[2960]: I0114 00:59:33.107459 2960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl62v\" (UniqueName: \"kubernetes.io/projected/71d773c5-7cc7-42d1-beda-a0291d107216-kube-api-access-jl62v\") pod \"kube-flannel-ds-hqvns\" (UID: \"71d773c5-7cc7-42d1-beda-a0291d107216\") " pod="kube-flannel/kube-flannel-ds-hqvns" Jan 14 00:59:33.111777 kubelet[2960]: I0114 00:59:33.110293 2960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/71d773c5-7cc7-42d1-beda-a0291d107216-run\") pod \"kube-flannel-ds-hqvns\" (UID: \"71d773c5-7cc7-42d1-beda-a0291d107216\") " pod="kube-flannel/kube-flannel-ds-hqvns" Jan 14 00:59:33.111777 kubelet[2960]: I0114 00:59:33.110338 2960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/71d773c5-7cc7-42d1-beda-a0291d107216-cni-plugin\") pod \"kube-flannel-ds-hqvns\" (UID: \"71d773c5-7cc7-42d1-beda-a0291d107216\") " pod="kube-flannel/kube-flannel-ds-hqvns" Jan 14 00:59:33.111777 kubelet[2960]: I0114 00:59:33.110365 2960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/71d773c5-7cc7-42d1-beda-a0291d107216-cni\") pod \"kube-flannel-ds-hqvns\" (UID: \"71d773c5-7cc7-42d1-beda-a0291d107216\") " pod="kube-flannel/kube-flannel-ds-hqvns" Jan 14 00:59:33.111777 kubelet[2960]: I0114 00:59:33.110387 2960 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/71d773c5-7cc7-42d1-beda-a0291d107216-flannel-cfg\") pod \"kube-flannel-ds-hqvns\" (UID: \"71d773c5-7cc7-42d1-beda-a0291d107216\") " pod="kube-flannel/kube-flannel-ds-hqvns" Jan 14 00:59:33.135588 kubelet[2960]: E0114 00:59:33.135127 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:59:33.139223 kubelet[2960]: E0114 00:59:33.138451 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:59:33.148756 kubelet[2960]: E0114 00:59:33.142316 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:59:33.341452 kubelet[2960]: E0114 00:59:33.341001 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:59:33.346226 containerd[1637]: time="2026-01-14T00:59:33.345836112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-hqvns,Uid:71d773c5-7cc7-42d1-beda-a0291d107216,Namespace:kube-flannel,Attempt:0,}" Jan 14 00:59:33.440728 containerd[1637]: time="2026-01-14T00:59:33.440316364Z" level=info msg="connecting to shim de743de396135eb17a36ece952fcece18572c3a0c581dcddfaffad3b0b89e4de" address="unix:///run/containerd/s/ba09094fed8716e69f58547a4938149ad977b42ab0a84a7b7b4c3632cbf98b8e" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:59:33.488110 sudo[1817]: pam_unix(sudo:session): session closed for user root Jan 14 00:59:33.498461 sshd[1816]: Connection closed by 10.0.0.1 port 59266 Jan 
14 00:59:33.501805 sshd-session[1812]: pam_unix(sshd:session): session closed for user core Jan 14 00:59:33.515259 systemd[1]: sshd@6-10.0.0.53:22-10.0.0.1:59266.service: Deactivated successfully. Jan 14 00:59:33.528393 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 00:59:33.529189 systemd[1]: session-8.scope: Consumed 28.515s CPU time, 210.2M memory peak. Jan 14 00:59:33.541036 systemd-logind[1605]: Session 8 logged out. Waiting for processes to exit. Jan 14 00:59:33.550217 systemd-logind[1605]: Removed session 8. Jan 14 00:59:33.580114 systemd[1]: Started cri-containerd-de743de396135eb17a36ece952fcece18572c3a0c581dcddfaffad3b0b89e4de.scope - libcontainer container de743de396135eb17a36ece952fcece18572c3a0c581dcddfaffad3b0b89e4de. Jan 14 00:59:33.823768 containerd[1637]: time="2026-01-14T00:59:33.823205314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-hqvns,Uid:71d773c5-7cc7-42d1-beda-a0291d107216,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"de743de396135eb17a36ece952fcece18572c3a0c581dcddfaffad3b0b89e4de\"" Jan 14 00:59:33.826388 kubelet[2960]: E0114 00:59:33.825853 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:59:33.841591 containerd[1637]: time="2026-01-14T00:59:33.841234226Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Jan 14 00:59:34.183996 kubelet[2960]: E0114 00:59:34.183296 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 00:59:34.929205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3748258611.mount: Deactivated successfully. 
Jan 14 00:59:35.078754 containerd[1637]: time="2026-01-14T00:59:35.078354513Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:59:35.082199 containerd[1637]: time="2026-01-14T00:59:35.082164557Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=3641610"
Jan 14 00:59:35.087277 containerd[1637]: time="2026-01-14T00:59:35.086797524Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:59:35.095337 containerd[1637]: time="2026-01-14T00:59:35.094921644Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:59:35.097023 containerd[1637]: time="2026-01-14T00:59:35.096359536Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 1.255080205s"
Jan 14 00:59:35.097023 containerd[1637]: time="2026-01-14T00:59:35.096387017Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\""
Jan 14 00:59:35.109242 containerd[1637]: time="2026-01-14T00:59:35.109121431Z" level=info msg="CreateContainer within sandbox \"de743de396135eb17a36ece952fcece18572c3a0c581dcddfaffad3b0b89e4de\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Jan 14 00:59:35.137271 containerd[1637]: time="2026-01-14T00:59:35.135938527Z" level=info msg="Container febadf8b6383572ab3c64bf5ccb5cd4abc31b5a9f6f1f985cbf11411bf1e3393: CDI devices from CRI Config.CDIDevices: []"
Jan 14 00:59:35.156572 containerd[1637]: time="2026-01-14T00:59:35.156182202Z" level=info msg="CreateContainer within sandbox \"de743de396135eb17a36ece952fcece18572c3a0c581dcddfaffad3b0b89e4de\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"febadf8b6383572ab3c64bf5ccb5cd4abc31b5a9f6f1f985cbf11411bf1e3393\""
Jan 14 00:59:35.161309 containerd[1637]: time="2026-01-14T00:59:35.161012132Z" level=info msg="StartContainer for \"febadf8b6383572ab3c64bf5ccb5cd4abc31b5a9f6f1f985cbf11411bf1e3393\""
Jan 14 00:59:35.163862 containerd[1637]: time="2026-01-14T00:59:35.163832703Z" level=info msg="connecting to shim febadf8b6383572ab3c64bf5ccb5cd4abc31b5a9f6f1f985cbf11411bf1e3393" address="unix:///run/containerd/s/ba09094fed8716e69f58547a4938149ad977b42ab0a84a7b7b4c3632cbf98b8e" protocol=ttrpc version=3
Jan 14 00:59:35.239304 systemd[1]: Started cri-containerd-febadf8b6383572ab3c64bf5ccb5cd4abc31b5a9f6f1f985cbf11411bf1e3393.scope - libcontainer container febadf8b6383572ab3c64bf5ccb5cd4abc31b5a9f6f1f985cbf11411bf1e3393.
Jan 14 00:59:35.389898 containerd[1637]: time="2026-01-14T00:59:35.388620228Z" level=info msg="StartContainer for \"febadf8b6383572ab3c64bf5ccb5cd4abc31b5a9f6f1f985cbf11411bf1e3393\" returns successfully"
Jan 14 00:59:35.388948 systemd[1]: cri-containerd-febadf8b6383572ab3c64bf5ccb5cd4abc31b5a9f6f1f985cbf11411bf1e3393.scope: Deactivated successfully.
Jan 14 00:59:35.397335 containerd[1637]: time="2026-01-14T00:59:35.396890079Z" level=info msg="received container exit event container_id:\"febadf8b6383572ab3c64bf5ccb5cd4abc31b5a9f6f1f985cbf11411bf1e3393\" id:\"febadf8b6383572ab3c64bf5ccb5cd4abc31b5a9f6f1f985cbf11411bf1e3393\" pid:3322 exited_at:{seconds:1768352375 nanos:394884324}"
Jan 14 00:59:36.228348 kubelet[2960]: E0114 00:59:36.228095 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:59:36.233125 containerd[1637]: time="2026-01-14T00:59:36.232949959Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\""
Jan 14 00:59:36.270801 kubelet[2960]: I0114 00:59:36.269799 2960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2z6gb" podStartSLOduration=7.269777906 podStartE2EDuration="7.269777906s" podCreationTimestamp="2026-01-14 00:59:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:59:33.310887442 +0000 UTC m=+6.507601849" watchObservedRunningTime="2026-01-14 00:59:36.269777906 +0000 UTC m=+9.466492314"
Jan 14 00:59:38.271270 kubelet[2960]: E0114 00:59:38.271050 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:59:39.249983 kubelet[2960]: E0114 00:59:39.249409 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:59:41.246274 containerd[1637]: time="2026-01-14T00:59:41.246211506Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:59:41.250185 containerd[1637]: time="2026-01-14T00:59:41.250145591Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=26946120"
Jan 14 00:59:41.255066 containerd[1637]: time="2026-01-14T00:59:41.254925003Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:59:41.265205 containerd[1637]: time="2026-01-14T00:59:41.264861494Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 00:59:41.267953 containerd[1637]: time="2026-01-14T00:59:41.267907698Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 5.03490548s"
Jan 14 00:59:41.268183 containerd[1637]: time="2026-01-14T00:59:41.268077266Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\""
Jan 14 00:59:41.297879 containerd[1637]: time="2026-01-14T00:59:41.295370867Z" level=info msg="CreateContainer within sandbox \"de743de396135eb17a36ece952fcece18572c3a0c581dcddfaffad3b0b89e4de\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 14 00:59:41.376621 containerd[1637]: time="2026-01-14T00:59:41.376324173Z" level=info msg="Container ee21cc7097464af5401b71d3f416e7bd2877fd836874ad49f6aa827495458603: CDI devices from CRI Config.CDIDevices: []"
Jan 14 00:59:41.429693 containerd[1637]: time="2026-01-14T00:59:41.429156594Z" level=info msg="CreateContainer within sandbox \"de743de396135eb17a36ece952fcece18572c3a0c581dcddfaffad3b0b89e4de\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ee21cc7097464af5401b71d3f416e7bd2877fd836874ad49f6aa827495458603\""
Jan 14 00:59:41.437223 containerd[1637]: time="2026-01-14T00:59:41.436157877Z" level=info msg="StartContainer for \"ee21cc7097464af5401b71d3f416e7bd2877fd836874ad49f6aa827495458603\""
Jan 14 00:59:41.441031 containerd[1637]: time="2026-01-14T00:59:41.440454218Z" level=info msg="connecting to shim ee21cc7097464af5401b71d3f416e7bd2877fd836874ad49f6aa827495458603" address="unix:///run/containerd/s/ba09094fed8716e69f58547a4938149ad977b42ab0a84a7b7b4c3632cbf98b8e" protocol=ttrpc version=3
Jan 14 00:59:41.533809 systemd[1]: Started cri-containerd-ee21cc7097464af5401b71d3f416e7bd2877fd836874ad49f6aa827495458603.scope - libcontainer container ee21cc7097464af5401b71d3f416e7bd2877fd836874ad49f6aa827495458603.
Jan 14 00:59:41.803410 systemd[1]: cri-containerd-ee21cc7097464af5401b71d3f416e7bd2877fd836874ad49f6aa827495458603.scope: Deactivated successfully.
Jan 14 00:59:41.818350 containerd[1637]: time="2026-01-14T00:59:41.818186241Z" level=info msg="received container exit event container_id:\"ee21cc7097464af5401b71d3f416e7bd2877fd836874ad49f6aa827495458603\" id:\"ee21cc7097464af5401b71d3f416e7bd2877fd836874ad49f6aa827495458603\" pid:3396 exited_at:{seconds:1768352381 nanos:803868807}"
Jan 14 00:59:41.830285 kubelet[2960]: I0114 00:59:41.830088 2960 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 14 00:59:41.888006 containerd[1637]: time="2026-01-14T00:59:41.887699724Z" level=info msg="StartContainer for \"ee21cc7097464af5401b71d3f416e7bd2877fd836874ad49f6aa827495458603\" returns successfully"
Jan 14 00:59:42.043807 systemd[1]: Created slice kubepods-burstable-pod5ba01c42_8d13_4bd5_bf19_740a1d8b7c6a.slice - libcontainer container kubepods-burstable-pod5ba01c42_8d13_4bd5_bf19_740a1d8b7c6a.slice.
Jan 14 00:59:42.076067 kubelet[2960]: I0114 00:59:42.056897 2960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ba01c42-8d13-4bd5-bf19-740a1d8b7c6a-config-volume\") pod \"coredns-674b8bbfcf-j52hn\" (UID: \"5ba01c42-8d13-4bd5-bf19-740a1d8b7c6a\") " pod="kube-system/coredns-674b8bbfcf-j52hn"
Jan 14 00:59:42.076067 kubelet[2960]: I0114 00:59:42.057034 2960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kfhk\" (UniqueName: \"kubernetes.io/projected/5ba01c42-8d13-4bd5-bf19-740a1d8b7c6a-kube-api-access-7kfhk\") pod \"coredns-674b8bbfcf-j52hn\" (UID: \"5ba01c42-8d13-4bd5-bf19-740a1d8b7c6a\") " pod="kube-system/coredns-674b8bbfcf-j52hn"
Jan 14 00:59:42.076067 kubelet[2960]: I0114 00:59:42.057075 2960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74a81910-81e5-4fcb-bfc6-04e11b3a78b8-config-volume\") pod \"coredns-674b8bbfcf-lqclm\" (UID: \"74a81910-81e5-4fcb-bfc6-04e11b3a78b8\") " pod="kube-system/coredns-674b8bbfcf-lqclm"
Jan 14 00:59:42.076067 kubelet[2960]: I0114 00:59:42.057103 2960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjlm7\" (UniqueName: \"kubernetes.io/projected/74a81910-81e5-4fcb-bfc6-04e11b3a78b8-kube-api-access-tjlm7\") pod \"coredns-674b8bbfcf-lqclm\" (UID: \"74a81910-81e5-4fcb-bfc6-04e11b3a78b8\") " pod="kube-system/coredns-674b8bbfcf-lqclm"
Jan 14 00:59:42.064367 systemd[1]: Created slice kubepods-burstable-pod74a81910_81e5_4fcb_bfc6_04e11b3a78b8.slice - libcontainer container kubepods-burstable-pod74a81910_81e5_4fcb_bfc6_04e11b3a78b8.slice.
Jan 14 00:59:42.123849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee21cc7097464af5401b71d3f416e7bd2877fd836874ad49f6aa827495458603-rootfs.mount: Deactivated successfully.
Jan 14 00:59:42.334985 kubelet[2960]: E0114 00:59:42.333388 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:59:42.344833 containerd[1637]: time="2026-01-14T00:59:42.344435191Z" level=info msg="CreateContainer within sandbox \"de743de396135eb17a36ece952fcece18572c3a0c581dcddfaffad3b0b89e4de\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Jan 14 00:59:42.368824 kubelet[2960]: E0114 00:59:42.367399 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:59:42.373016 containerd[1637]: time="2026-01-14T00:59:42.372859046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j52hn,Uid:5ba01c42-8d13-4bd5-bf19-740a1d8b7c6a,Namespace:kube-system,Attempt:0,}"
Jan 14 00:59:42.395227 kubelet[2960]: E0114 00:59:42.394947 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:59:42.400158 containerd[1637]: time="2026-01-14T00:59:42.399835711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lqclm,Uid:74a81910-81e5-4fcb-bfc6-04e11b3a78b8,Namespace:kube-system,Attempt:0,}"
Jan 14 00:59:42.448303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2826708796.mount: Deactivated successfully.
Jan 14 00:59:42.471650 containerd[1637]: time="2026-01-14T00:59:42.469177271Z" level=info msg="Container 7ad5a53be64179c8467258ff46d93e2a5d50569be48ccdbebc7e8a40546944ef: CDI devices from CRI Config.CDIDevices: []"
Jan 14 00:59:42.540144 containerd[1637]: time="2026-01-14T00:59:42.540094194Z" level=info msg="CreateContainer within sandbox \"de743de396135eb17a36ece952fcece18572c3a0c581dcddfaffad3b0b89e4de\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"7ad5a53be64179c8467258ff46d93e2a5d50569be48ccdbebc7e8a40546944ef\""
Jan 14 00:59:42.544381 containerd[1637]: time="2026-01-14T00:59:42.544272763Z" level=info msg="StartContainer for \"7ad5a53be64179c8467258ff46d93e2a5d50569be48ccdbebc7e8a40546944ef\""
Jan 14 00:59:42.551032 containerd[1637]: time="2026-01-14T00:59:42.550992478Z" level=info msg="connecting to shim 7ad5a53be64179c8467258ff46d93e2a5d50569be48ccdbebc7e8a40546944ef" address="unix:///run/containerd/s/ba09094fed8716e69f58547a4938149ad977b42ab0a84a7b7b4c3632cbf98b8e" protocol=ttrpc version=3
Jan 14 00:59:42.619961 containerd[1637]: time="2026-01-14T00:59:42.617361805Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lqclm,Uid:74a81910-81e5-4fcb-bfc6-04e11b3a78b8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6999c09b7606c1fcf5714ac9786b28d05a6f7baed479f1fc536bfc7531100c03\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 14 00:59:42.621132 kubelet[2960]: E0114 00:59:42.620654 2960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6999c09b7606c1fcf5714ac9786b28d05a6f7baed479f1fc536bfc7531100c03\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 14 00:59:42.621132 kubelet[2960]: E0114 00:59:42.620833 2960 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6999c09b7606c1fcf5714ac9786b28d05a6f7baed479f1fc536bfc7531100c03\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-lqclm"
Jan 14 00:59:42.621132 kubelet[2960]: E0114 00:59:42.620865 2960 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6999c09b7606c1fcf5714ac9786b28d05a6f7baed479f1fc536bfc7531100c03\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-lqclm"
Jan 14 00:59:42.621132 kubelet[2960]: E0114 00:59:42.621004 2960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-lqclm_kube-system(74a81910-81e5-4fcb-bfc6-04e11b3a78b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-lqclm_kube-system(74a81910-81e5-4fcb-bfc6-04e11b3a78b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6999c09b7606c1fcf5714ac9786b28d05a6f7baed479f1fc536bfc7531100c03\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-lqclm" podUID="74a81910-81e5-4fcb-bfc6-04e11b3a78b8"
Jan 14 00:59:42.640680 containerd[1637]: time="2026-01-14T00:59:42.640077847Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j52hn,Uid:5ba01c42-8d13-4bd5-bf19-740a1d8b7c6a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa7daeab1caf072a3a4f5a0800b8672a2c5fe73b35839809183a877ca6ce8e05\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 14 00:59:42.642283 kubelet[2960]: E0114 00:59:42.641879 2960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa7daeab1caf072a3a4f5a0800b8672a2c5fe73b35839809183a877ca6ce8e05\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 14 00:59:42.642283 kubelet[2960]: E0114 00:59:42.641938 2960 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa7daeab1caf072a3a4f5a0800b8672a2c5fe73b35839809183a877ca6ce8e05\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-j52hn"
Jan 14 00:59:42.642283 kubelet[2960]: E0114 00:59:42.641957 2960 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa7daeab1caf072a3a4f5a0800b8672a2c5fe73b35839809183a877ca6ce8e05\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-j52hn"
Jan 14 00:59:42.642283 kubelet[2960]: E0114 00:59:42.642005 2960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-j52hn_kube-system(5ba01c42-8d13-4bd5-bf19-740a1d8b7c6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-j52hn_kube-system(5ba01c42-8d13-4bd5-bf19-740a1d8b7c6a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa7daeab1caf072a3a4f5a0800b8672a2c5fe73b35839809183a877ca6ce8e05\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-j52hn" podUID="5ba01c42-8d13-4bd5-bf19-740a1d8b7c6a"
Jan 14 00:59:42.687652 systemd[1]: Started cri-containerd-7ad5a53be64179c8467258ff46d93e2a5d50569be48ccdbebc7e8a40546944ef.scope - libcontainer container 7ad5a53be64179c8467258ff46d93e2a5d50569be48ccdbebc7e8a40546944ef.
Jan 14 00:59:42.782050 containerd[1637]: time="2026-01-14T00:59:42.781380474Z" level=info msg="StartContainer for \"7ad5a53be64179c8467258ff46d93e2a5d50569be48ccdbebc7e8a40546944ef\" returns successfully"
Jan 14 00:59:43.384194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3243720824.mount: Deactivated successfully.
Jan 14 00:59:43.384336 systemd[1]: run-netns-cni\x2de2bfd4e0\x2d185e\x2dfa02\x2d2739\x2d708b2de7c3f0.mount: Deactivated successfully.
Jan 14 00:59:43.457410 kubelet[2960]: E0114 00:59:43.457195 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:59:43.491202 kubelet[2960]: I0114 00:59:43.491030 2960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-hqvns" podStartSLOduration=4.053064765 podStartE2EDuration="11.491011722s" podCreationTimestamp="2026-01-14 00:59:32 +0000 UTC" firstStartedPulling="2026-01-14 00:59:33.835226408 +0000 UTC m=+7.031940816" lastFinishedPulling="2026-01-14 00:59:41.273173364 +0000 UTC m=+14.469887773" observedRunningTime="2026-01-14 00:59:43.490916434 +0000 UTC m=+16.687630843" watchObservedRunningTime="2026-01-14 00:59:43.491011722 +0000 UTC m=+16.687726130"
Jan 14 00:59:44.462689 kubelet[2960]: E0114 00:59:44.462297 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:59:44.600055 systemd-networkd[1510]: flannel.1: Link UP
Jan 14 00:59:44.600070 systemd-networkd[1510]: flannel.1: Gained carrier
Jan 14 00:59:46.202950 systemd-networkd[1510]: flannel.1: Gained IPv6LL
Jan 14 00:59:55.683301 kubelet[2960]: E0114 00:59:55.682670 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:59:55.686405 containerd[1637]: time="2026-01-14T00:59:55.683443771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j52hn,Uid:5ba01c42-8d13-4bd5-bf19-740a1d8b7c6a,Namespace:kube-system,Attempt:0,}"
Jan 14 00:59:55.742011 systemd-networkd[1510]: cni0: Link UP
Jan 14 00:59:55.742032 systemd-networkd[1510]: cni0: Gained carrier
Jan 14 00:59:55.750223 systemd-networkd[1510]: cni0: Lost carrier
Jan 14 00:59:55.774729 systemd-networkd[1510]: veth4e212e53: Link UP
Jan 14 00:59:55.785191 kernel: cni0: port 1(veth4e212e53) entered blocking state
Jan 14 00:59:55.785326 kernel: cni0: port 1(veth4e212e53) entered disabled state
Jan 14 00:59:55.798078 kernel: veth4e212e53: entered allmulticast mode
Jan 14 00:59:55.806461 kernel: veth4e212e53: entered promiscuous mode
Jan 14 00:59:55.841976 kernel: cni0: port 1(veth4e212e53) entered blocking state
Jan 14 00:59:55.842097 kernel: cni0: port 1(veth4e212e53) entered forwarding state
Jan 14 00:59:55.842089 systemd-networkd[1510]: veth4e212e53: Gained carrier
Jan 14 00:59:55.844394 systemd-networkd[1510]: cni0: Gained carrier
Jan 14 00:59:55.879701 containerd[1637]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a950), "name":"cbr0", "type":"bridge"}
Jan 14 00:59:55.879701 containerd[1637]: delegateAdd: netconf sent to delegate plugin:
Jan 14 00:59:55.993233 containerd[1637]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-14T00:59:55.992971548Z" level=info msg="connecting to shim 593a339b2f638b60199a85c70eda4edf0ab4764b4b420e2217835466043e5809" address="unix:///run/containerd/s/bcc3e7bded1ee3b43a3f6ce25f1e0426c8b7521e30207d48842e7374e0f4cf8f" namespace=k8s.io protocol=ttrpc version=3
Jan 14 00:59:56.146213 systemd[1]: Started cri-containerd-593a339b2f638b60199a85c70eda4edf0ab4764b4b420e2217835466043e5809.scope - libcontainer container 593a339b2f638b60199a85c70eda4edf0ab4764b4b420e2217835466043e5809.
Jan 14 00:59:56.203440 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 14 00:59:56.390724 containerd[1637]: time="2026-01-14T00:59:56.389999355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j52hn,Uid:5ba01c42-8d13-4bd5-bf19-740a1d8b7c6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"593a339b2f638b60199a85c70eda4edf0ab4764b4b420e2217835466043e5809\""
Jan 14 00:59:56.395700 kubelet[2960]: E0114 00:59:56.394741 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:59:56.408788 containerd[1637]: time="2026-01-14T00:59:56.408034612Z" level=info msg="CreateContainer within sandbox \"593a339b2f638b60199a85c70eda4edf0ab4764b4b420e2217835466043e5809\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 14 00:59:56.448451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3054748602.mount: Deactivated successfully.
Jan 14 00:59:56.451099 containerd[1637]: time="2026-01-14T00:59:56.450826341Z" level=info msg="Container 649ba221792eb64fd2af6eea8032cfe21a671204cdb43838f704a2e213e413e1: CDI devices from CRI Config.CDIDevices: []"
Jan 14 00:59:56.474126 containerd[1637]: time="2026-01-14T00:59:56.473770107Z" level=info msg="CreateContainer within sandbox \"593a339b2f638b60199a85c70eda4edf0ab4764b4b420e2217835466043e5809\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"649ba221792eb64fd2af6eea8032cfe21a671204cdb43838f704a2e213e413e1\""
Jan 14 00:59:56.476440 containerd[1637]: time="2026-01-14T00:59:56.475751076Z" level=info msg="StartContainer for \"649ba221792eb64fd2af6eea8032cfe21a671204cdb43838f704a2e213e413e1\""
Jan 14 00:59:56.479006 containerd[1637]: time="2026-01-14T00:59:56.478765351Z" level=info msg="connecting to shim 649ba221792eb64fd2af6eea8032cfe21a671204cdb43838f704a2e213e413e1" address="unix:///run/containerd/s/bcc3e7bded1ee3b43a3f6ce25f1e0426c8b7521e30207d48842e7374e0f4cf8f" protocol=ttrpc version=3
Jan 14 00:59:56.563072 systemd[1]: Started cri-containerd-649ba221792eb64fd2af6eea8032cfe21a671204cdb43838f704a2e213e413e1.scope - libcontainer container 649ba221792eb64fd2af6eea8032cfe21a671204cdb43838f704a2e213e413e1.
Jan 14 00:59:56.879369 containerd[1637]: time="2026-01-14T00:59:56.878842825Z" level=info msg="StartContainer for \"649ba221792eb64fd2af6eea8032cfe21a671204cdb43838f704a2e213e413e1\" returns successfully"
Jan 14 00:59:57.274110 systemd-networkd[1510]: veth4e212e53: Gained IPv6LL
Jan 14 00:59:57.543383 kubelet[2960]: E0114 00:59:57.543027 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:59:57.566346 kubelet[2960]: I0114 00:59:57.566186 2960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-j52hn" podStartSLOduration=28.566172129 podStartE2EDuration="28.566172129s" podCreationTimestamp="2026-01-14 00:59:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:59:57.566012421 +0000 UTC m=+30.762726839" watchObservedRunningTime="2026-01-14 00:59:57.566172129 +0000 UTC m=+30.762886538"
Jan 14 00:59:57.684683 kubelet[2960]: E0114 00:59:57.683072 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:59:57.685724 containerd[1637]: time="2026-01-14T00:59:57.685304306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lqclm,Uid:74a81910-81e5-4fcb-bfc6-04e11b3a78b8,Namespace:kube-system,Attempt:0,}"
Jan 14 00:59:57.749432 systemd-networkd[1510]: veth2705d7c5: Link UP
Jan 14 00:59:57.766076 kernel: cni0: port 2(veth2705d7c5) entered blocking state
Jan 14 00:59:57.766190 kernel: cni0: port 2(veth2705d7c5) entered disabled state
Jan 14 00:59:57.767060 kernel: veth2705d7c5: entered allmulticast mode
Jan 14 00:59:57.775076 kernel: veth2705d7c5: entered promiscuous mode
Jan 14 00:59:57.785824 systemd-networkd[1510]: cni0: Gained IPv6LL
Jan 14 00:59:57.820777 kernel: cni0: port 2(veth2705d7c5) entered blocking state
Jan 14 00:59:57.821099 kernel: cni0: port 2(veth2705d7c5) entered forwarding state
Jan 14 00:59:57.824825 systemd-networkd[1510]: veth2705d7c5: Gained carrier
Jan 14 00:59:57.831450 containerd[1637]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000129a0), "name":"cbr0", "type":"bridge"}
Jan 14 00:59:57.831450 containerd[1637]: delegateAdd: netconf sent to delegate plugin:
Jan 14 00:59:57.942755 containerd[1637]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-14T00:59:57.942012482Z" level=info msg="connecting to shim 49e493e22df62ee9c98bc5217bc8ee67ab14acfb931504cd7f2647927d5a5938" address="unix:///run/containerd/s/bfbb968b6c3e7d1ec52091f3c5bd51310fea43c222c59475a99b6a345fe77883" namespace=k8s.io protocol=ttrpc version=3
Jan 14 00:59:58.112113 systemd[1]: Started cri-containerd-49e493e22df62ee9c98bc5217bc8ee67ab14acfb931504cd7f2647927d5a5938.scope - libcontainer container 49e493e22df62ee9c98bc5217bc8ee67ab14acfb931504cd7f2647927d5a5938.
Jan 14 00:59:58.184682 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 14 00:59:58.394230 containerd[1637]: time="2026-01-14T00:59:58.393192775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lqclm,Uid:74a81910-81e5-4fcb-bfc6-04e11b3a78b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"49e493e22df62ee9c98bc5217bc8ee67ab14acfb931504cd7f2647927d5a5938\""
Jan 14 00:59:58.400120 kubelet[2960]: E0114 00:59:58.400089 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:59:58.457176 containerd[1637]: time="2026-01-14T00:59:58.428199763Z" level=info msg="CreateContainer within sandbox \"49e493e22df62ee9c98bc5217bc8ee67ab14acfb931504cd7f2647927d5a5938\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 14 00:59:58.472993 containerd[1637]: time="2026-01-14T00:59:58.472194226Z" level=info msg="Container 27d982fdbb651570be4aa568a84ecb689f0818903a188ac7b5008a1cdb004b6f: CDI devices from CRI Config.CDIDevices: []"
Jan 14 00:59:58.493020 containerd[1637]: time="2026-01-14T00:59:58.492729194Z" level=info msg="CreateContainer within sandbox \"49e493e22df62ee9c98bc5217bc8ee67ab14acfb931504cd7f2647927d5a5938\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"27d982fdbb651570be4aa568a84ecb689f0818903a188ac7b5008a1cdb004b6f\""
Jan 14 00:59:58.496010 containerd[1637]: time="2026-01-14T00:59:58.495426506Z" level=info msg="StartContainer for \"27d982fdbb651570be4aa568a84ecb689f0818903a188ac7b5008a1cdb004b6f\""
Jan 14 00:59:58.498864 containerd[1637]: time="2026-01-14T00:59:58.498772090Z" level=info msg="connecting to shim 27d982fdbb651570be4aa568a84ecb689f0818903a188ac7b5008a1cdb004b6f" address="unix:///run/containerd/s/bfbb968b6c3e7d1ec52091f3c5bd51310fea43c222c59475a99b6a345fe77883" protocol=ttrpc version=3
Jan 14 00:59:58.559005 kubelet[2960]: E0114 00:59:58.558822 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:59:58.565226 systemd[1]: Started cri-containerd-27d982fdbb651570be4aa568a84ecb689f0818903a188ac7b5008a1cdb004b6f.scope - libcontainer container 27d982fdbb651570be4aa568a84ecb689f0818903a188ac7b5008a1cdb004b6f.
Jan 14 00:59:58.692687 containerd[1637]: time="2026-01-14T00:59:58.691740309Z" level=info msg="StartContainer for \"27d982fdbb651570be4aa568a84ecb689f0818903a188ac7b5008a1cdb004b6f\" returns successfully"
Jan 14 00:59:59.258317 systemd-networkd[1510]: veth2705d7c5: Gained IPv6LL
Jan 14 00:59:59.573710 kubelet[2960]: E0114 00:59:59.572675 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:59:59.577378 kubelet[2960]: E0114 00:59:59.577305 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 00:59:59.605205 kubelet[2960]: I0114 00:59:59.604780 2960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lqclm" podStartSLOduration=30.60145192 podStartE2EDuration="30.60145192s" podCreationTimestamp="2026-01-14 00:59:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:59:59.597815207 +0000 UTC m=+32.794529615" watchObservedRunningTime="2026-01-14 00:59:59.60145192 +0000 UTC m=+32.798166327"
Jan 14 01:00:00.586824 kubelet[2960]: E0114 01:00:00.585002 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:00:01.590205 kubelet[2960]: E0114 01:00:01.590100 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:00:20.882696 kubelet[2960]: E0114 01:00:20.880235 2960 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.191s"
Jan 14 01:00:40.622805 systemd[1]: Started sshd@7-10.0.0.53:22-10.0.0.1:44952.service - OpenSSH per-connection server daemon (10.0.0.1:44952).
Jan 14 01:00:40.685319 kubelet[2960]: E0114 01:00:40.684865 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:00:40.804042 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 44952 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:00:40.808075 sshd-session[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:00:40.825422 systemd-logind[1605]: New session 9 of user core.
Jan 14 01:00:40.838104 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 14 01:00:41.276003 sshd[4055]: Connection closed by 10.0.0.1 port 44952
Jan 14 01:00:41.277019 sshd-session[4045]: pam_unix(sshd:session): session closed for user core
Jan 14 01:00:41.291077 systemd-logind[1605]: Session 9 logged out. Waiting for processes to exit.
Jan 14 01:00:41.293260 systemd[1]: sshd@7-10.0.0.53:22-10.0.0.1:44952.service: Deactivated successfully.
Jan 14 01:00:41.301836 systemd[1]: session-9.scope: Deactivated successfully.
Jan 14 01:00:41.308284 systemd-logind[1605]: Removed session 9.
Jan 14 01:00:42.691858 kubelet[2960]: E0114 01:00:42.691733 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:00:46.309383 systemd[1]: Started sshd@8-10.0.0.53:22-10.0.0.1:56386.service - OpenSSH per-connection server daemon (10.0.0.1:56386).
Jan 14 01:00:46.481824 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 56386 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:00:46.487920 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:00:46.518839 systemd-logind[1605]: New session 10 of user core.
Jan 14 01:00:46.536020 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 14 01:00:47.005457 sshd[4115]: Connection closed by 10.0.0.1 port 56386
Jan 14 01:00:47.006427 sshd-session[4097]: pam_unix(sshd:session): session closed for user core
Jan 14 01:00:47.021298 systemd[1]: sshd@8-10.0.0.53:22-10.0.0.1:56386.service: Deactivated successfully.
Jan 14 01:00:47.027983 systemd[1]: session-10.scope: Deactivated successfully.
Jan 14 01:00:47.034078 systemd-logind[1605]: Session 10 logged out. Waiting for processes to exit.
Jan 14 01:00:47.040346 systemd-logind[1605]: Removed session 10.
Jan 14 01:00:48.684920 kubelet[2960]: E0114 01:00:48.683901 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:00:49.700661 kubelet[2960]: E0114 01:00:49.699363 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:00:52.057062 systemd[1]: Started sshd@9-10.0.0.53:22-10.0.0.1:56402.service - OpenSSH per-connection server daemon (10.0.0.1:56402).
Jan 14 01:00:52.275843 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 56402 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:00:52.281388 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:00:52.310824 systemd-logind[1605]: New session 11 of user core.
Jan 14 01:00:52.327056 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 14 01:00:52.696128 kubelet[2960]: E0114 01:00:52.691029 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:00:52.829859 sshd[4155]: Connection closed by 10.0.0.1 port 56402
Jan 14 01:00:52.831012 sshd-session[4151]: pam_unix(sshd:session): session closed for user core
Jan 14 01:00:52.851871 systemd[1]: sshd@9-10.0.0.53:22-10.0.0.1:56402.service: Deactivated successfully.
Jan 14 01:00:52.856760 systemd[1]: session-11.scope: Deactivated successfully.
Jan 14 01:00:52.868393 systemd-logind[1605]: Session 11 logged out. Waiting for processes to exit.
Jan 14 01:00:52.878283 systemd-logind[1605]: Removed session 11.
Jan 14 01:00:54.227334 systemd[1729]: Created slice background.slice - User Background Tasks Slice.
Jan 14 01:00:54.412456 systemd[1729]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories...
Jan 14 01:00:54.548099 systemd[1729]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories.
Jan 14 01:00:57.900081 systemd[1]: Started sshd@10-10.0.0.53:22-10.0.0.1:51834.service - OpenSSH per-connection server daemon (10.0.0.1:51834).
Jan 14 01:00:58.183078 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 51834 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:00:58.197067 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:00:58.237070 systemd-logind[1605]: New session 12 of user core.
Jan 14 01:00:58.263014 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 14 01:00:58.828851 sshd[4197]: Connection closed by 10.0.0.1 port 51834
Jan 14 01:00:58.826870 sshd-session[4193]: pam_unix(sshd:session): session closed for user core
Jan 14 01:00:58.854431 systemd[1]: sshd@10-10.0.0.53:22-10.0.0.1:51834.service: Deactivated successfully.
Jan 14 01:00:58.861414 systemd[1]: session-12.scope: Deactivated successfully.
Jan 14 01:00:58.869347 systemd-logind[1605]: Session 12 logged out. Waiting for processes to exit.
Jan 14 01:00:58.880407 systemd[1]: Started sshd@11-10.0.0.53:22-10.0.0.1:51840.service - OpenSSH per-connection server daemon (10.0.0.1:51840).
Jan 14 01:00:58.887867 systemd-logind[1605]: Removed session 12.
Jan 14 01:00:59.170318 sshd[4211]: Accepted publickey for core from 10.0.0.1 port 51840 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:00:59.177841 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:00:59.220380 systemd-logind[1605]: New session 13 of user core.
Jan 14 01:00:59.228406 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 14 01:00:59.972791 sshd[4216]: Connection closed by 10.0.0.1 port 51840
Jan 14 01:00:59.975419 sshd-session[4211]: pam_unix(sshd:session): session closed for user core
Jan 14 01:01:00.005003 systemd[1]: sshd@11-10.0.0.53:22-10.0.0.1:51840.service: Deactivated successfully.
Jan 14 01:01:00.011886 systemd[1]: session-13.scope: Deactivated successfully.
Jan 14 01:01:00.016823 systemd-logind[1605]: Session 13 logged out. Waiting for processes to exit.
Jan 14 01:01:00.027369 systemd[1]: Started sshd@12-10.0.0.53:22-10.0.0.1:51844.service - OpenSSH per-connection server daemon (10.0.0.1:51844).
Jan 14 01:01:00.039854 systemd-logind[1605]: Removed session 13.
Jan 14 01:01:00.345937 sshd[4227]: Accepted publickey for core from 10.0.0.1 port 51844 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:01:00.354966 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:01:00.387408 systemd-logind[1605]: New session 14 of user core.
Jan 14 01:01:00.402172 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 14 01:01:00.999816 sshd[4231]: Connection closed by 10.0.0.1 port 51844
Jan 14 01:01:00.999421 sshd-session[4227]: pam_unix(sshd:session): session closed for user core
Jan 14 01:01:01.021136 systemd[1]: sshd@12-10.0.0.53:22-10.0.0.1:51844.service: Deactivated successfully.
Jan 14 01:01:01.030029 systemd[1]: session-14.scope: Deactivated successfully.
Jan 14 01:01:01.040377 systemd-logind[1605]: Session 14 logged out. Waiting for processes to exit.
Jan 14 01:01:01.048429 systemd-logind[1605]: Removed session 14.
Jan 14 01:01:06.034948 systemd[1]: Started sshd@13-10.0.0.53:22-10.0.0.1:42278.service - OpenSSH per-connection server daemon (10.0.0.1:42278).
Jan 14 01:01:06.305840 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 42278 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:01:06.320000 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:01:06.352037 systemd-logind[1605]: New session 15 of user core.
Jan 14 01:01:06.363131 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 14 01:01:07.020154 sshd[4274]: Connection closed by 10.0.0.1 port 42278
Jan 14 01:01:07.020867 sshd-session[4266]: pam_unix(sshd:session): session closed for user core
Jan 14 01:01:07.038903 systemd[1]: sshd@13-10.0.0.53:22-10.0.0.1:42278.service: Deactivated successfully.
Jan 14 01:01:07.048107 systemd[1]: session-15.scope: Deactivated successfully.
Jan 14 01:01:07.056370 systemd-logind[1605]: Session 15 logged out. Waiting for processes to exit.
Jan 14 01:01:07.063442 systemd-logind[1605]: Removed session 15.
Jan 14 01:01:10.690765 kubelet[2960]: E0114 01:01:10.689180 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:01:12.080425 systemd[1]: Started sshd@14-10.0.0.53:22-10.0.0.1:42290.service - OpenSSH per-connection server daemon (10.0.0.1:42290).
Jan 14 01:01:12.328880 sshd[4327]: Accepted publickey for core from 10.0.0.1 port 42290 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:01:12.336105 sshd-session[4327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:01:12.381814 systemd-logind[1605]: New session 16 of user core.
Jan 14 01:01:12.418842 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 14 01:01:12.976062 sshd[4331]: Connection closed by 10.0.0.1 port 42290
Jan 14 01:01:12.978841 sshd-session[4327]: pam_unix(sshd:session): session closed for user core
Jan 14 01:01:13.007371 systemd[1]: sshd@14-10.0.0.53:22-10.0.0.1:42290.service: Deactivated successfully.
Jan 14 01:01:13.020380 systemd[1]: session-16.scope: Deactivated successfully.
Jan 14 01:01:13.026054 systemd-logind[1605]: Session 16 logged out. Waiting for processes to exit.
Jan 14 01:01:13.057426 systemd-logind[1605]: Removed session 16.
Jan 14 01:01:18.042844 systemd[1]: Started sshd@15-10.0.0.53:22-10.0.0.1:43464.service - OpenSSH per-connection server daemon (10.0.0.1:43464).
Jan 14 01:01:18.349036 sshd[4365]: Accepted publickey for core from 10.0.0.1 port 43464 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:01:18.358795 sshd-session[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:01:18.403897 systemd-logind[1605]: New session 17 of user core.
Jan 14 01:01:18.423920 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 14 01:01:19.252163 sshd[4369]: Connection closed by 10.0.0.1 port 43464
Jan 14 01:01:19.253826 sshd-session[4365]: pam_unix(sshd:session): session closed for user core
Jan 14 01:01:19.297983 systemd[1]: sshd@15-10.0.0.53:22-10.0.0.1:43464.service: Deactivated successfully.
Jan 14 01:01:19.324817 systemd[1]: session-17.scope: Deactivated successfully.
Jan 14 01:01:19.336133 systemd-logind[1605]: Session 17 logged out. Waiting for processes to exit.
Jan 14 01:01:19.352736 systemd-logind[1605]: Removed session 17.
Jan 14 01:01:22.684080 kubelet[2960]: E0114 01:01:22.683059 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:01:24.320906 systemd[1]: Started sshd@16-10.0.0.53:22-10.0.0.1:39628.service - OpenSSH per-connection server daemon (10.0.0.1:39628).
Jan 14 01:01:24.622896 sshd[4402]: Accepted publickey for core from 10.0.0.1 port 39628 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:01:24.630191 sshd-session[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:01:24.679100 systemd-logind[1605]: New session 18 of user core.
Jan 14 01:01:24.690126 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 14 01:01:25.327092 sshd[4406]: Connection closed by 10.0.0.1 port 39628
Jan 14 01:01:25.327943 sshd-session[4402]: pam_unix(sshd:session): session closed for user core
Jan 14 01:01:25.365004 systemd[1]: sshd@16-10.0.0.53:22-10.0.0.1:39628.service: Deactivated successfully.
Jan 14 01:01:25.376986 systemd[1]: session-18.scope: Deactivated successfully.
Jan 14 01:01:25.383945 systemd-logind[1605]: Session 18 logged out. Waiting for processes to exit.
Jan 14 01:01:25.405996 systemd-logind[1605]: Removed session 18.
Jan 14 01:01:30.384821 systemd[1]: Started sshd@17-10.0.0.53:22-10.0.0.1:39640.service - OpenSSH per-connection server daemon (10.0.0.1:39640).
Jan 14 01:01:30.724118 sshd[4444]: Accepted publickey for core from 10.0.0.1 port 39640 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:01:30.736758 sshd-session[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:01:30.807803 systemd-logind[1605]: New session 19 of user core.
Jan 14 01:01:30.824108 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 14 01:01:31.451825 sshd[4448]: Connection closed by 10.0.0.1 port 39640
Jan 14 01:01:31.467091 sshd-session[4444]: pam_unix(sshd:session): session closed for user core
Jan 14 01:01:31.487167 systemd-logind[1605]: Session 19 logged out. Waiting for processes to exit.
Jan 14 01:01:31.490925 systemd[1]: sshd@17-10.0.0.53:22-10.0.0.1:39640.service: Deactivated successfully.
Jan 14 01:01:31.506014 systemd[1]: session-19.scope: Deactivated successfully.
Jan 14 01:01:31.532424 systemd-logind[1605]: Removed session 19.
Jan 14 01:01:36.484727 systemd[1]: Started sshd@18-10.0.0.53:22-10.0.0.1:48864.service - OpenSSH per-connection server daemon (10.0.0.1:48864).
Jan 14 01:01:36.653211 sshd[4484]: Accepted publickey for core from 10.0.0.1 port 48864 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:01:36.658013 sshd-session[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:01:36.682094 systemd-logind[1605]: New session 20 of user core.
Jan 14 01:01:36.696816 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 14 01:01:37.040860 sshd[4488]: Connection closed by 10.0.0.1 port 48864
Jan 14 01:01:37.042195 sshd-session[4484]: pam_unix(sshd:session): session closed for user core
Jan 14 01:01:37.052063 systemd-logind[1605]: Session 20 logged out. Waiting for processes to exit.
Jan 14 01:01:37.056685 systemd[1]: sshd@18-10.0.0.53:22-10.0.0.1:48864.service: Deactivated successfully.
Jan 14 01:01:37.060908 systemd[1]: session-20.scope: Deactivated successfully.
Jan 14 01:01:37.067117 systemd-logind[1605]: Removed session 20.
Jan 14 01:01:42.067197 systemd[1]: Started sshd@19-10.0.0.53:22-10.0.0.1:48876.service - OpenSSH per-connection server daemon (10.0.0.1:48876).
Jan 14 01:01:42.219060 sshd[4528]: Accepted publickey for core from 10.0.0.1 port 48876 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:01:42.222078 sshd-session[4528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:01:42.237829 systemd-logind[1605]: New session 21 of user core.
Jan 14 01:01:42.264180 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 14 01:01:42.499280 sshd[4532]: Connection closed by 10.0.0.1 port 48876
Jan 14 01:01:42.500143 sshd-session[4528]: pam_unix(sshd:session): session closed for user core
Jan 14 01:01:42.516125 systemd[1]: sshd@19-10.0.0.53:22-10.0.0.1:48876.service: Deactivated successfully.
Jan 14 01:01:42.520448 systemd[1]: session-21.scope: Deactivated successfully.
Jan 14 01:01:42.523026 systemd-logind[1605]: Session 21 logged out. Waiting for processes to exit.
Jan 14 01:01:42.528072 systemd-logind[1605]: Removed session 21.
Jan 14 01:01:42.533848 systemd[1]: Started sshd@20-10.0.0.53:22-10.0.0.1:50954.service - OpenSSH per-connection server daemon (10.0.0.1:50954).
Jan 14 01:01:42.659393 sshd[4549]: Accepted publickey for core from 10.0.0.1 port 50954 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:01:42.662919 sshd-session[4549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:01:42.676977 systemd-logind[1605]: New session 22 of user core.
Jan 14 01:01:42.688865 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 14 01:01:43.320950 sshd[4553]: Connection closed by 10.0.0.1 port 50954
Jan 14 01:01:43.321114 sshd-session[4549]: pam_unix(sshd:session): session closed for user core
Jan 14 01:01:43.337749 systemd[1]: sshd@20-10.0.0.53:22-10.0.0.1:50954.service: Deactivated successfully.
Jan 14 01:01:43.342284 systemd[1]: session-22.scope: Deactivated successfully.
Jan 14 01:01:43.347671 systemd-logind[1605]: Session 22 logged out. Waiting for processes to exit.
Jan 14 01:01:43.353863 systemd[1]: Started sshd@21-10.0.0.53:22-10.0.0.1:50962.service - OpenSSH per-connection server daemon (10.0.0.1:50962).
Jan 14 01:01:43.359923 systemd-logind[1605]: Removed session 22.
Jan 14 01:01:43.543150 sshd[4579]: Accepted publickey for core from 10.0.0.1 port 50962 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:01:43.550204 sshd-session[4579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:01:43.575052 systemd-logind[1605]: New session 23 of user core.
Jan 14 01:01:43.596965 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 14 01:01:44.782709 sshd[4583]: Connection closed by 10.0.0.1 port 50962
Jan 14 01:01:44.784459 sshd-session[4579]: pam_unix(sshd:session): session closed for user core
Jan 14 01:01:44.798941 systemd[1]: Started sshd@22-10.0.0.53:22-10.0.0.1:50976.service - OpenSSH per-connection server daemon (10.0.0.1:50976).
Jan 14 01:01:44.811915 systemd[1]: sshd@21-10.0.0.53:22-10.0.0.1:50962.service: Deactivated successfully.
Jan 14 01:01:44.819779 systemd[1]: session-23.scope: Deactivated successfully.
Jan 14 01:01:44.820974 systemd[1]: session-23.scope: Consumed 1.025s CPU time, 41.3M memory peak.
Jan 14 01:01:44.827943 systemd-logind[1605]: Session 23 logged out. Waiting for processes to exit.
Jan 14 01:01:44.830951 systemd-logind[1605]: Removed session 23.
Jan 14 01:01:44.939164 sshd[4601]: Accepted publickey for core from 10.0.0.1 port 50976 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:01:44.943884 sshd-session[4601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:01:44.957886 systemd-logind[1605]: New session 24 of user core.
Jan 14 01:01:44.967769 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 14 01:01:45.410676 sshd[4608]: Connection closed by 10.0.0.1 port 50976
Jan 14 01:01:45.412015 sshd-session[4601]: pam_unix(sshd:session): session closed for user core
Jan 14 01:01:45.426894 systemd[1]: sshd@22-10.0.0.53:22-10.0.0.1:50976.service: Deactivated successfully.
Jan 14 01:01:45.434824 systemd[1]: session-24.scope: Deactivated successfully.
Jan 14 01:01:45.442939 systemd-logind[1605]: Session 24 logged out. Waiting for processes to exit.
Jan 14 01:01:45.453990 systemd[1]: Started sshd@23-10.0.0.53:22-10.0.0.1:50984.service - OpenSSH per-connection server daemon (10.0.0.1:50984).
Jan 14 01:01:45.456759 systemd-logind[1605]: Removed session 24.
Jan 14 01:01:45.577608 sshd[4619]: Accepted publickey for core from 10.0.0.1 port 50984 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:01:45.591076 sshd-session[4619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:01:45.608887 systemd-logind[1605]: New session 25 of user core.
Jan 14 01:01:45.623959 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 14 01:01:45.684821 kubelet[2960]: E0114 01:01:45.683920 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:01:45.893233 sshd[4623]: Connection closed by 10.0.0.1 port 50984
Jan 14 01:01:45.893973 sshd-session[4619]: pam_unix(sshd:session): session closed for user core
Jan 14 01:01:45.907238 systemd[1]: sshd@23-10.0.0.53:22-10.0.0.1:50984.service: Deactivated successfully.
Jan 14 01:01:45.913162 systemd[1]: session-25.scope: Deactivated successfully.
Jan 14 01:01:45.924218 systemd-logind[1605]: Session 25 logged out. Waiting for processes to exit.
Jan 14 01:01:45.930860 systemd-logind[1605]: Removed session 25.
Jan 14 01:01:50.917217 systemd[1]: Started sshd@24-10.0.0.53:22-10.0.0.1:50990.service - OpenSSH per-connection server daemon (10.0.0.1:50990).
Jan 14 01:01:51.092274 sshd[4656]: Accepted publickey for core from 10.0.0.1 port 50990 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:01:51.099065 sshd-session[4656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:01:51.131929 systemd-logind[1605]: New session 26 of user core.
Jan 14 01:01:51.144422 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 14 01:01:51.467254 sshd[4660]: Connection closed by 10.0.0.1 port 50990
Jan 14 01:01:51.467788 sshd-session[4656]: pam_unix(sshd:session): session closed for user core
Jan 14 01:01:51.478015 systemd[1]: sshd@24-10.0.0.53:22-10.0.0.1:50990.service: Deactivated successfully.
Jan 14 01:01:51.484975 systemd[1]: session-26.scope: Deactivated successfully.
Jan 14 01:01:51.488430 systemd-logind[1605]: Session 26 logged out. Waiting for processes to exit.
Jan 14 01:01:51.493639 systemd-logind[1605]: Removed session 26.
Jan 14 01:01:56.494761 systemd[1]: Started sshd@25-10.0.0.53:22-10.0.0.1:45572.service - OpenSSH per-connection server daemon (10.0.0.1:45572).
Jan 14 01:01:56.634741 sshd[4693]: Accepted publickey for core from 10.0.0.1 port 45572 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:01:56.639004 sshd-session[4693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:01:56.658293 systemd-logind[1605]: New session 27 of user core.
Jan 14 01:01:56.670868 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 14 01:01:56.908202 sshd[4697]: Connection closed by 10.0.0.1 port 45572
Jan 14 01:01:56.909061 sshd-session[4693]: pam_unix(sshd:session): session closed for user core
Jan 14 01:01:56.921255 systemd[1]: sshd@25-10.0.0.53:22-10.0.0.1:45572.service: Deactivated successfully.
Jan 14 01:01:56.927087 systemd[1]: session-27.scope: Deactivated successfully.
Jan 14 01:01:56.932257 systemd-logind[1605]: Session 27 logged out. Waiting for processes to exit.
Jan 14 01:01:56.937006 systemd-logind[1605]: Removed session 27.
Jan 14 01:01:57.687650 kubelet[2960]: E0114 01:01:57.683677 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:02:01.937731 systemd[1]: Started sshd@26-10.0.0.53:22-10.0.0.1:45588.service - OpenSSH per-connection server daemon (10.0.0.1:45588).
Jan 14 01:02:02.089258 sshd[4730]: Accepted publickey for core from 10.0.0.1 port 45588 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:02:02.096884 sshd-session[4730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:02:02.112903 systemd-logind[1605]: New session 28 of user core.
Jan 14 01:02:02.131845 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 14 01:02:02.374266 sshd[4740]: Connection closed by 10.0.0.1 port 45588
Jan 14 01:02:02.375983 sshd-session[4730]: pam_unix(sshd:session): session closed for user core
Jan 14 01:02:02.386796 systemd[1]: sshd@26-10.0.0.53:22-10.0.0.1:45588.service: Deactivated successfully.
Jan 14 01:02:02.393192 systemd[1]: session-28.scope: Deactivated successfully.
Jan 14 01:02:02.397738 systemd-logind[1605]: Session 28 logged out. Waiting for processes to exit.
Jan 14 01:02:02.400883 systemd-logind[1605]: Removed session 28.
Jan 14 01:02:05.686763 kubelet[2960]: E0114 01:02:05.684705 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:02:06.684769 kubelet[2960]: E0114 01:02:06.684026 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:02:07.419988 systemd[1]: Started sshd@27-10.0.0.53:22-10.0.0.1:32870.service - OpenSSH per-connection server daemon (10.0.0.1:32870).
Jan 14 01:02:07.567270 sshd[4777]: Accepted publickey for core from 10.0.0.1 port 32870 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:02:07.573861 sshd-session[4777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:02:07.597025 systemd-logind[1605]: New session 29 of user core.
Jan 14 01:02:07.609096 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 14 01:02:07.920989 sshd[4781]: Connection closed by 10.0.0.1 port 32870
Jan 14 01:02:07.922298 sshd-session[4777]: pam_unix(sshd:session): session closed for user core
Jan 14 01:02:07.936203 systemd[1]: sshd@27-10.0.0.53:22-10.0.0.1:32870.service: Deactivated successfully.
Jan 14 01:02:07.943805 systemd[1]: session-29.scope: Deactivated successfully.
Jan 14 01:02:07.951029 systemd-logind[1605]: Session 29 logged out. Waiting for processes to exit.
Jan 14 01:02:07.954860 systemd-logind[1605]: Removed session 29.
Jan 14 01:02:12.955714 systemd[1]: Started sshd@28-10.0.0.53:22-10.0.0.1:37756.service - OpenSSH per-connection server daemon (10.0.0.1:37756).
Jan 14 01:02:13.137307 sshd[4817]: Accepted publickey for core from 10.0.0.1 port 37756 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:02:13.144024 sshd-session[4817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:02:13.176019 systemd-logind[1605]: New session 30 of user core.
Jan 14 01:02:13.192218 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 14 01:02:13.566074 sshd[4821]: Connection closed by 10.0.0.1 port 37756
Jan 14 01:02:13.568015 sshd-session[4817]: pam_unix(sshd:session): session closed for user core
Jan 14 01:02:13.590897 systemd[1]: sshd@28-10.0.0.53:22-10.0.0.1:37756.service: Deactivated successfully.
Jan 14 01:02:13.596862 systemd[1]: session-30.scope: Deactivated successfully.
Jan 14 01:02:13.599455 systemd-logind[1605]: Session 30 logged out. Waiting for processes to exit.
Jan 14 01:02:13.610282 systemd-logind[1605]: Removed session 30.
Jan 14 01:02:18.595689 systemd[1]: Started sshd@29-10.0.0.53:22-10.0.0.1:37758.service - OpenSSH per-connection server daemon (10.0.0.1:37758).
Jan 14 01:02:18.684703 kubelet[2960]: E0114 01:02:18.684113 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:02:18.758697 sshd[4855]: Accepted publickey for core from 10.0.0.1 port 37758 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:02:18.764293 sshd-session[4855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:02:18.783088 systemd-logind[1605]: New session 31 of user core.
Jan 14 01:02:18.793214 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 14 01:02:19.141116 sshd[4873]: Connection closed by 10.0.0.1 port 37758
Jan 14 01:02:19.142945 sshd-session[4855]: pam_unix(sshd:session): session closed for user core
Jan 14 01:02:19.160914 systemd[1]: sshd@29-10.0.0.53:22-10.0.0.1:37758.service: Deactivated successfully.
Jan 14 01:02:19.168853 systemd[1]: session-31.scope: Deactivated successfully.
Jan 14 01:02:19.173864 systemd-logind[1605]: Session 31 logged out. Waiting for processes to exit.
Jan 14 01:02:19.178902 systemd-logind[1605]: Removed session 31.
Jan 14 01:02:22.683807 kubelet[2960]: E0114 01:02:22.682423 2960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:02:24.174864 systemd[1]: Started sshd@30-10.0.0.53:22-10.0.0.1:57150.service - OpenSSH per-connection server daemon (10.0.0.1:57150).
Jan 14 01:02:24.341060 sshd[4908]: Accepted publickey for core from 10.0.0.1 port 57150 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:02:24.346052 sshd-session[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:02:24.379190 systemd-logind[1605]: New session 32 of user core.
Jan 14 01:02:24.395001 systemd[1]: Started session-32.scope - Session 32 of User core.
Jan 14 01:02:24.901117 sshd[4912]: Connection closed by 10.0.0.1 port 57150
Jan 14 01:02:24.901792 sshd-session[4908]: pam_unix(sshd:session): session closed for user core
Jan 14 01:02:24.918137 systemd-logind[1605]: Session 32 logged out. Waiting for processes to exit.
Jan 14 01:02:24.920948 systemd[1]: sshd@30-10.0.0.53:22-10.0.0.1:57150.service: Deactivated successfully.
Jan 14 01:02:24.928050 systemd[1]: session-32.scope: Deactivated successfully.
Jan 14 01:02:24.936255 systemd-logind[1605]: Removed session 32.