Jan 20 15:01:39.514026 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 20 12:22:36 -00 2026 Jan 20 15:01:39.514051 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=12b88438810927d105cc313bb8ab13d0435c94d44cc3ab3377801865133595f9 Jan 20 15:01:39.514063 kernel: BIOS-provided physical RAM map: Jan 20 15:01:39.514069 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 20 15:01:39.514076 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 20 15:01:39.514082 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 20 15:01:39.514089 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 20 15:01:39.514095 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 20 15:01:39.514101 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 20 15:01:39.514107 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 20 15:01:39.514115 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 20 15:01:39.514161 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 20 15:01:39.514169 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 20 15:01:39.514175 kernel: NX (Execute Disable) protection: active Jan 20 15:01:39.514187 kernel: APIC: Static calls initialized Jan 20 15:01:39.514203 kernel: SMBIOS 2.8 present. 
Jan 20 15:01:39.514213 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 20 15:01:39.514225 kernel: DMI: Memory slots populated: 1/1 Jan 20 15:01:39.514237 kernel: Hypervisor detected: KVM Jan 20 15:01:39.514248 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 20 15:01:39.514259 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 20 15:01:39.514270 kernel: kvm-clock: using sched offset of 4409305832 cycles Jan 20 15:01:39.514282 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 20 15:01:39.514294 kernel: tsc: Detected 2445.426 MHz processor Jan 20 15:01:39.514305 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 20 15:01:39.514323 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 20 15:01:39.514335 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 20 15:01:39.514348 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 20 15:01:39.514359 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 20 15:01:39.514370 kernel: Using GB pages for direct mapping Jan 20 15:01:39.514381 kernel: ACPI: Early table checksum verification disabled Jan 20 15:01:39.514391 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 20 15:01:39.514407 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 15:01:39.514418 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 15:01:39.514430 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 15:01:39.514442 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 20 15:01:39.514455 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 15:01:39.514467 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 15:01:39.514480 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 15:01:39.514498 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 15:01:39.514516 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 20 15:01:39.514529 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 20 15:01:39.514542 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 20 15:01:39.514557 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 20 15:01:39.514569 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 20 15:01:39.514581 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 20 15:01:39.514593 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 20 15:01:39.514682 kernel: No NUMA configuration found Jan 20 15:01:39.514695 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 20 15:01:39.514708 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Jan 20 15:01:39.514721 kernel: Zone ranges: Jan 20 15:01:39.514738 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 20 15:01:39.514749 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 20 15:01:39.514760 kernel: Normal empty Jan 20 15:01:39.514771 kernel: Device empty Jan 20 15:01:39.514782 kernel: Movable zone start for each node Jan 20 15:01:39.514793 kernel: Early memory node ranges Jan 20 15:01:39.514804 kernel: node 0: [mem 
0x0000000000001000-0x000000000009efff] Jan 20 15:01:39.514817 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 20 15:01:39.514830 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Jan 20 15:01:39.514837 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 20 15:01:39.514844 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 20 15:01:39.514852 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 20 15:01:39.514859 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 20 15:01:39.514867 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 20 15:01:39.514874 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 20 15:01:39.514883 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 20 15:01:39.514891 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 20 15:01:39.514898 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 20 15:01:39.514905 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 20 15:01:39.514912 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 20 15:01:39.514919 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 20 15:01:39.514926 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 20 15:01:39.514933 kernel: TSC deadline timer available Jan 20 15:01:39.514946 kernel: CPU topo: Max. logical packages: 1 Jan 20 15:01:39.514959 kernel: CPU topo: Max. logical dies: 1 Jan 20 15:01:39.514971 kernel: CPU topo: Max. dies per package: 1 Jan 20 15:01:39.514983 kernel: CPU topo: Max. threads per core: 1 Jan 20 15:01:39.514996 kernel: CPU topo: Num. cores per package: 4 Jan 20 15:01:39.515007 kernel: CPU topo: Num. threads per package: 4 Jan 20 15:01:39.515018 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 20 15:01:39.515028 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 20 15:01:39.515044 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 20 15:01:39.515055 kernel: kvm-guest: setup PV sched yield Jan 20 15:01:39.515063 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 20 15:01:39.515070 kernel: Booting paravirtualized kernel on KVM Jan 20 15:01:39.515077 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 20 15:01:39.515085 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 20 15:01:39.515092 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 20 15:01:39.515101 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 20 15:01:39.515108 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 20 15:01:39.515116 kernel: kvm-guest: PV spinlocks enabled Jan 20 15:01:39.515172 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 20 15:01:39.515188 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=12b88438810927d105cc313bb8ab13d0435c94d44cc3ab3377801865133595f9 Jan 20 15:01:39.515201 kernel: random: crng init done Jan 20 15:01:39.515213 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 20 15:01:39.515229 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 20 
15:01:39.515240 kernel: Fallback order for Node 0: 0 Jan 20 15:01:39.515253 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Jan 20 15:01:39.515265 kernel: Policy zone: DMA32 Jan 20 15:01:39.515276 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 20 15:01:39.515288 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 20 15:01:39.515299 kernel: ftrace: allocating 40128 entries in 157 pages Jan 20 15:01:39.515314 kernel: ftrace: allocated 157 pages with 5 groups Jan 20 15:01:39.515325 kernel: Dynamic Preempt: voluntary Jan 20 15:01:39.515337 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 20 15:01:39.515349 kernel: rcu: RCU event tracing is enabled. Jan 20 15:01:39.515364 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 20 15:01:39.515376 kernel: Trampoline variant of Tasks RCU enabled. Jan 20 15:01:39.515389 kernel: Rude variant of Tasks RCU enabled. Jan 20 15:01:39.515400 kernel: Tracing variant of Tasks RCU enabled. Jan 20 15:01:39.515416 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 20 15:01:39.515427 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 20 15:01:39.515439 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 15:01:39.515452 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 15:01:39.515464 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 15:01:39.515478 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 20 15:01:39.515490 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 20 15:01:39.515517 kernel: Console: colour VGA+ 80x25 Jan 20 15:01:39.515529 kernel: printk: legacy console [ttyS0] enabled Jan 20 15:01:39.515543 kernel: ACPI: Core revision 20240827 Jan 20 15:01:39.515555 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 20 15:01:39.515567 kernel: APIC: Switch to symmetric I/O mode setup Jan 20 15:01:39.515578 kernel: x2apic enabled Jan 20 15:01:39.515590 kernel: APIC: Switched APIC routing to: physical x2apic Jan 20 15:01:39.515658 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 20 15:01:39.515670 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 20 15:01:39.515678 kernel: kvm-guest: setup PV IPIs Jan 20 15:01:39.515685 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 20 15:01:39.515693 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 20 15:01:39.515701 kernel: Calibrating delay loop (skipped) preset value.. 
4890.85 BogoMIPS (lpj=2445426) Jan 20 15:01:39.515710 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 20 15:01:39.515718 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 20 15:01:39.515725 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 20 15:01:39.515733 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 20 15:01:39.515740 kernel: Spectre V2 : Mitigation: Retpolines Jan 20 15:01:39.515748 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 20 15:01:39.515755 kernel: Speculative Store Bypass: Vulnerable Jan 20 15:01:39.515765 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 20 15:01:39.515773 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 20 15:01:39.515785 kernel: active return thunk: srso_alias_return_thunk Jan 20 15:01:39.515799 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 20 15:01:39.515811 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 20 15:01:39.515825 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 20 15:01:39.515838 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 20 15:01:39.515856 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 20 15:01:39.515868 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 20 15:01:39.515881 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 20 15:01:39.515894 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 20 15:01:39.515908 kernel: Freeing SMP alternatives memory: 32K Jan 20 15:01:39.515921 kernel: pid_max: default: 32768 minimum: 301 Jan 20 15:01:39.515936 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 20 15:01:39.515951 kernel: landlock: Up and running. Jan 20 15:01:39.515962 kernel: SELinux: Initializing. Jan 20 15:01:39.515975 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 15:01:39.515989 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 15:01:39.515998 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 20 15:01:39.516006 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 20 15:01:39.516013 kernel: signal: max sigframe size: 1776 Jan 20 15:01:39.516023 kernel: rcu: Hierarchical SRCU implementation. Jan 20 15:01:39.516031 kernel: rcu: Max phase no-delay instances is 400. Jan 20 15:01:39.516040 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 20 15:01:39.516054 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 20 15:01:39.516066 kernel: smp: Bringing up secondary CPUs ... Jan 20 15:01:39.516079 kernel: smpboot: x86: Booting SMP configuration: Jan 20 15:01:39.516091 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 20 15:01:39.516107 kernel: smp: Brought up 1 node, 4 CPUs Jan 20 15:01:39.516118 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 20 15:01:39.516181 kernel: Memory: 2445292K/2571752K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15540K init, 2496K bss, 120524K reserved, 0K cma-reserved) Jan 20 15:01:39.516194 kernel: devtmpfs: initialized Jan 20 15:01:39.516213 kernel: x86/mm: Memory block size: 128MB Jan 20 15:01:39.516226 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 15:01:39.516239 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 20 15:01:39.516257 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 15:01:39.516270 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 15:01:39.516282 kernel: audit: initializing netlink subsys (disabled) Jan 20 15:01:39.516295 kernel: audit: type=2000 audit(1768921294.934:1): state=initialized audit_enabled=0 res=1 Jan 20 15:01:39.516309 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 15:01:39.516322 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 20 15:01:39.516336 kernel: cpuidle: using governor menu Jan 20 15:01:39.516353 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 20 15:01:39.516366 kernel: dca service started, version 1.12.1 Jan 20 15:01:39.516379 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 20 15:01:39.516391 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 20 15:01:39.516403 kernel: PCI: Using configuration type 1 for base access Jan 20 15:01:39.516416 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 20 15:01:39.516428 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 20 15:01:39.516444 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 20 15:01:39.516459 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 20 15:01:39.516471 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 20 15:01:39.516485 kernel: ACPI: Added _OSI(Module Device) Jan 20 15:01:39.516499 kernel: ACPI: Added _OSI(Processor Device) Jan 20 15:01:39.516511 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 20 15:01:39.516523 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 20 15:01:39.516539 kernel: ACPI: Interpreter enabled Jan 20 15:01:39.516551 kernel: ACPI: PM: (supports S0 S3 S5) Jan 20 15:01:39.516564 kernel: ACPI: Using IOAPIC for interrupt routing Jan 20 15:01:39.516576 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 20 15:01:39.516589 kernel: PCI: Using E820 reservations for host bridge windows Jan 20 15:01:39.516657 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 20 15:01:39.516671 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 20 15:01:39.517025 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 20 15:01:39.517310 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 20 15:01:39.517522 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 20 15:01:39.517541 kernel: PCI host bridge to bus 0000:00 Jan 20 15:01:39.517830 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 20 15:01:39.518042 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 20 15:01:39.518254 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 20 15:01:39.518448 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 20 15:01:39.518722 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 20 15:01:39.518886 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 20 15:01:39.519044 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 20 15:01:39.519280 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 20 15:01:39.519531 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 20 15:01:39.519872 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Jan 20 15:01:39.520185 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Jan 20 15:01:39.520416 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Jan 20 15:01:39.520721 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 20 15:01:39.521007 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 20 15:01:39.521319 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Jan 20 15:01:39.521574 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Jan 20 15:01:39.521913 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Jan 20 15:01:39.522238 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 20 15:01:39.522509 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Jan 20 15:01:39.522862 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Jan 20 15:01:39.523196 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Jan 20 15:01:39.523567 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 20 15:01:39.524076 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Jan 20 15:01:39.524400 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Jan 20 15:01:39.524728 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 20 15:01:39.525004 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Jan 20 15:01:39.525441 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 20 15:01:39.525882 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 20 15:01:39.526210 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 20 15:01:39.526470 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Jan 20 15:01:39.526808 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Jan 20 15:01:39.527079 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 20 15:01:39.527372 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 20 15:01:39.527392 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 20 15:01:39.527405 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 20 15:01:39.527417 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 20 15:01:39.527435 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 20 15:01:39.527448 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 20 15:01:39.527461 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 20 15:01:39.527475 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 20 15:01:39.527488 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 20 15:01:39.527501 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 20 15:01:39.527513 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 20 15:01:39.527528 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 20 15:01:39.527540 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 20 15:01:39.527553 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 20 15:01:39.527566 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 20 15:01:39.527578 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 20 15:01:39.527590 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 20 15:01:39.527658 kernel: iommu: Default domain type: Translated Jan 20 15:01:39.527674 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 20 15:01:39.527686 kernel: PCI: Using ACPI for IRQ routing Jan 20 15:01:39.527699 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 20 15:01:39.527711 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 20 15:01:39.527723 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 20 15:01:39.527969 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 20 15:01:39.528290 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 20 15:01:39.528560 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 20 15:01:39.528579 kernel: vgaarb: loaded Jan 20 15:01:39.528593 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 20 15:01:39.528686 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 20 15:01:39.528701 kernel: clocksource: Switched to clocksource kvm-clock Jan 20 15:01:39.528714 kernel: VFS: Disk quotas dquot_6.6.0 Jan 20 
15:01:39.528726 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 20 15:01:39.528746 kernel: pnp: PnP ACPI init Jan 20 15:01:39.529028 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 20 15:01:39.529051 kernel: pnp: PnP ACPI: found 6 devices Jan 20 15:01:39.529065 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 20 15:01:39.529078 kernel: NET: Registered PF_INET protocol family Jan 20 15:01:39.529092 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 20 15:01:39.529110 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 20 15:01:39.529170 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 20 15:01:39.529186 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 20 15:01:39.529199 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 20 15:01:39.529210 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 20 15:01:39.529222 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 15:01:39.529235 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 15:01:39.529253 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 20 15:01:39.529266 kernel: NET: Registered PF_XDP protocol family Jan 20 15:01:39.529527 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 20 15:01:39.529842 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 20 15:01:39.530075 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 20 15:01:39.530343 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 20 15:01:39.530570 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 20 15:01:39.530878 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 20 15:01:39.530898 kernel: PCI: CLS 0 bytes, default 64 Jan 20 15:01:39.530911 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 20 15:01:39.530926 kernel: Initialise system trusted keyrings Jan 20 15:01:39.530940 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 20 15:01:39.530953 kernel: Key type asymmetric registered Jan 20 15:01:39.530965 kernel: Asymmetric key parser 'x509' registered Jan 20 15:01:39.530983 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 20 15:01:39.530996 kernel: io scheduler mq-deadline registered Jan 20 15:01:39.531011 kernel: io scheduler kyber registered Jan 20 15:01:39.531023 kernel: io scheduler bfq registered Jan 20 15:01:39.531035 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 20 15:01:39.531048 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 20 15:01:39.531061 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 20 15:01:39.531080 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 20 15:01:39.531095 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 15:01:39.531108 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 20 15:01:39.531157 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 20 15:01:39.531171 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 20 15:01:39.531183 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 20 15:01:39.531459 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 20 
15:01:39.531487 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jan 20 15:01:39.531855 kernel: rtc_cmos 00:04: registered as rtc0 Jan 20 15:01:39.532095 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T15:01:37 UTC (1768921297) Jan 20 15:01:39.532401 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 20 15:01:39.532423 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 20 15:01:39.532437 kernel: NET: Registered PF_INET6 protocol family Jan 20 15:01:39.532458 kernel: Segment Routing with IPv6 Jan 20 15:01:39.532471 kernel: In-situ OAM (IOAM) with IPv6 Jan 20 15:01:39.532484 kernel: NET: Registered PF_PACKET protocol family Jan 20 15:01:39.532497 kernel: Key type dns_resolver registered Jan 20 15:01:39.532510 kernel: IPI shorthand broadcast: enabled Jan 20 15:01:39.532525 kernel: sched_clock: Marking stable (2337049267, 434074724)->(2934729921, -163605930) Jan 20 15:01:39.532538 kernel: registered taskstats version 1 Jan 20 15:01:39.532551 kernel: Loading compiled-in X.509 certificates Jan 20 15:01:39.532569 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 34a030021dd6c1575d5ad60346eaf4cdadaee6ef' Jan 20 15:01:39.532584 kernel: Demotion targets for Node 0: null Jan 20 15:01:39.532597 kernel: Key type .fscrypt registered Jan 20 15:01:39.532704 kernel: Key type fscrypt-provisioning registered Jan 20 15:01:39.532716 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 20 15:01:39.532728 kernel: ima: Allocated hash algorithm: sha1 Jan 20 15:01:39.532746 kernel: ima: No architecture policies found Jan 20 15:01:39.532759 kernel: clk: Disabling unused clocks Jan 20 15:01:39.532773 kernel: Freeing unused kernel image (initmem) memory: 15540K Jan 20 15:01:39.532786 kernel: Write protecting the kernel read-only data: 47104k Jan 20 15:01:39.532799 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 20 15:01:39.532811 kernel: Run /init as init process Jan 20 15:01:39.532823 kernel: with arguments: Jan 20 15:01:39.532835 kernel: /init Jan 20 15:01:39.532852 kernel: with environment: Jan 20 15:01:39.532865 kernel: HOME=/ Jan 20 15:01:39.532877 kernel: TERM=linux Jan 20 15:01:39.532890 kernel: SCSI subsystem initialized Jan 20 15:01:39.532904 kernel: libata version 3.00 loaded. 
Jan 20 15:01:39.533228 kernel: ahci 0000:00:1f.2: version 3.0 Jan 20 15:01:39.533253 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 20 15:01:39.533570 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 20 15:01:39.533914 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 20 15:01:39.534217 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 20 15:01:39.534523 kernel: scsi host0: ahci Jan 20 15:01:39.534880 kernel: scsi host1: ahci Jan 20 15:01:39.535231 kernel: scsi host2: ahci Jan 20 15:01:39.535457 kernel: scsi host3: ahci Jan 20 15:01:39.535830 kernel: scsi host4: ahci Jan 20 15:01:39.536169 kernel: scsi host5: ahci Jan 20 15:01:39.536194 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Jan 20 15:01:39.536209 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Jan 20 15:01:39.536228 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Jan 20 15:01:39.536244 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Jan 20 15:01:39.536257 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Jan 20 15:01:39.536271 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Jan 20 15:01:39.536285 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 20 15:01:39.536299 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 20 15:01:39.536312 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 20 15:01:39.536331 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 20 15:01:39.536345 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 20 15:01:39.536359 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 20 15:01:39.536374 kernel: ata3.00: LPM support broken, forcing max_power Jan 20 15:01:39.536389 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 20 15:01:39.536403 kernel: ata3.00: applying bridge limits Jan 20 15:01:39.536416 kernel: ata3.00: LPM support broken, forcing max_power Jan 20 15:01:39.536433 kernel: ata3.00: configured for UDMA/100 Jan 20 15:01:39.536804 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 20 15:01:39.537105 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 20 15:01:39.537463 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 20 15:01:39.537485 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 15:01:39.537499 kernel: GPT:16515071 != 27000831 Jan 20 15:01:39.537519 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 15:01:39.537534 kernel: GPT:16515071 != 27000831 Jan 20 15:01:39.537548 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 15:01:39.537561 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 15:01:39.537918 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 20 15:01:39.537939 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 15:01:39.538245 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 20 15:01:39.538274 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 20 15:01:39.538286 kernel: device-mapper: uevent: version 1.0.3 Jan 20 15:01:39.538299 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 15:01:39.538312 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 20 15:01:39.538325 kernel: raid6: avx2x4 gen() 34189 MB/s Jan 20 15:01:39.538343 kernel: raid6: avx2x2 gen() 33191 MB/s Jan 20 15:01:39.538358 kernel: raid6: avx2x1 gen() 26018 MB/s Jan 20 15:01:39.538376 kernel: raid6: using algorithm avx2x4 gen() 34189 MB/s Jan 20 15:01:39.538389 kernel: raid6: .... xor() 4769 MB/s, rmw enabled Jan 20 15:01:39.538403 kernel: raid6: using avx2x2 recovery algorithm Jan 20 15:01:39.538416 kernel: xor: automatically using best checksumming function avx Jan 20 15:01:39.538431 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 15:01:39.538446 kernel: BTRFS: device fsid 17137bed-8163-406c-98f9-6d4bb6770bf0 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (182) Jan 20 15:01:39.538465 kernel: BTRFS info (device dm-0): first mount of filesystem 17137bed-8163-406c-98f9-6d4bb6770bf0 Jan 20 15:01:39.538480 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 15:01:39.538494 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 15:01:39.538509 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 15:01:39.538522 kernel: loop: module loaded Jan 20 15:01:39.538541 kernel: loop0: detected capacity change from 0 to 100552 Jan 20 15:01:39.538555 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 15:01:39.538570 systemd[1]: Successfully made /usr/ read-only. Jan 20 15:01:39.538589 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 15:01:39.538728 systemd[1]: Detected virtualization kvm. Jan 20 15:01:39.538748 systemd[1]: Detected architecture x86-64. Jan 20 15:01:39.538769 systemd[1]: Running in initrd. Jan 20 15:01:39.538784 systemd[1]: No hostname configured, using default hostname. Jan 20 15:01:39.538798 systemd[1]: Hostname set to <localhost>. Jan 20 15:01:39.538811 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 15:01:39.538823 systemd[1]: Queued start job for default target initrd.target. Jan 20 15:01:39.538836 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 15:01:39.538854 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 15:01:39.538869 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 15:01:39.538884 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 15:01:39.538899 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 15:01:39.538917 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 15:01:39.538932 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 15:01:39.538951 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 15:01:39.538966 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 15:01:39.538979 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 15:01:39.538992 systemd[1]: Reached target paths.target - Path Units. Jan 20 15:01:39.539007 systemd[1]: Reached target slices.target - Slice Units. Jan 20 15:01:39.539021 systemd[1]: Reached target swap.target - Swaps. Jan 20 15:01:39.539035 systemd[1]: Reached target timers.target - Timer Units. Jan 20 15:01:39.539054 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 15:01:39.539069 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 15:01:39.539084 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 15:01:39.539099 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 15:01:39.539113 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 20 15:01:39.539183 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 15:01:39.539199 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 15:01:39.539220 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 15:01:39.539235 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 15:01:39.539250 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 15:01:39.539266 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 15:01:39.539282 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 15:01:39.539296 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 15:01:39.539318 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 15:01:39.539332 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 15:01:39.539345 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 15:01:39.539357 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 15:01:39.539372 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 15:01:39.539392 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 15:01:39.539407 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 15:01:39.539422 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 15:01:39.539437 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 15:01:39.539502 systemd-journald[320]: Collecting audit messages is enabled. Jan 20 15:01:39.539543 systemd-journald[320]: Journal started Jan 20 15:01:39.539571 systemd-journald[320]: Runtime Journal (/run/log/journal/664175ccd5e645ae931d8eb5935c8b51) is 6M, max 48.2M, 42.1M free. Jan 20 15:01:39.540678 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 15:01:39.559661 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 20 15:01:39.562798 systemd-modules-load[321]: Inserted module 'br_netfilter' Jan 20 15:01:39.729395 kernel: Bridge firewalling registered Jan 20 15:01:39.729429 kernel: audit: type=1130 audit(1768921299.718:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:39.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:39.720118 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 15:01:39.747698 kernel: audit: type=1130 audit(1768921299.733:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:39.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:39.747869 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 15:01:39.765487 kernel: audit: type=1130 audit(1768921299.748:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:39.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:39.765586 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 15:01:39.783037 kernel: audit: type=1130 audit(1768921299.766:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:39.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:39.787350 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 15:01:39.790691 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 15:01:39.809753 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 15:01:39.813079 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 15:01:39.828413 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 15:01:39.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:39.845678 kernel: audit: type=1130 audit(1768921299.836:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:39.845264 systemd-tmpfiles[343]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Jan 20 15:01:39.852072 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 15:01:39.864396 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 15:01:39.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:39.935531 kernel: audit: type=1130 audit(1768921299.870:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:39.874232 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 15:01:39.946110 kernel: audit: type=1130 audit(1768921299.935:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:39.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:39.953488 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 15:01:40.017713 kernel: audit: type=1130 audit(1768921299.954:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:40.017758 kernel: audit: type=1334 audit(1768921299.957:10): prog-id=6 op=LOAD Jan 20 15:01:39.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:39.957000 audit: BPF prog-id=6 op=LOAD Jan 20 15:01:39.962826 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 15:01:40.045256 dracut-cmdline[355]: dracut-109 Jan 20 15:01:40.050195 dracut-cmdline[355]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=12b88438810927d105cc313bb8ab13d0435c94d44cc3ab3377801865133595f9 Jan 20 15:01:40.102689 systemd-resolved[360]: Positive Trust Anchors: Jan 20 15:01:40.102721 systemd-resolved[360]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 15:01:40.102728 systemd-resolved[360]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 15:01:40.102773 systemd-resolved[360]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 15:01:40.135348 systemd-resolved[360]: Defaulting to hostname 'linux'. 
Jan 20 15:01:40.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:40.136988 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 15:01:40.141191 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 15:01:40.248688 kernel: Loading iSCSI transport class v2.0-870. Jan 20 15:01:40.268743 kernel: iscsi: registered transport (tcp) Jan 20 15:01:40.293769 kernel: iscsi: registered transport (qla4xxx) Jan 20 15:01:40.293832 kernel: QLogic iSCSI HBA Driver Jan 20 15:01:40.334015 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 15:01:40.368526 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 15:01:40.386681 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 15:01:40.386713 kernel: audit: type=1130 audit(1768921300.372:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:40.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:40.374653 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 15:01:40.463458 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 15:01:40.480435 kernel: audit: type=1130 audit(1768921300.464:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:40.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:40.467396 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 15:01:40.483252 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 15:01:40.540229 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 15:01:40.558087 kernel: audit: type=1130 audit(1768921300.541:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:40.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:40.558000 audit: BPF prog-id=7 op=LOAD Jan 20 15:01:40.560024 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 15:01:40.573868 kernel: audit: type=1334 audit(1768921300.558:15): prog-id=7 op=LOAD Jan 20 15:01:40.573890 kernel: audit: type=1334 audit(1768921300.558:16): prog-id=8 op=LOAD Jan 20 15:01:40.558000 audit: BPF prog-id=8 op=LOAD Jan 20 15:01:40.601254 systemd-udevd[587]: Using default interface naming scheme 'v257'. Jan 20 15:01:40.617395 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 20 15:01:40.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:40.631932 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 15:01:40.645849 kernel: audit: type=1130 audit(1768921300.628:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:40.674498 dracut-pre-trigger[644]: rd.md=0: removing MD RAID activation Jan 20 15:01:40.713801 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 15:01:40.731360 kernel: audit: type=1130 audit(1768921300.715:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:40.731409 kernel: audit: type=1334 audit(1768921300.715:19): prog-id=9 op=LOAD Jan 20 15:01:40.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:40.715000 audit: BPF prog-id=9 op=LOAD Jan 20 15:01:40.718217 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 15:01:40.750956 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 15:01:40.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:40.761456 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 15:01:40.769217 kernel: audit: type=1130 audit(1768921300.757:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:40.824713 systemd-networkd[722]: lo: Link UP Jan 20 15:01:40.824753 systemd-networkd[722]: lo: Gained carrier Jan 20 15:01:40.843887 kernel: audit: type=1130 audit(1768921300.830:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:40.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:40.825797 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 15:01:40.831724 systemd[1]: Reached target network.target - Network. Jan 20 15:01:40.893820 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 15:01:40.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:40.905462 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 15:01:40.976827 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 20 15:01:41.012534 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Jan 20 15:01:41.064780 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 15:01:41.058523 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 15:01:41.065997 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 15:01:41.066003 systemd-networkd[722]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 15:01:41.098727 kernel: AES CTR mode by8 optimization enabled Jan 20 15:01:41.067260 systemd-networkd[722]: eth0: Link UP Jan 20 15:01:41.067501 systemd-networkd[722]: eth0: Gained carrier Jan 20 15:01:41.067511 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 15:01:41.134538 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 20 15:01:41.106536 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 15:01:41.125410 systemd-networkd[722]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 15:01:41.135942 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 15:01:41.159988 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 15:01:41.160126 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 15:01:41.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:41.171695 disk-uuid[830]: Primary Header is updated. Jan 20 15:01:41.171695 disk-uuid[830]: Secondary Entries is updated. Jan 20 15:01:41.171695 disk-uuid[830]: Secondary Header is updated. Jan 20 15:01:41.165564 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 15:01:41.178434 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 15:01:41.277027 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 15:01:41.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:41.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:41.440113 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 15:01:41.446071 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 15:01:41.453224 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 15:01:41.458744 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 15:01:41.463860 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 15:01:41.507004 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 15:01:41.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:01:42.230993 disk-uuid[831]: Warning: The kernel is still using the old partition table. Jan 20 15:01:42.230993 disk-uuid[831]: The new table will be used at the next reboot or after you Jan 20 15:01:42.230993 disk-uuid[831]: run partprobe(8) or kpartx(8) Jan 20 15:01:42.230993 disk-uuid[831]: The operation has completed successfully. Jan 20 15:01:42.244470 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 15:01:42.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:42.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:42.244674 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 15:01:42.250007 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 15:01:42.293727 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (858) Jan 20 15:01:42.300047 kernel: BTRFS info (device vda6): first mount of filesystem 942b9c6f-515e-4c56-bf89-1c8ad8ddeab7 Jan 20 15:01:42.300074 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 15:01:42.300361 systemd-networkd[722]: eth0: Gained IPv6LL Jan 20 15:01:42.309978 kernel: BTRFS info (device vda6): turning on async discard Jan 20 15:01:42.310022 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 15:01:42.323695 kernel: BTRFS info (device vda6): last unmount of filesystem 942b9c6f-515e-4c56-bf89-1c8ad8ddeab7 Jan 20 15:01:42.326034 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 15:01:42.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:42.332333 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
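The eth0 entries earlier in this stretch show systemd-networkd matching the coldplugged interface against the catch-all unit /usr/lib/systemd/network/zz-default.network and then acquiring a DHCPv4 lease (10.0.0.133/16, gateway 10.0.0.1). The shipped file is not reproduced in the log; a catch-all DHCP unit of this kind is typically little more than the following sketch, not the verbatim Flatcar file:

    [Match]
    Name=*

    [Network]
    DHCP=yes

A more specifically named .network file dropped into /etc/systemd/network that sorts earlier than zz-default.network would normally match first and take over configuration of that interface.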
Jan 20 15:01:42.469593 ignition[877]: Ignition 2.24.0 Jan 20 15:01:42.469679 ignition[877]: Stage: fetch-offline Jan 20 15:01:42.469724 ignition[877]: no configs at "/usr/lib/ignition/base.d" Jan 20 15:01:42.469737 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 15:01:42.469816 ignition[877]: parsed url from cmdline: "" Jan 20 15:01:42.469821 ignition[877]: no config URL provided Jan 20 15:01:42.469826 ignition[877]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 15:01:42.469836 ignition[877]: no config at "/usr/lib/ignition/user.ign" Jan 20 15:01:42.469875 ignition[877]: op(1): [started] loading QEMU firmware config module Jan 20 15:01:42.469880 ignition[877]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 20 15:01:42.500377 ignition[877]: op(1): [finished] loading QEMU firmware config module Jan 20 15:01:42.576065 ignition[877]: parsing config with SHA512: 0bc3e34e5c8ffac6e893a7dcbf86664c9cad2b2dcc063aa06414fff1691cb4afa204b24a98c345bdb5d1951758201812cd3d9c7b2653560500b9f69ca0c36c93 Jan 20 15:01:42.581958 unknown[877]: fetched base config from "system" Jan 20 15:01:42.581992 unknown[877]: fetched user config from "qemu" Jan 20 15:01:42.582389 ignition[877]: fetch-offline: fetch-offline passed Jan 20 15:01:42.582452 ignition[877]: Ignition finished successfully Jan 20 15:01:42.593928 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 15:01:42.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:42.601367 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 20 15:01:42.607426 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 15:01:42.654480 ignition[886]: Ignition 2.24.0 Jan 20 15:01:42.654514 ignition[886]: Stage: kargs Jan 20 15:01:42.654737 ignition[886]: no configs at "/usr/lib/ignition/base.d" Jan 20 15:01:42.654751 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 15:01:42.655522 ignition[886]: kargs: kargs passed Jan 20 15:01:42.655560 ignition[886]: Ignition finished successfully Jan 20 15:01:42.671512 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 15:01:42.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:42.678793 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 15:01:42.717751 ignition[893]: Ignition 2.24.0 Jan 20 15:01:42.717783 ignition[893]: Stage: disks Jan 20 15:01:42.717929 ignition[893]: no configs at "/usr/lib/ignition/base.d" Jan 20 15:01:42.717939 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 15:01:42.718888 ignition[893]: disks: disks passed Jan 20 15:01:42.718933 ignition[893]: Ignition finished successfully Jan 20 15:01:42.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:42.729700 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 15:01:42.732893 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
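In the fetch-offline stage just logged, Ignition finds no config URL on the command line and no /usr/lib/ignition/user.ign, loads the qemu_fw_cfg module, and then fetches and parses a user config from the hypervisor ("fetched user config from qemu"). On QEMU that config is handed to the guest through the firmware configuration device, roughly like this (sketch only; the rest of the QEMU command line is omitted, and the fw_cfg key shown is the one Flatcar's QEMU documentation uses, which is an assumption here):

    qemu-system-x86_64 ... \
        -fw_cfg name=opt/org.flatcar-linux/config,file=./config.ign

Upstream Ignition on Fedora CoreOS looks for opt/com.coreos/config instead, so the key has to match the distribution being booted.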
Jan 20 15:01:42.737286 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 15:01:42.743474 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 15:01:42.751414 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 15:01:42.757370 systemd[1]: Reached target basic.target - Basic System. Jan 20 15:01:42.764518 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 15:01:42.803268 systemd-fsck[902]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 20 15:01:42.809734 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 15:01:42.813102 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 15:01:42.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:42.956666 kernel: EXT4-fs (vda9): mounted filesystem 258d228c-90db-4a07-8ba3-cf3df974c261 r/w with ordered data mode. Quota mode: none. Jan 20 15:01:42.957544 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 15:01:42.962783 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 15:01:42.966795 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 15:01:42.973086 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 15:01:42.979816 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 15:01:42.979880 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 15:01:42.979903 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 15:01:43.006962 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 15:01:43.013742 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 20 15:01:43.028540 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (910) Jan 20 15:01:43.028560 kernel: BTRFS info (device vda6): first mount of filesystem 942b9c6f-515e-4c56-bf89-1c8ad8ddeab7 Jan 20 15:01:43.028572 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 15:01:43.036678 kernel: BTRFS info (device vda6): turning on async discard Jan 20 15:01:43.036703 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 15:01:43.038003 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 15:01:43.227895 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 15:01:43.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:43.238826 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 15:01:43.243794 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 15:01:43.276314 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 15:01:43.285045 kernel: BTRFS info (device vda6): last unmount of filesystem 942b9c6f-515e-4c56-bf89-1c8ad8ddeab7 Jan 20 15:01:43.296563 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
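This stretch is the initrd assembling the real root: the ext4 ROOT filesystem (vda9) is checked and found clean, mounted at /sysroot, and the BTRFS OEM partition (/dev/vda6) is mounted at /sysroot/oem so initrd-setup-root can populate the root filesystem. The "ROOT: clean, 15/456736 files" line is e2fsck preen output. A rough manual equivalent from an emergency shell would be (hypothetical commands; device names taken from the log above):

    e2fsck -p /dev/disk/by-label/ROOT            # preen-mode check, as driven by systemd-fsck
    mount /dev/disk/by-label/ROOT /sysroot       # ext4 on vda9
    mount /dev/disk/by-label/OEM /sysroot/oem    # btrfs on vda6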
Jan 20 15:01:43.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:43.334696 ignition[1010]: INFO : Ignition 2.24.0 Jan 20 15:01:43.334696 ignition[1010]: INFO : Stage: mount Jan 20 15:01:43.339865 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 15:01:43.339865 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 15:01:43.339865 ignition[1010]: INFO : mount: mount passed Jan 20 15:01:43.339865 ignition[1010]: INFO : Ignition finished successfully Jan 20 15:01:43.353111 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 15:01:43.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:43.358481 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 15:01:43.959418 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 15:01:43.984730 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1020) Jan 20 15:01:43.984788 kernel: BTRFS info (device vda6): first mount of filesystem 942b9c6f-515e-4c56-bf89-1c8ad8ddeab7 Jan 20 15:01:43.991207 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 15:01:44.000954 kernel: BTRFS info (device vda6): turning on async discard Jan 20 15:01:44.001007 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 15:01:44.002934 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 15:01:44.042081 ignition[1037]: INFO : Ignition 2.24.0 Jan 20 15:01:44.042081 ignition[1037]: INFO : Stage: files Jan 20 15:01:44.047241 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 15:01:44.047241 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 15:01:44.047241 ignition[1037]: DEBUG : files: compiled without relabeling support, skipping Jan 20 15:01:44.057401 ignition[1037]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 15:01:44.057401 ignition[1037]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 15:01:44.057401 ignition[1037]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 15:01:44.070142 ignition[1037]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 15:01:44.075057 unknown[1037]: wrote ssh authorized keys file for user: core Jan 20 15:01:44.078730 ignition[1037]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 15:01:44.078730 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 15:01:44.078730 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 20 15:01:44.128017 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 15:01:44.216679 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 15:01:44.216679 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/home/core/install.sh" Jan 20 15:01:44.229050 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 15:01:44.229050 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 15:01:44.229050 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 15:01:44.229050 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 15:01:44.229050 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 15:01:44.229050 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 15:01:44.229050 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 15:01:44.229050 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 15:01:44.229050 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 15:01:44.229050 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 15:01:44.229050 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 15:01:44.229050 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 15:01:44.229050 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 20 15:01:44.646099 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 15:01:45.373692 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 15:01:45.373692 ignition[1037]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 15:01:45.386211 ignition[1037]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 15:01:45.386211 ignition[1037]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 15:01:45.386211 ignition[1037]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 15:01:45.386211 ignition[1037]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 20 15:01:45.386211 ignition[1037]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 15:01:45.386211 ignition[1037]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 
20 15:01:45.386211 ignition[1037]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 20 15:01:45.386211 ignition[1037]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 20 15:01:45.433923 ignition[1037]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 15:01:45.454051 ignition[1037]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 15:01:45.461027 ignition[1037]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 20 15:01:45.461027 ignition[1037]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 20 15:01:45.461027 ignition[1037]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 15:01:45.476942 ignition[1037]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 15:01:45.476942 ignition[1037]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 15:01:45.476942 ignition[1037]: INFO : files: files passed Jan 20 15:01:45.476942 ignition[1037]: INFO : Ignition finished successfully Jan 20 15:01:45.508170 kernel: kauditd_printk_skb: 15 callbacks suppressed Jan 20 15:01:45.508233 kernel: audit: type=1130 audit(1768921305.482:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:45.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:45.483302 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 15:01:45.488773 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 15:01:45.497297 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 15:01:45.527902 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 15:01:45.528068 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 15:01:45.550905 kernel: audit: type=1130 audit(1768921305.529:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:45.550933 kernel: audit: type=1131 audit(1768921305.529:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:45.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:45.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:01:45.556336 initrd-setup-root-after-ignition[1068]: grep: /sysroot/oem/oem-release: No such file or directory Jan 20 15:01:45.563684 initrd-setup-root-after-ignition[1070]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 15:01:45.563684 initrd-setup-root-after-ignition[1070]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 15:01:45.573702 initrd-setup-root-after-ignition[1074]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 15:01:45.576733 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 15:01:45.580852 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 15:01:45.588375 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 15:01:45.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:45.607733 kernel: audit: type=1130 audit(1768921305.579:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:45.686736 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 15:01:45.689830 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 15:01:45.712014 kernel: audit: type=1130 audit(1768921305.691:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:45.712049 kernel: audit: type=1131 audit(1768921305.691:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:45.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:45.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:45.712093 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 15:01:45.713299 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 15:01:45.724122 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 15:01:45.729036 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 15:01:45.777860 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 15:01:45.791748 kernel: audit: type=1130 audit(1768921305.779:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:45.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:01:45.781256 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 15:01:45.825092 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 15:01:45.825295 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 15:01:45.835874 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 15:01:45.843049 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 15:01:45.844422 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 15:01:45.844553 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 15:01:45.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:45.858883 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 15:01:45.871591 kernel: audit: type=1131 audit(1768921305.854:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:45.867566 systemd[1]: Stopped target basic.target - Basic System. Jan 20 15:01:45.873422 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 15:01:45.878276 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 15:01:45.890295 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 15:01:45.891675 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 20 15:01:45.898431 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 15:01:45.908895 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 15:01:45.910673 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 15:01:45.918259 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 15:01:45.923512 systemd[1]: Stopped target swap.target - Swaps. Jan 20 15:01:45.932016 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 15:01:45.932169 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 15:01:45.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:45.941487 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 15:01:45.949475 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 15:01:45.961716 kernel: audit: type=1131 audit(1768921305.937:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:45.957694 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 15:01:45.961783 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 15:01:45.968767 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 15:01:45.990963 kernel: audit: type=1131 audit(1768921305.971:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:01:45.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:45.968904 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 15:01:45.991147 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 15:01:45.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:45.991381 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 15:01:45.993510 systemd[1]: Stopped target paths.target - Path Units. Jan 20 15:01:46.002349 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 15:01:46.009565 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 15:01:46.017055 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 15:01:46.024305 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 15:01:46.025569 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 15:01:46.025740 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 15:01:46.031291 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 15:01:46.031394 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 15:01:46.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.036585 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 20 15:01:46.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.036755 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 20 15:01:46.041449 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 15:01:46.041560 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 15:01:46.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.047525 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 15:01:46.047717 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 15:01:46.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.056593 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 15:01:46.064971 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 15:01:46.070392 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
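The files stage recorded a little earlier (the ignition[1037] lines) is driven by the user config fetched from QEMU: SSH keys for the core user, the Helm tarball pulled into /opt, several manifests under /home/core, /etc/flatcar/update.conf, the Kubernetes sysext image plus its /etc/extensions symlink, and enable/disable handling for prepare-helm.service and coreos-metadata.service. A Butane source producing roughly that set of writes might look like the sketch below (illustrative only; the variant/version pin, inline contents, and the unit body are assumptions, not taken from the log):

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA... user@example        # placeholder key
    storage:
      files:
        - path: /opt/helm-v3.17.0-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz
        - path: /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw
          contents:
            source: https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw
        - path: /etc/flatcar/update.conf
          contents:
            inline: |
              # contents assumed; the log only records that the file was written
              REBOOT_STRATEGY=off
        # /home/core/install.sh, nginx.yaml, nfs-pod.yaml and nfs-pvc.yaml are written the same way
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
          contents: |
            # unit body is not recorded in the log; typically a oneshot that unpacks the tarball
            [Service]
            Type=oneshot
            ExecStart=/usr/bin/tar xf /opt/helm-v3.17.0-linux-amd64.tar.gz -C /opt/bin
            [Install]
            WantedBy=multi-user.target
        - name: coreos-metadata.service
          enabled: false

Transpiling with butane (butane --strict < config.bu > config.ign) yields the JSON document Ignition actually parses; the SHA512 logged in the fetch-offline stage is the hash of that fetched JSON.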
Jan 20 15:01:46.070548 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 15:01:46.072477 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 15:01:46.072696 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 15:01:46.080482 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 15:01:46.080817 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 15:01:46.123059 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 15:01:46.123299 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 15:01:46.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.149682 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 15:01:46.206239 ignition[1094]: INFO : Ignition 2.24.0 Jan 20 15:01:46.206239 ignition[1094]: INFO : Stage: umount Jan 20 15:01:46.206239 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 15:01:46.206239 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 15:01:46.222693 ignition[1094]: INFO : umount: umount passed Jan 20 15:01:46.225749 ignition[1094]: INFO : Ignition finished successfully Jan 20 15:01:46.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.226312 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 15:01:46.226463 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 15:01:46.233760 systemd[1]: Stopped target network.target - Network. Jan 20 15:01:46.238474 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 15:01:46.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.238569 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 15:01:46.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.249032 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 15:01:46.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.249107 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 15:01:46.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.255441 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 15:01:46.255518 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jan 20 15:01:46.257348 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 15:01:46.257420 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 15:01:46.266736 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 15:01:46.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.272597 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 15:01:46.292866 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 15:01:46.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.293071 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 15:01:46.320000 audit: BPF prog-id=6 op=UNLOAD Jan 20 15:01:46.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.308080 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 15:01:46.308223 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 15:01:46.316920 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 15:01:46.316985 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 15:01:46.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.333998 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 15:01:46.334245 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 15:01:46.344684 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 20 15:01:46.351689 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 15:01:46.351758 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 15:01:46.360750 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 15:01:46.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.366657 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 15:01:46.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.366716 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 15:01:46.368527 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 15:01:46.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.368579 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 15:01:46.376507 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Jan 20 15:01:46.376578 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 15:01:46.389034 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 15:01:46.421000 audit: BPF prog-id=9 op=UNLOAD Jan 20 15:01:46.426949 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 15:01:46.427220 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 15:01:46.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.429846 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 15:01:46.429894 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 15:01:46.443851 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 15:01:46.443899 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 15:01:46.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.452323 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 15:01:46.452379 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 15:01:46.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.456118 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 15:01:46.456169 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 15:01:46.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.472365 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 15:01:46.472425 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 15:01:46.483097 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 15:01:46.484233 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 15:01:46.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.484289 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 15:01:46.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.498791 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 15:01:46.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.498846 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 20 15:01:46.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.500444 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 20 15:01:46.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.500490 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 15:01:46.501462 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 15:01:46.501508 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 15:01:46.502471 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 15:01:46.502517 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 15:01:46.564338 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 15:01:46.567409 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 15:01:46.571034 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 15:01:46.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:46.571161 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 15:01:46.577825 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 15:01:46.584042 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 15:01:46.603450 systemd[1]: Switching root. Jan 20 15:01:46.656344 systemd-journald[320]: Journal stopped Jan 20 15:01:48.345020 systemd-journald[320]: Received SIGTERM from PID 1 (systemd). Jan 20 15:01:48.345123 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 15:01:48.345145 kernel: SELinux: policy capability open_perms=1 Jan 20 15:01:48.345164 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 15:01:48.345183 kernel: SELinux: policy capability always_check_network=0 Jan 20 15:01:48.345258 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 15:01:48.345279 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 15:01:48.345302 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 15:01:48.345320 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 15:01:48.345343 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 15:01:48.345367 systemd[1]: Successfully loaded SELinux policy in 90.660ms. Jan 20 15:01:48.345393 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.492ms. 
Jan 20 15:01:48.345468 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 15:01:48.345489 systemd[1]: Detected virtualization kvm. Jan 20 15:01:48.345514 systemd[1]: Detected architecture x86-64. Jan 20 15:01:48.345532 systemd[1]: Detected first boot. Jan 20 15:01:48.345552 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 15:01:48.345570 zram_generator::config[1139]: No configuration found. Jan 20 15:01:48.345590 kernel: Guest personality initialized and is inactive Jan 20 15:01:48.345682 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 20 15:01:48.345702 kernel: Initialized host personality Jan 20 15:01:48.345720 kernel: NET: Registered PF_VSOCK protocol family Jan 20 15:01:48.345740 systemd[1]: Populated /etc with preset unit settings. Jan 20 15:01:48.345758 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 15:01:48.345777 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 15:01:48.345797 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 15:01:48.345824 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 15:01:48.345845 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 15:01:48.345865 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 15:01:48.345884 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 15:01:48.345904 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 15:01:48.345923 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 15:01:48.345942 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 15:01:48.346100 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 15:01:48.346120 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 15:01:48.346139 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 15:01:48.346158 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 15:01:48.346177 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 15:01:48.346240 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 15:01:48.346261 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 15:01:48.346283 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 15:01:48.346301 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 15:01:48.346320 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 15:01:48.346337 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 15:01:48.346355 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 15:01:48.346373 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
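"Initializing machine ID from SMBIOS/DMI UUID" means systemd seeded /etc/machine-id from the UUID the hypervisor exposes in the DMI tables instead of generating a random one, which keeps the machine ID stable for the same VM. On a KVM guest that UUID is the one QEMU/libvirt assigns to the domain, and it can be read from inside the guest (root required):

    cat /sys/class/dmi/id/product_uuid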
Jan 20 15:01:48.346394 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 15:01:48.346413 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 15:01:48.346432 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 15:01:48.346451 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 20 15:01:48.346468 systemd[1]: Reached target slices.target - Slice Units. Jan 20 15:01:48.346487 systemd[1]: Reached target swap.target - Swaps. Jan 20 15:01:48.346506 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 15:01:48.346526 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 15:01:48.346683 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 15:01:48.346704 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 15:01:48.346723 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 20 15:01:48.346741 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 15:01:48.346759 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 20 15:01:48.346778 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 20 15:01:48.346801 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 15:01:48.346819 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 15:01:48.346837 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 15:01:48.346857 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 15:01:48.346875 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 15:01:48.346892 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 15:01:48.346911 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 15:01:48.346933 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 15:01:48.346952 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 15:01:48.346970 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 15:01:48.346989 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 15:01:48.347006 systemd[1]: Reached target machines.target - Containers. Jan 20 15:01:48.347025 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 15:01:48.347043 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 15:01:48.347064 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 15:01:48.347129 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 15:01:48.347149 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 15:01:48.347168 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 15:01:48.347187 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
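The modprobe@*.service jobs being started here (configfs, dm_mod, drm, efi_pstore, and in the following lines fuse and loop) are instances of systemd's modprobe@.service template: each oneshot instance loads the kernel module named by its instance suffix and exits, which is why each one is later reported as "Deactivated successfully" as soon as it finishes. The same effect can be had manually:

    systemctl start modprobe@loop.service    # roughly equivalent to: modprobe loop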
Jan 20 15:01:48.347251 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 15:01:48.347269 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 15:01:48.347291 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 15:01:48.347309 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 15:01:48.347327 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 15:01:48.347344 kernel: fuse: init (API version 7.41) Jan 20 15:01:48.347362 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 15:01:48.347380 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 15:01:48.347402 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 15:01:48.347420 kernel: ACPI: bus type drm_connector registered Jan 20 15:01:48.347437 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 15:01:48.347456 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 15:01:48.347477 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 15:01:48.347495 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 15:01:48.347513 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 15:01:48.347557 systemd-journald[1224]: Collecting audit messages is enabled. Jan 20 15:01:48.347589 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 15:01:48.347689 systemd-journald[1224]: Journal started Jan 20 15:01:48.347727 systemd-journald[1224]: Runtime Journal (/run/log/journal/664175ccd5e645ae931d8eb5935c8b51) is 6M, max 48.2M, 42.1M free. Jan 20 15:01:47.933000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 20 15:01:48.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:01:48.275000 audit: BPF prog-id=14 op=UNLOAD Jan 20 15:01:48.275000 audit: BPF prog-id=13 op=UNLOAD Jan 20 15:01:48.277000 audit: BPF prog-id=15 op=LOAD Jan 20 15:01:48.279000 audit: BPF prog-id=16 op=LOAD Jan 20 15:01:48.279000 audit: BPF prog-id=17 op=LOAD Jan 20 15:01:48.341000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 15:01:48.341000 audit[1224]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffce0a78700 a2=4000 a3=0 items=0 ppid=1 pid=1224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:01:48.341000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 20 15:01:47.587836 systemd[1]: Queued start job for default target multi-user.target. Jan 20 15:01:47.608968 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 15:01:47.609723 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 15:01:47.610286 systemd[1]: systemd-journald.service: Consumed 1.134s CPU time. Jan 20 15:01:48.361668 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 15:01:48.370443 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 15:01:48.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.371971 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 15:01:48.375366 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 15:01:48.378870 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 15:01:48.382032 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 15:01:48.385529 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 15:01:48.389187 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 15:01:48.392569 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 15:01:48.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.396736 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 15:01:48.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.400979 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 15:01:48.401251 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 15:01:48.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:01:48.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.405253 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 15:01:48.405955 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 15:01:48.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.409964 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 15:01:48.410291 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 15:01:48.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.414111 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 15:01:48.414440 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 15:01:48.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.418908 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 15:01:48.419162 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 15:01:48.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.423427 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 15:01:48.423985 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 15:01:48.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:01:48.428262 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 15:01:48.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.432871 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 15:01:48.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.438023 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 15:01:48.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.442684 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 15:01:48.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.461992 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 15:01:48.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.466925 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 15:01:48.470912 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 20 15:01:48.476704 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 15:01:48.481440 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 15:01:48.485162 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 15:01:48.485254 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 15:01:48.489492 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 15:01:48.493877 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 15:01:48.494030 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 15:01:48.496857 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 15:01:48.501746 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 15:01:48.505464 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 15:01:48.506789 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 15:01:48.510798 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 20 15:01:48.513784 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 15:01:48.519722 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 15:01:48.523092 systemd-journald[1224]: Time spent on flushing to /var/log/journal/664175ccd5e645ae931d8eb5935c8b51 is 18.659ms for 1105 entries. Jan 20 15:01:48.523092 systemd-journald[1224]: System Journal (/var/log/journal/664175ccd5e645ae931d8eb5935c8b51) is 8M, max 163.5M, 155.5M free. Jan 20 15:01:48.560513 systemd-journald[1224]: Received client request to flush runtime journal. Jan 20 15:01:48.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.530770 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 15:01:48.538459 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 15:01:48.543822 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 15:01:48.548994 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 15:01:48.556897 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 15:01:48.563374 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 20 15:01:48.571714 kernel: loop1: detected capacity change from 0 to 171112 Jan 20 15:01:48.571048 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 15:01:48.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.577241 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 15:01:48.580698 kernel: loop1: p1 p2 p3 Jan 20 15:01:48.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.596133 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Jan 20 15:01:48.596162 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Jan 20 15:01:48.602350 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 15:01:48.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.608536 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 15:01:48.613804 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 15:01:48.615546 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 20 15:01:48.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.672669 kernel: erofs: (device loop1p1): mounted with root inode @ nid 39. 
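[Note] The systemd-journal-flush.service activity above moves the volatile runtime journal (/run/log/journal) into persistent storage (/var/log/journal). A minimal sketch of triggering and inspecting the same flush from userspace, assuming journalctl is on PATH and the caller has sufficient privileges, might look like:

    import subprocess

    # Ask journald to flush the volatile runtime journal (/run/log/journal)
    # into persistent storage (/var/log/journal), as systemd-journal-flush.service
    # is shown doing in the log above. Requires root privileges.
    subprocess.run(["journalctl", "--flush"], check=True)

    # Report how much space the journals occupy after the flush.
    usage = subprocess.run(["journalctl", "--disk-usage"],
                           capture_output=True, text=True, check=True)
    print(usage.stdout.strip())

This only reproduces the flush step; the size and retention limits reported in the journald messages above are governed by journald.conf, not by this sketch.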
Jan 20 15:01:48.691694 kernel: loop2: detected capacity change from 0 to 375256 Jan 20 15:01:48.696742 kernel: loop2: p1 p2 p3 Jan 20 15:01:48.704558 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 15:01:48.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.711000 audit: BPF prog-id=18 op=LOAD Jan 20 15:01:48.711000 audit: BPF prog-id=19 op=LOAD Jan 20 15:01:48.711000 audit: BPF prog-id=20 op=LOAD Jan 20 15:01:48.713244 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 20 15:01:48.717000 audit: BPF prog-id=21 op=LOAD Jan 20 15:01:48.720383 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 15:01:48.726786 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 15:01:48.730724 kernel: erofs: (device loop2p1): mounted with root inode @ nid 39. Jan 20 15:01:48.731000 audit: BPF prog-id=22 op=LOAD Jan 20 15:01:48.732000 audit: BPF prog-id=23 op=LOAD Jan 20 15:01:48.732000 audit: BPF prog-id=24 op=LOAD Jan 20 15:01:48.734755 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 20 15:01:48.747000 audit: BPF prog-id=25 op=LOAD Jan 20 15:01:48.747000 audit: BPF prog-id=26 op=LOAD Jan 20 15:01:48.748000 audit: BPF prog-id=27 op=LOAD Jan 20 15:01:48.749446 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 15:01:48.762651 kernel: loop3: detected capacity change from 0 to 224512 Jan 20 15:01:48.777424 systemd-tmpfiles[1283]: ACLs are not supported, ignoring. Jan 20 15:01:48.777931 systemd-tmpfiles[1283]: ACLs are not supported, ignoring. Jan 20 15:01:48.783886 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 15:01:48.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.796299 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 15:01:48.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.807731 kernel: loop4: detected capacity change from 0 to 171112 Jan 20 15:01:48.811723 kernel: loop4: p1 p2 p3 Jan 20 15:01:48.813595 systemd-nsresourced[1284]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 20 15:01:48.815439 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 20 15:01:48.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:01:48.847836 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 20 15:01:48.847906 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Jan 20 15:01:48.847926 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL) Jan 20 15:01:48.853037 kernel: device-mapper: ioctl: error adding target to table Jan 20 15:01:48.853122 (sd-merge)[1294]: device-mapper: reload ioctl on 8c7c96915202989b4a0dcbd1acd80ba2f75612a91a267e360f9baafdceea3d6f-verity (253:1) failed: Invalid argument Jan 20 15:01:48.864668 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 20 15:01:48.897450 systemd-oomd[1280]: No swap; memory pressure usage will be degraded Jan 20 15:01:48.898324 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 20 15:01:48.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.926913 systemd-resolved[1281]: Positive Trust Anchors: Jan 20 15:01:48.926950 systemd-resolved[1281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 15:01:48.926955 systemd-resolved[1281]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 15:01:48.926982 systemd-resolved[1281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 15:01:48.930717 systemd-resolved[1281]: Defaulting to hostname 'linux'. Jan 20 15:01:48.932801 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 15:01:48.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:48.936936 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 15:01:49.773951 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 15:01:49.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:49.779000 audit: BPF prog-id=8 op=UNLOAD Jan 20 15:01:49.779000 audit: BPF prog-id=7 op=UNLOAD Jan 20 15:01:49.780000 audit: BPF prog-id=28 op=LOAD Jan 20 15:01:49.780000 audit: BPF prog-id=29 op=LOAD Jan 20 15:01:49.782466 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 15:01:49.857059 systemd-udevd[1309]: Using default interface naming scheme 'v257'. Jan 20 15:01:49.891670 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
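[Note] The systemd-oomd message above ("No swap; memory pressure usage will be degraded") refers to the kernel's pressure-stall information (PSI), which the daemon consumes to judge memory pressure. A minimal sketch of reading that same data, assuming a kernel built with CONFIG_PSI so that /proc/pressure/memory exists (this is not systemd-oomd's actual decision logic), could be:

    # Read the pressure-stall information (PSI) that memory-pressure-based
    # OOM handling relies on. Lines look like:
    #   some avg10=0.00 avg60=0.00 avg300=0.00 total=0
    #   full avg10=0.00 avg60=0.00 avg300=0.00 total=0
    def read_memory_pressure(path="/proc/pressure/memory"):
        pressure = {}
        with open(path) as f:
            for line in f:
                kind, *fields = line.split()
                values = dict(field.split("=") for field in fields)
                pressure[kind] = float(values["avg10"])
        return pressure

    print(read_memory_pressure())   # e.g. {'some': 0.0, 'full': 0.0}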
Jan 20 15:01:49.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:49.896000 audit: BPF prog-id=30 op=LOAD Jan 20 15:01:49.899692 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 15:01:49.992004 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 15:01:50.016425 systemd-networkd[1315]: lo: Link UP Jan 20 15:01:50.016903 systemd-networkd[1315]: lo: Gained carrier Jan 20 15:01:50.019769 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 15:01:50.021772 systemd-networkd[1315]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 15:01:50.021953 systemd-networkd[1315]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 15:01:50.023072 systemd-networkd[1315]: eth0: Link UP Jan 20 15:01:50.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:50.023939 systemd-networkd[1315]: eth0: Gained carrier Jan 20 15:01:50.024038 systemd-networkd[1315]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 15:01:50.030050 systemd[1]: Reached target network.target - Network. Jan 20 15:01:50.039779 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 15:01:50.040667 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 20 15:01:50.040815 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 15:01:50.048038 systemd-networkd[1315]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 15:01:50.055426 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 15:01:50.062661 kernel: ACPI: button: Power Button [PWRF] Jan 20 15:01:50.084086 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 20 15:01:50.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:50.102736 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 15:01:50.103418 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 15:01:50.125051 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 15:01:50.136075 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 15:01:50.182785 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 15:01:50.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:01:50.272935 kernel: hrtimer: interrupt took 3337841 ns Jan 20 15:01:50.428907 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 15:01:50.533719 kernel: erofs: (device dm-1): mounted with root inode @ nid 39. Jan 20 15:01:50.540692 kernel: loop5: detected capacity change from 0 to 375256 Jan 20 15:01:50.548643 kernel: loop5: p1 p2 p3 Jan 20 15:01:50.591779 (sd-merge)[1294]: device-mapper: reload ioctl on 843577122f2bcae09e086c1955c04b6b28388e52152c2016187e408266e84aa6-verity (253:2) failed: Invalid argument Jan 20 15:01:50.593513 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 20 15:01:50.593559 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Jan 20 15:01:50.593578 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Jan 20 15:01:50.593596 kernel: device-mapper: ioctl: error adding target to table Jan 20 15:01:50.595662 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 20 15:01:50.611731 kernel: kvm_amd: TSC scaling supported Jan 20 15:01:50.611928 kernel: kvm_amd: Nested Virtualization enabled Jan 20 15:01:50.612021 kernel: kvm_amd: Nested Paging enabled Jan 20 15:01:50.612058 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 20 15:01:50.612090 kernel: kvm_amd: PMU virtualization is disabled Jan 20 15:01:50.688075 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. Jan 20 15:01:50.692788 kernel: loop6: detected capacity change from 0 to 224512 Jan 20 15:01:50.714960 kernel: EDAC MC: Ver: 3.0.0 Jan 20 15:01:50.734758 (sd-merge)[1294]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 20 15:01:50.745520 (sd-merge)[1294]: Merged extensions into '/usr'. Jan 20 15:01:50.750042 systemd[1]: Reload requested from client PID 1260 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 15:01:50.750084 systemd[1]: Reloading... Jan 20 15:01:50.831750 zram_generator::config[1408]: No configuration found. Jan 20 15:01:51.068887 systemd-networkd[1315]: eth0: Gained IPv6LL Jan 20 15:01:51.120030 systemd[1]: Reloading finished in 368 ms. Jan 20 15:01:51.163758 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 15:01:51.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:51.168920 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 15:01:51.170951 kernel: kauditd_printk_skb: 112 callbacks suppressed Jan 20 15:01:51.171012 kernel: audit: type=1130 audit(1768921311.167:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:51.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:51.194331 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
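[Note] The sd-merge messages above show systemd-sysext overlaying the 'containerd-flatcar.raw', 'docker-flatcar.raw' and 'kubernetes.raw' images onto /usr. Per the sysext format, each image carries a release file at usr/lib/extension-release.d/extension-release.<NAME>, which becomes visible in the merged tree. A minimal sketch that lists the active extensions by reading those files, assuming the merged hierarchy is mounted at /usr and ignoring value quoting, might be:

    import os

    # Enumerate merged sysext images via their extension-release metadata.
    # Which keys are present (ID, VERSION_ID, SYSEXT_LEVEL, ...) varies per image.
    release_dir = "/usr/lib/extension-release.d"
    for name in sorted(os.listdir(release_dir)):
        with open(os.path.join(release_dir, name)) as f:
            metadata = dict(line.strip().split("=", 1)
                            for line in f if "=" in line)
        print(name.removeprefix("extension-release."), metadata.get("ID"))

The authoritative view is whatever systemd-sysext itself reports; this is only an illustration of where the merge shown in the log gets its metadata from.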
Jan 20 15:01:51.194679 kernel: audit: type=1130 audit(1768921311.183:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:51.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:51.207681 kernel: audit: type=1130 audit(1768921311.197:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:51.211324 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 15:01:51.234889 systemd[1]: Starting ensure-sysext.service... Jan 20 15:01:51.239486 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 15:01:51.242000 audit: BPF prog-id=31 op=LOAD Jan 20 15:01:51.246659 kernel: audit: type=1334 audit(1768921311.242:160): prog-id=31 op=LOAD Jan 20 15:01:51.242000 audit: BPF prog-id=25 op=UNLOAD Jan 20 15:01:51.250778 kernel: audit: type=1334 audit(1768921311.242:161): prog-id=25 op=UNLOAD Jan 20 15:01:51.243000 audit: BPF prog-id=32 op=LOAD Jan 20 15:01:51.265282 kernel: audit: type=1334 audit(1768921311.243:162): prog-id=32 op=LOAD Jan 20 15:01:51.267395 kernel: audit: type=1334 audit(1768921311.243:163): prog-id=33 op=LOAD Jan 20 15:01:51.243000 audit: BPF prog-id=33 op=LOAD Jan 20 15:01:51.243000 audit: BPF prog-id=26 op=UNLOAD Jan 20 15:01:51.276498 kernel: audit: type=1334 audit(1768921311.243:164): prog-id=26 op=UNLOAD Jan 20 15:01:51.293894 kernel: audit: type=1334 audit(1768921311.243:165): prog-id=27 op=UNLOAD Jan 20 15:01:51.293947 kernel: audit: type=1334 audit(1768921311.243:166): prog-id=34 op=LOAD Jan 20 15:01:51.243000 audit: BPF prog-id=27 op=UNLOAD Jan 20 15:01:51.243000 audit: BPF prog-id=34 op=LOAD Jan 20 15:01:51.243000 audit: BPF prog-id=22 op=UNLOAD Jan 20 15:01:51.243000 audit: BPF prog-id=35 op=LOAD Jan 20 15:01:51.243000 audit: BPF prog-id=36 op=LOAD Jan 20 15:01:51.243000 audit: BPF prog-id=23 op=UNLOAD Jan 20 15:01:51.243000 audit: BPF prog-id=24 op=UNLOAD Jan 20 15:01:51.243000 audit: BPF prog-id=37 op=LOAD Jan 20 15:01:51.243000 audit: BPF prog-id=38 op=LOAD Jan 20 15:01:51.243000 audit: BPF prog-id=28 op=UNLOAD Jan 20 15:01:51.243000 audit: BPF prog-id=29 op=UNLOAD Jan 20 15:01:51.243000 audit: BPF prog-id=39 op=LOAD Jan 20 15:01:51.243000 audit: BPF prog-id=21 op=UNLOAD Jan 20 15:01:51.247000 audit: BPF prog-id=40 op=LOAD Jan 20 15:01:51.247000 audit: BPF prog-id=18 op=UNLOAD Jan 20 15:01:51.247000 audit: BPF prog-id=41 op=LOAD Jan 20 15:01:51.247000 audit: BPF prog-id=42 op=LOAD Jan 20 15:01:51.247000 audit: BPF prog-id=19 op=UNLOAD Jan 20 15:01:51.247000 audit: BPF prog-id=20 op=UNLOAD Jan 20 15:01:51.249000 audit: BPF prog-id=43 op=LOAD Jan 20 15:01:51.249000 audit: BPF prog-id=15 op=UNLOAD Jan 20 15:01:51.250000 audit: BPF prog-id=44 op=LOAD Jan 20 15:01:51.250000 audit: BPF prog-id=45 op=LOAD Jan 20 15:01:51.250000 audit: BPF prog-id=16 op=UNLOAD Jan 20 15:01:51.250000 audit: BPF prog-id=17 op=UNLOAD Jan 20 15:01:51.251000 audit: BPF prog-id=46 op=LOAD Jan 20 15:01:51.251000 audit: BPF prog-id=30 op=UNLOAD Jan 20 15:01:51.296036 systemd[1]: Reload requested from client PID 1446 
('systemctl') (unit ensure-sysext.service)... Jan 20 15:01:51.296081 systemd[1]: Reloading... Jan 20 15:01:51.458791 systemd-tmpfiles[1447]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 15:01:51.458876 systemd-tmpfiles[1447]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 15:01:51.459183 systemd-tmpfiles[1447]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 15:01:51.460700 systemd-tmpfiles[1447]: ACLs are not supported, ignoring. Jan 20 15:01:51.460777 systemd-tmpfiles[1447]: ACLs are not supported, ignoring. Jan 20 15:01:51.468505 systemd-tmpfiles[1447]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 15:01:51.468539 systemd-tmpfiles[1447]: Skipping /boot Jan 20 15:01:51.485423 systemd-tmpfiles[1447]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 15:01:51.485491 systemd-tmpfiles[1447]: Skipping /boot Jan 20 15:01:51.527759 zram_generator::config[1483]: No configuration found. Jan 20 15:01:51.748825 systemd[1]: Reloading finished in 451 ms. Jan 20 15:01:51.774000 audit: BPF prog-id=47 op=LOAD Jan 20 15:01:51.774000 audit: BPF prog-id=43 op=UNLOAD Jan 20 15:01:51.774000 audit: BPF prog-id=48 op=LOAD Jan 20 15:01:51.774000 audit: BPF prog-id=49 op=LOAD Jan 20 15:01:51.774000 audit: BPF prog-id=44 op=UNLOAD Jan 20 15:01:51.774000 audit: BPF prog-id=45 op=UNLOAD Jan 20 15:01:51.776000 audit: BPF prog-id=50 op=LOAD Jan 20 15:01:51.776000 audit: BPF prog-id=39 op=UNLOAD Jan 20 15:01:51.778000 audit: BPF prog-id=51 op=LOAD Jan 20 15:01:51.778000 audit: BPF prog-id=34 op=UNLOAD Jan 20 15:01:51.778000 audit: BPF prog-id=52 op=LOAD Jan 20 15:01:51.778000 audit: BPF prog-id=53 op=LOAD Jan 20 15:01:51.778000 audit: BPF prog-id=35 op=UNLOAD Jan 20 15:01:51.778000 audit: BPF prog-id=36 op=UNLOAD Jan 20 15:01:51.779000 audit: BPF prog-id=54 op=LOAD Jan 20 15:01:51.790000 audit: BPF prog-id=40 op=UNLOAD Jan 20 15:01:51.790000 audit: BPF prog-id=55 op=LOAD Jan 20 15:01:51.790000 audit: BPF prog-id=56 op=LOAD Jan 20 15:01:51.790000 audit: BPF prog-id=41 op=UNLOAD Jan 20 15:01:51.790000 audit: BPF prog-id=42 op=UNLOAD Jan 20 15:01:51.792000 audit: BPF prog-id=57 op=LOAD Jan 20 15:01:51.792000 audit: BPF prog-id=46 op=UNLOAD Jan 20 15:01:51.792000 audit: BPF prog-id=58 op=LOAD Jan 20 15:01:51.793000 audit: BPF prog-id=59 op=LOAD Jan 20 15:01:51.793000 audit: BPF prog-id=37 op=UNLOAD Jan 20 15:01:51.793000 audit: BPF prog-id=38 op=UNLOAD Jan 20 15:01:51.794000 audit: BPF prog-id=60 op=LOAD Jan 20 15:01:51.794000 audit: BPF prog-id=31 op=UNLOAD Jan 20 15:01:51.794000 audit: BPF prog-id=61 op=LOAD Jan 20 15:01:51.795000 audit: BPF prog-id=62 op=LOAD Jan 20 15:01:51.795000 audit: BPF prog-id=32 op=UNLOAD Jan 20 15:01:51.795000 audit: BPF prog-id=33 op=UNLOAD Jan 20 15:01:51.799298 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 15:01:51.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:51.815682 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 15:01:51.820299 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 15:01:51.830797 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
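[Note] The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") arise when two tmpfiles.d fragments declare the same path. A rough sketch of detecting such duplicates, assuming the standard /usr/lib/tmpfiles.d and /etc/tmpfiles.d fragment directories and with much simpler line parsing than systemd's own, could be:

    import glob
    from collections import defaultdict

    # Record which fragment files declare each path (field 2 of a tmpfiles.d line)
    # and report paths declared more than once, mirroring the warning above.
    seen = defaultdict(list)
    fragments = (glob.glob("/usr/lib/tmpfiles.d/*.conf")
                 + glob.glob("/etc/tmpfiles.d/*.conf"))
    for conf in fragments:
        with open(conf) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                fields = line.split()
                if len(fields) >= 2:
                    seen[fields[1]].append(conf)

    for path, sources in seen.items():
        if len(sources) > 1:
            print(f"duplicate path {path!r} declared in: {sources}")

Note that systemd additionally applies specifier expansion and override rules (/etc shadowing /usr) before deciding what counts as a duplicate, which this sketch does not model.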
Jan 20 15:01:51.837879 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 15:01:51.843746 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 15:01:51.851158 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 15:01:51.851397 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 15:01:51.860448 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 15:01:51.866881 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 15:01:51.919425 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 15:01:51.925360 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 15:01:51.928110 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 15:01:51.932990 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 15:01:51.933987 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 15:01:51.950388 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 15:01:51.950584 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 15:01:51.953696 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 15:01:51.953981 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 15:01:51.954062 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 15:01:51.954176 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 15:01:51.955018 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 15:01:51.955384 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 15:01:51.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:51.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:51.985330 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 15:01:51.985693 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 20 15:01:51.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:51.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:51.991561 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 15:01:51.993000 audit[1524]: SYSTEM_BOOT pid=1524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 20 15:01:52.001439 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 15:01:52.001981 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 15:01:52.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:52.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:01:52.010456 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 15:01:52.011085 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 15:01:52.015118 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 15:01:52.023095 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 15:01:52.026000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 20 15:01:52.027439 augenrules[1548]: No rules Jan 20 15:01:52.026000 audit[1548]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdd75791f0 a2=420 a3=0 items=0 ppid=1517 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:01:52.026000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 15:01:52.031265 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 15:01:52.035171 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 15:01:52.035835 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 15:01:52.036298 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 15:01:52.036886 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
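[Note] The PROCTITLE field in the auditctl record above is the audited process's command line, hex-encoded with NUL bytes separating the arguments. Decoding the value from that record directly:

    # Decode an audit PROCTITLE value: hex string of argv joined by NUL bytes.
    proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    argv = bytes.fromhex(proctitle).split(b"\x00")
    print([arg.decode() for arg in argv])
    # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']

which matches the augenrules/auditctl activity ("No rules") logged just before it.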
Jan 20 15:01:52.037326 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 15:01:52.042365 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 15:01:52.042952 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 15:01:52.049199 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 15:01:52.056075 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 15:01:52.056668 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 15:01:52.062397 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 15:01:52.062857 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 15:01:52.068111 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 15:01:52.068532 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 15:01:52.077901 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 15:01:52.086550 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 15:01:52.097978 systemd[1]: Finished ensure-sysext.service. Jan 20 15:01:52.109419 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 15:01:52.111775 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 15:01:52.116293 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 15:01:52.391485 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 15:01:52.397162 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 15:01:52.399731 systemd-timesyncd[1563]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 15:01:52.399783 systemd-timesyncd[1563]: Initial clock synchronization to Tue 2026-01-20 15:01:52.588765 UTC. Jan 20 15:01:53.129678 ldconfig[1519]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 15:01:53.137270 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 15:01:53.164715 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 15:01:53.296508 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 15:01:53.302890 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 15:01:53.307104 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 15:01:53.311790 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 15:01:53.316341 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 20 15:01:53.320956 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 15:01:53.325182 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 15:01:53.329906 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 20 15:01:53.334580 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. 
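[Note] systemd-timesyncd above reports contacting 10.0.0.1:123 and performing the initial clock synchronization over (S)NTP. A minimal SNTP client sketch in the same spirit, assuming UDP reachability to that server and ignoring delay/offset correction that a real client performs, might be:

    import socket, struct, time

    # Send a mode-3 (client) SNTP request and read the server's transmit
    # timestamp. 2208988800 is the offset between the NTP epoch (1900)
    # and the Unix epoch (1970). Server address taken from the log above.
    NTP_EPOCH_OFFSET = 2208988800
    packet = b"\x1b" + 47 * b"\0"          # LI=0, VN=3, Mode=3 (client)

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(2.0)
        s.sendto(packet, ("10.0.0.1", 123))
        reply, _ = s.recvfrom(48)

    transmit_secs = struct.unpack("!I", reply[40:44])[0] - NTP_EPOCH_OFFSET
    print("server time:", time.ctime(transmit_secs))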
Jan 20 15:01:53.338455 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 15:01:53.342724 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 15:01:53.342821 systemd[1]: Reached target paths.target - Path Units. Jan 20 15:01:53.345963 systemd[1]: Reached target timers.target - Timer Units. Jan 20 15:01:53.351314 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 15:01:53.357735 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 15:01:53.367150 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 15:01:53.371418 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 15:01:53.375806 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 15:01:53.383731 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 15:01:53.387466 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 15:01:53.392910 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 15:01:53.401410 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 15:01:53.404545 systemd[1]: Reached target basic.target - Basic System. Jan 20 15:01:53.407559 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 15:01:53.407620 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 15:01:53.409046 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 15:01:53.413593 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 15:01:53.418186 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 15:01:53.428513 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 15:01:53.444763 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 15:01:53.449982 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 15:01:53.454827 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 15:01:53.478530 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 20 15:01:53.512280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 15:01:53.514789 jq[1576]: false Jan 20 15:01:53.531474 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 15:01:53.540182 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 15:01:53.545750 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 15:01:53.564730 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 15:01:53.581110 extend-filesystems[1577]: Found /dev/vda6 Jan 20 15:01:53.581110 extend-filesystems[1577]: Found /dev/vda9 Jan 20 15:01:53.592200 extend-filesystems[1577]: Checking size of /dev/vda9 Jan 20 15:01:53.585357 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 20 15:01:53.595486 oslogin_cache_refresh[1578]: Refreshing passwd entry cache Jan 20 15:01:53.604826 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Refreshing passwd entry cache Jan 20 15:01:53.609122 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 15:01:53.613002 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 15:01:53.613916 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 15:01:53.615321 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 15:01:53.624988 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 15:01:53.632140 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Failure getting users, quitting Jan 20 15:01:53.632140 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 15:01:53.632114 oslogin_cache_refresh[1578]: Failure getting users, quitting Jan 20 15:01:53.632405 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Refreshing group entry cache Jan 20 15:01:53.632142 oslogin_cache_refresh[1578]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 15:01:53.632217 oslogin_cache_refresh[1578]: Refreshing group entry cache Jan 20 15:01:53.634036 extend-filesystems[1577]: Resized partition /dev/vda9 Jan 20 15:01:53.648936 oslogin_cache_refresh[1578]: Failure getting groups, quitting Jan 20 15:01:53.644434 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 15:01:53.652447 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Failure getting groups, quitting Jan 20 15:01:53.652447 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 15:01:53.648950 oslogin_cache_refresh[1578]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 15:01:53.648987 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 15:01:53.649293 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 15:01:53.651110 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 20 15:01:53.651424 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 20 15:01:53.656242 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 15:01:53.656549 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 15:01:53.661134 extend-filesystems[1611]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 15:01:53.665455 update_engine[1602]: I20260120 15:01:53.661382 1602 main.cc:92] Flatcar Update Engine starting Jan 20 15:01:53.666276 jq[1606]: true Jan 20 15:01:53.672706 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 20 15:01:53.671203 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 15:01:53.671491 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 15:01:53.678021 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 15:01:53.709099 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 15:01:53.710153 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Jan 20 15:01:53.714359 jq[1621]: true Jan 20 15:01:53.733101 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 15:01:53.745106 tar[1613]: linux-amd64/LICENSE Jan 20 15:01:53.748163 tar[1613]: linux-amd64/helm Jan 20 15:01:53.766140 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 20 15:01:53.921611 systemd-logind[1598]: Watching system buttons on /dev/input/event2 (Power Button) Jan 20 15:01:53.951699 extend-filesystems[1611]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 15:01:53.951699 extend-filesystems[1611]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 15:01:53.951699 extend-filesystems[1611]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 20 15:01:53.921751 systemd-logind[1598]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 15:01:53.975085 bash[1665]: Updated "/home/core/.ssh/authorized_keys" Jan 20 15:01:53.975202 extend-filesystems[1577]: Resized filesystem in /dev/vda9 Jan 20 15:01:53.929964 systemd-logind[1598]: New seat seat0. Jan 20 15:01:53.950465 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 15:01:53.950997 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 15:01:53.954015 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 15:01:53.961219 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 15:01:53.962229 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 15:01:54.004353 dbus-daemon[1574]: [system] SELinux support is enabled Jan 20 15:01:54.005021 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 15:01:54.014808 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 15:01:54.014897 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 15:01:54.019249 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 15:01:54.019298 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 15:01:54.038786 systemd[1]: Started update-engine.service - Update Engine. Jan 20 15:01:54.041464 update_engine[1602]: I20260120 15:01:54.034751 1602 update_check_scheduler.cc:74] Next update check in 4m34s Jan 20 15:01:54.049224 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 15:01:54.353569 sshd_keygen[1618]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 15:01:54.439587 locksmithd[1671]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 15:01:54.460520 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 15:01:54.478089 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 15:01:54.515458 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 15:01:54.516118 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 15:01:54.527828 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 15:01:54.563273 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
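[Note] The resize messages above grow the ext4 filesystem on /dev/vda9 from 456704 to 1784827 blocks, where both figures are counts of 4 KiB blocks. A quick check of what those counts mean in bytes:

    # Convert the ext4 block counts reported above into sizes.
    BLOCK_SIZE = 4096
    old_blocks, new_blocks = 456704, 1784827

    for label, blocks in (("before", old_blocks), ("after", new_blocks)):
        print(f"{label}: {blocks * BLOCK_SIZE / 2**30:.2f} GiB")
    # before: 1.74 GiB
    # after:  6.81 GiB

so the online resize performed by extend-filesystems.service takes the root partition from roughly 1.7 GiB to about 6.8 GiB.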
Jan 20 15:01:54.572352 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 15:01:54.582519 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 15:01:54.587543 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 15:01:54.663403 containerd[1623]: time="2026-01-20T15:01:54Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 15:01:54.665089 containerd[1623]: time="2026-01-20T15:01:54.664887402Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 20 15:01:54.679028 containerd[1623]: time="2026-01-20T15:01:54.678593264Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.123µs" Jan 20 15:01:54.679028 containerd[1623]: time="2026-01-20T15:01:54.678667750Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 15:01:54.679028 containerd[1623]: time="2026-01-20T15:01:54.678744670Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 15:01:54.679028 containerd[1623]: time="2026-01-20T15:01:54.678757167Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 15:01:54.679028 containerd[1623]: time="2026-01-20T15:01:54.678902857Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 15:01:54.679028 containerd[1623]: time="2026-01-20T15:01:54.678916601Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 15:01:54.679028 containerd[1623]: time="2026-01-20T15:01:54.678975707Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 15:01:54.679028 containerd[1623]: time="2026-01-20T15:01:54.678985677Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 15:01:54.680489 containerd[1623]: time="2026-01-20T15:01:54.680283615Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 15:01:54.680489 containerd[1623]: time="2026-01-20T15:01:54.680327372Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 15:01:54.680489 containerd[1623]: time="2026-01-20T15:01:54.680340379Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 15:01:54.680489 containerd[1623]: time="2026-01-20T15:01:54.680348540Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 15:01:54.680715 containerd[1623]: time="2026-01-20T15:01:54.680607871Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 15:01:54.680872 containerd[1623]: time="2026-01-20T15:01:54.680781744Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 
Jan 20 15:01:54.681172 containerd[1623]: time="2026-01-20T15:01:54.681063114Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 15:01:54.681172 containerd[1623]: time="2026-01-20T15:01:54.681163645Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 15:01:54.681219 containerd[1623]: time="2026-01-20T15:01:54.681175589Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 15:01:54.682686 containerd[1623]: time="2026-01-20T15:01:54.682520679Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 15:01:54.683449 containerd[1623]: time="2026-01-20T15:01:54.683341287Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 15:01:54.683449 containerd[1623]: time="2026-01-20T15:01:54.683436102Z" level=info msg="metadata content store policy set" policy=shared Jan 20 15:01:54.695156 containerd[1623]: time="2026-01-20T15:01:54.695048705Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 15:01:54.695156 containerd[1623]: time="2026-01-20T15:01:54.695125922Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 15:01:54.695242 containerd[1623]: time="2026-01-20T15:01:54.695207239Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 15:01:54.695242 containerd[1623]: time="2026-01-20T15:01:54.695219449Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 15:01:54.695242 containerd[1623]: time="2026-01-20T15:01:54.695231515Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 15:01:54.695307 containerd[1623]: time="2026-01-20T15:01:54.695275487Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 15:01:54.695307 containerd[1623]: time="2026-01-20T15:01:54.695288044Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 15:01:54.695307 containerd[1623]: time="2026-01-20T15:01:54.695297187Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 15:01:54.695353 containerd[1623]: time="2026-01-20T15:01:54.695308058Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 15:01:54.695353 containerd[1623]: time="2026-01-20T15:01:54.695331210Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 15:01:54.695353 containerd[1623]: time="2026-01-20T15:01:54.695341466Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 15:01:54.695353 containerd[1623]: time="2026-01-20T15:01:54.695350690Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 15:01:54.695417 containerd[1623]: time="2026-01-20T15:01:54.695359076Z" level=info msg="loading plugin" 
id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 15:01:54.695417 containerd[1623]: time="2026-01-20T15:01:54.695370047Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 15:01:54.695593 containerd[1623]: time="2026-01-20T15:01:54.695536957Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 15:01:54.695593 containerd[1623]: time="2026-01-20T15:01:54.695557838Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 15:01:54.695593 containerd[1623]: time="2026-01-20T15:01:54.695570580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 15:01:54.695593 containerd[1623]: time="2026-01-20T15:01:54.695586686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 15:01:54.695719 containerd[1623]: time="2026-01-20T15:01:54.695600686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 15:01:54.695719 containerd[1623]: time="2026-01-20T15:01:54.695609428Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 15:01:54.695757 containerd[1623]: time="2026-01-20T15:01:54.695718714Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 15:01:54.695996 containerd[1623]: time="2026-01-20T15:01:54.695854832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 15:01:54.695996 containerd[1623]: time="2026-01-20T15:01:54.695908068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 15:01:54.695996 containerd[1623]: time="2026-01-20T15:01:54.695922702Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 15:01:54.695996 containerd[1623]: time="2026-01-20T15:01:54.695932274Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 15:01:54.696075 containerd[1623]: time="2026-01-20T15:01:54.696055395Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 15:01:54.696512 containerd[1623]: time="2026-01-20T15:01:54.696409665Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 15:01:54.696512 containerd[1623]: time="2026-01-20T15:01:54.696510954Z" level=info msg="Start snapshots syncer" Jan 20 15:01:54.697394 containerd[1623]: time="2026-01-20T15:01:54.696948412Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 15:01:54.697394 containerd[1623]: time="2026-01-20T15:01:54.697274815Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 15:01:54.697713 containerd[1623]: time="2026-01-20T15:01:54.697333748Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 15:01:54.698974 containerd[1623]: time="2026-01-20T15:01:54.698953744Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 15:01:54.699154 containerd[1623]: time="2026-01-20T15:01:54.699136697Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 15:01:54.699218 containerd[1623]: time="2026-01-20T15:01:54.699205478Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 15:01:54.699264 containerd[1623]: time="2026-01-20T15:01:54.699253294Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 15:01:54.699304 containerd[1623]: time="2026-01-20T15:01:54.699294648Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 15:01:54.699354 containerd[1623]: time="2026-01-20T15:01:54.699337485Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 15:01:54.699414 containerd[1623]: time="2026-01-20T15:01:54.699399261Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 15:01:54.699499 containerd[1623]: time="2026-01-20T15:01:54.699478553Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 15:01:54.699799 containerd[1623]: time="2026-01-20T15:01:54.699775067Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 
15:01:54.699927 containerd[1623]: time="2026-01-20T15:01:54.699911585Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 15:01:54.700028 containerd[1623]: time="2026-01-20T15:01:54.700013886Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 15:01:54.700081 containerd[1623]: time="2026-01-20T15:01:54.700068943Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 15:01:54.700120 containerd[1623]: time="2026-01-20T15:01:54.700109938Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 15:01:54.700161 containerd[1623]: time="2026-01-20T15:01:54.700150372Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 15:01:54.700215 containerd[1623]: time="2026-01-20T15:01:54.700202576Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 15:01:54.700790 containerd[1623]: time="2026-01-20T15:01:54.700251089Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 15:01:54.700890 containerd[1623]: time="2026-01-20T15:01:54.700875437Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 15:01:54.700945 containerd[1623]: time="2026-01-20T15:01:54.700934533Z" level=info msg="runtime interface created" Jan 20 15:01:54.700981 containerd[1623]: time="2026-01-20T15:01:54.700972452Z" level=info msg="created NRI interface" Jan 20 15:01:54.701017 containerd[1623]: time="2026-01-20T15:01:54.701007537Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 15:01:54.701100 containerd[1623]: time="2026-01-20T15:01:54.701086911Z" level=info msg="Connect containerd service" Jan 20 15:01:54.701248 containerd[1623]: time="2026-01-20T15:01:54.701148248Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 15:01:54.703081 containerd[1623]: time="2026-01-20T15:01:54.702984513Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 15:01:54.759971 tar[1613]: linux-amd64/README.md Jan 20 15:01:54.807391 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 15:01:54.951982 containerd[1623]: time="2026-01-20T15:01:54.951307016Z" level=info msg="Start subscribing containerd event" Jan 20 15:01:54.951982 containerd[1623]: time="2026-01-20T15:01:54.951379058Z" level=info msg="Start recovering state" Jan 20 15:01:54.951982 containerd[1623]: time="2026-01-20T15:01:54.951740784Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 15:01:54.951982 containerd[1623]: time="2026-01-20T15:01:54.951803929Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 20 15:01:54.963220 containerd[1623]: time="2026-01-20T15:01:54.963080678Z" level=info msg="Start event monitor" Jan 20 15:01:54.963220 containerd[1623]: time="2026-01-20T15:01:54.963207911Z" level=info msg="Start cni network conf syncer for default" Jan 20 15:01:54.963326 containerd[1623]: time="2026-01-20T15:01:54.963231614Z" level=info msg="Start streaming server" Jan 20 15:01:54.963326 containerd[1623]: time="2026-01-20T15:01:54.963247607Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 15:01:54.964672 containerd[1623]: time="2026-01-20T15:01:54.964209358Z" level=info msg="runtime interface starting up..." Jan 20 15:01:54.964672 containerd[1623]: time="2026-01-20T15:01:54.964235680Z" level=info msg="starting plugins..." Jan 20 15:01:54.964672 containerd[1623]: time="2026-01-20T15:01:54.964258535Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 15:01:54.966035 containerd[1623]: time="2026-01-20T15:01:54.964830364Z" level=info msg="containerd successfully booted in 0.302551s" Jan 20 15:01:54.965129 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 15:01:55.346493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 15:01:55.351059 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 15:01:55.352867 (kubelet)[1724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 15:01:55.356375 systemd[1]: Startup finished in 3.714s (kernel) + 7.874s (initrd) + 8.567s (userspace) = 20.156s. Jan 20 15:01:55.863381 kubelet[1724]: E0120 15:01:55.863186 1724 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 15:01:55.867303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 15:01:55.867597 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 15:01:55.868390 systemd[1]: kubelet.service: Consumed 1.100s CPU time, 265.2M memory peak. Jan 20 15:02:01.571544 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 15:02:01.573102 systemd[1]: Started sshd@0-10.0.0.133:22-10.0.0.1:43634.service - OpenSSH per-connection server daemon (10.0.0.1:43634). Jan 20 15:02:01.693896 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 43634 ssh2: RSA SHA256:aQdO5BmgbKwi9SbZPK7cng78+d5Hi+OsrVsH0/FZrlQ Jan 20 15:02:01.697332 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:02:01.709204 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 15:02:01.744281 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 15:02:01.752865 systemd-logind[1598]: New session 1 of user core. Jan 20 15:02:01.777918 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 15:02:01.782872 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 15:02:01.805563 (systemd)[1744]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:02:01.810184 systemd-logind[1598]: New session 2 of user core. 
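The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet, and the same failure repeats on every scheduled restart further down. On a node that is meant to be bootstrapped with kubeadm (the unit's KUBELET_KUBEADM_ARGS reference suggests that is the intent), this file is only written by `kubeadm init` or `kubeadm join`, so the fail/restart loop is the expected pre-bootstrap state rather than a broken install. A minimal check, with the path taken from the log and everything else an assumption:

    # the unit keeps failing with status=1/FAILURE until this file exists;
    # kubeadm init / kubeadm join writes it during node bootstrap
    test -f /var/lib/kubelet/config.yaml && echo bootstrapped || echo "waiting for kubeadm init/join"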
Jan 20 15:02:01.994068 systemd[1744]: Queued start job for default target default.target. Jan 20 15:02:02.014284 systemd[1744]: Created slice app.slice - User Application Slice. Jan 20 15:02:02.014345 systemd[1744]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 20 15:02:02.014359 systemd[1744]: Reached target paths.target - Paths. Jan 20 15:02:02.014436 systemd[1744]: Reached target timers.target - Timers. Jan 20 15:02:02.016207 systemd[1744]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 15:02:02.017521 systemd[1744]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 20 15:02:02.035855 systemd[1744]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 15:02:02.035989 systemd[1744]: Reached target sockets.target - Sockets. Jan 20 15:02:02.038533 systemd[1744]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 20 15:02:02.038791 systemd[1744]: Reached target basic.target - Basic System. Jan 20 15:02:02.038912 systemd[1744]: Reached target default.target - Main User Target. Jan 20 15:02:02.038987 systemd[1744]: Startup finished in 220ms. Jan 20 15:02:02.039255 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 15:02:02.053845 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 15:02:02.075358 systemd[1]: Started sshd@1-10.0.0.133:22-10.0.0.1:43638.service - OpenSSH per-connection server daemon (10.0.0.1:43638). Jan 20 15:02:02.158303 sshd[1758]: Accepted publickey for core from 10.0.0.1 port 43638 ssh2: RSA SHA256:aQdO5BmgbKwi9SbZPK7cng78+d5Hi+OsrVsH0/FZrlQ Jan 20 15:02:02.161187 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:02:02.169453 systemd-logind[1598]: New session 3 of user core. Jan 20 15:02:02.178860 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 15:02:02.201735 sshd[1762]: Connection closed by 10.0.0.1 port 43638 Jan 20 15:02:02.202153 sshd-session[1758]: pam_unix(sshd:session): session closed for user core Jan 20 15:02:02.225171 systemd[1]: sshd@1-10.0.0.133:22-10.0.0.1:43638.service: Deactivated successfully. Jan 20 15:02:02.228060 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 15:02:02.229467 systemd-logind[1598]: Session 3 logged out. Waiting for processes to exit. Jan 20 15:02:02.233733 systemd[1]: Started sshd@2-10.0.0.133:22-10.0.0.1:43648.service - OpenSSH per-connection server daemon (10.0.0.1:43648). Jan 20 15:02:02.234752 systemd-logind[1598]: Removed session 3. Jan 20 15:02:02.601425 sshd[1768]: Accepted publickey for core from 10.0.0.1 port 43648 ssh2: RSA SHA256:aQdO5BmgbKwi9SbZPK7cng78+d5Hi+OsrVsH0/FZrlQ Jan 20 15:02:02.603370 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:02:02.611191 systemd-logind[1598]: New session 4 of user core. Jan 20 15:02:02.620924 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 15:02:02.634207 sshd[1773]: Connection closed by 10.0.0.1 port 43648 Jan 20 15:02:02.634715 sshd-session[1768]: pam_unix(sshd:session): session closed for user core Jan 20 15:02:02.644447 systemd[1]: sshd@2-10.0.0.133:22-10.0.0.1:43648.service: Deactivated successfully. Jan 20 15:02:02.647027 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 15:02:02.648337 systemd-logind[1598]: Session 4 logged out. Waiting for processes to exit. 
Jan 20 15:02:02.651986 systemd[1]: Started sshd@3-10.0.0.133:22-10.0.0.1:58822.service - OpenSSH per-connection server daemon (10.0.0.1:58822). Jan 20 15:02:02.653085 systemd-logind[1598]: Removed session 4. Jan 20 15:02:02.719660 sshd[1779]: Accepted publickey for core from 10.0.0.1 port 58822 ssh2: RSA SHA256:aQdO5BmgbKwi9SbZPK7cng78+d5Hi+OsrVsH0/FZrlQ Jan 20 15:02:02.722139 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:02:02.728870 systemd-logind[1598]: New session 5 of user core. Jan 20 15:02:02.737861 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 15:02:02.758675 sshd[1784]: Connection closed by 10.0.0.1 port 58822 Jan 20 15:02:02.759182 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Jan 20 15:02:02.768025 systemd[1]: sshd@3-10.0.0.133:22-10.0.0.1:58822.service: Deactivated successfully. Jan 20 15:02:02.770297 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 15:02:02.771373 systemd-logind[1598]: Session 5 logged out. Waiting for processes to exit. Jan 20 15:02:02.774558 systemd[1]: Started sshd@4-10.0.0.133:22-10.0.0.1:58828.service - OpenSSH per-connection server daemon (10.0.0.1:58828). Jan 20 15:02:02.775725 systemd-logind[1598]: Removed session 5. Jan 20 15:02:02.848120 sshd[1790]: Accepted publickey for core from 10.0.0.1 port 58828 ssh2: RSA SHA256:aQdO5BmgbKwi9SbZPK7cng78+d5Hi+OsrVsH0/FZrlQ Jan 20 15:02:02.850103 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:02:02.856079 systemd-logind[1598]: New session 6 of user core. Jan 20 15:02:02.865880 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 15:02:02.892591 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 15:02:02.893223 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 15:02:05.566601 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 15:02:05.607261 (dockerd)[1819]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 15:02:06.181104 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 15:02:06.184172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 15:02:06.706580 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 15:02:06.728488 (kubelet)[1833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 15:02:07.113323 kubelet[1833]: E0120 15:02:07.113075 1833 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 15:02:07.119898 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 15:02:07.120112 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 15:02:07.120851 systemd[1]: kubelet.service: Consumed 706ms CPU time, 111.1M memory peak. 
Jan 20 15:02:07.459471 dockerd[1819]: time="2026-01-20T15:02:07.458905046Z" level=info msg="Starting up" Jan 20 15:02:07.468769 dockerd[1819]: time="2026-01-20T15:02:07.468424408Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 15:02:07.529277 dockerd[1819]: time="2026-01-20T15:02:07.529123095Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 15:02:07.821401 dockerd[1819]: time="2026-01-20T15:02:07.820522847Z" level=info msg="Loading containers: start." Jan 20 15:02:07.846742 kernel: Initializing XFRM netlink socket Jan 20 15:02:08.444724 systemd-networkd[1315]: docker0: Link UP Jan 20 15:02:08.456593 dockerd[1819]: time="2026-01-20T15:02:08.456474170Z" level=info msg="Loading containers: done." Jan 20 15:02:08.490800 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3536235715-merged.mount: Deactivated successfully. Jan 20 15:02:08.494922 dockerd[1819]: time="2026-01-20T15:02:08.494837964Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 15:02:08.495291 dockerd[1819]: time="2026-01-20T15:02:08.495058717Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 15:02:08.495321 dockerd[1819]: time="2026-01-20T15:02:08.495292093Z" level=info msg="Initializing buildkit" Jan 20 15:02:08.562931 dockerd[1819]: time="2026-01-20T15:02:08.562750130Z" level=info msg="Completed buildkit initialization" Jan 20 15:02:08.580360 dockerd[1819]: time="2026-01-20T15:02:08.580199487Z" level=info msg="Daemon has completed initialization" Jan 20 15:02:08.580873 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 15:02:08.581098 dockerd[1819]: time="2026-01-20T15:02:08.580698846Z" level=info msg="API listen on /run/docker.sock" Jan 20 15:02:10.229563 containerd[1623]: time="2026-01-20T15:02:10.229263920Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 20 15:02:11.109280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2448534706.mount: Deactivated successfully. 
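The only non-info line in the Docker startup above is the overlay2 warning: the daemon falls back from the native diff driver because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR, which by the warning's own wording only degrades image-build performance and does not affect running containers. A quick way to confirm what the kernel was built with, assuming it exposes its build config at /proc/config.gz (an assumption; not every kernel does):

    # look for the overlayfs option the storage driver checks before enabling native diff
    zcat /proc/config.gz | grep 'CONFIG_OVERLAY_FS_REDIRECT_DIR='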
Jan 20 15:02:15.224443 containerd[1623]: time="2026-01-20T15:02:15.224190386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:15.225370 containerd[1623]: time="2026-01-20T15:02:15.225336309Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=28378812" Jan 20 15:02:15.227328 containerd[1623]: time="2026-01-20T15:02:15.227279502Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:15.234202 containerd[1623]: time="2026-01-20T15:02:15.234043647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:15.236284 containerd[1623]: time="2026-01-20T15:02:15.236183927Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 5.006731499s" Jan 20 15:02:15.236284 containerd[1623]: time="2026-01-20T15:02:15.236245769Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 20 15:02:15.239869 containerd[1623]: time="2026-01-20T15:02:15.239802047Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 20 15:02:17.377094 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 15:02:17.383025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 15:02:18.083952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 15:02:18.104199 (kubelet)[2125]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 15:02:18.400124 kubelet[2125]: E0120 15:02:18.399769 2125 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 15:02:18.405563 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 15:02:18.405999 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 15:02:18.406909 systemd[1]: kubelet.service: Consumed 895ms CPU time, 110.9M memory peak. 
Jan 20 15:02:18.848712 containerd[1623]: time="2026-01-20T15:02:18.848125489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:18.850139 containerd[1623]: time="2026-01-20T15:02:18.850060702Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 20 15:02:18.851735 containerd[1623]: time="2026-01-20T15:02:18.851659215Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:18.855766 containerd[1623]: time="2026-01-20T15:02:18.855586394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:18.857231 containerd[1623]: time="2026-01-20T15:02:18.857076815Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 3.617232551s" Jan 20 15:02:18.857231 containerd[1623]: time="2026-01-20T15:02:18.857180866Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 20 15:02:18.859696 containerd[1623]: time="2026-01-20T15:02:18.859576424Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 20 15:02:21.713081 containerd[1623]: time="2026-01-20T15:02:21.712760844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:21.714099 containerd[1623]: time="2026-01-20T15:02:21.713989129Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 20 15:02:21.715842 containerd[1623]: time="2026-01-20T15:02:21.715729886Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:21.719775 containerd[1623]: time="2026-01-20T15:02:21.719508836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:21.720896 containerd[1623]: time="2026-01-20T15:02:21.720786398Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 2.861181077s" Jan 20 15:02:21.720896 containerd[1623]: time="2026-01-20T15:02:21.720842053Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 20 15:02:21.723412 
containerd[1623]: time="2026-01-20T15:02:21.723297790Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 20 15:02:24.058963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount626785360.mount: Deactivated successfully. Jan 20 15:02:25.614522 containerd[1623]: time="2026-01-20T15:02:25.614086317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:25.615943 containerd[1623]: time="2026-01-20T15:02:25.615911845Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=19572392" Jan 20 15:02:25.617783 containerd[1623]: time="2026-01-20T15:02:25.617568154Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:25.620393 containerd[1623]: time="2026-01-20T15:02:25.620277044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:25.621002 containerd[1623]: time="2026-01-20T15:02:25.620917142Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 3.897559918s" Jan 20 15:02:25.621002 containerd[1623]: time="2026-01-20T15:02:25.620971197Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 20 15:02:25.624453 containerd[1623]: time="2026-01-20T15:02:25.624052459Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 20 15:02:26.179007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3613549731.mount: Deactivated successfully. 
Jan 20 15:02:27.041833 containerd[1623]: time="2026-01-20T15:02:27.041560822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:27.043396 containerd[1623]: time="2026-01-20T15:02:27.043280265Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=122783" Jan 20 15:02:27.045044 containerd[1623]: time="2026-01-20T15:02:27.044940238Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:27.049260 containerd[1623]: time="2026-01-20T15:02:27.049141636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:27.050761 containerd[1623]: time="2026-01-20T15:02:27.050673821Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.426504784s" Jan 20 15:02:27.050761 containerd[1623]: time="2026-01-20T15:02:27.050727854Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 20 15:02:27.053090 containerd[1623]: time="2026-01-20T15:02:27.052959283Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 15:02:27.450457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount993679125.mount: Deactivated successfully. 
Jan 20 15:02:27.461071 containerd[1623]: time="2026-01-20T15:02:27.460933199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 15:02:27.462432 containerd[1623]: time="2026-01-20T15:02:27.462342625Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=316581" Jan 20 15:02:27.464889 containerd[1623]: time="2026-01-20T15:02:27.464791538Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 15:02:27.468791 containerd[1623]: time="2026-01-20T15:02:27.468678620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 15:02:27.469396 containerd[1623]: time="2026-01-20T15:02:27.469277493Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 416.249032ms" Jan 20 15:02:27.469396 containerd[1623]: time="2026-01-20T15:02:27.469329541Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 15:02:27.470841 containerd[1623]: time="2026-01-20T15:02:27.470785156Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 20 15:02:28.007936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3717453310.mount: Deactivated successfully. Jan 20 15:02:28.656384 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 15:02:28.659016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 15:02:28.865426 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 15:02:28.893088 (kubelet)[2258]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 15:02:28.953701 kubelet[2258]: E0120 15:02:28.952396 2258 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 15:02:28.956846 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 15:02:28.957497 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 15:02:28.958852 systemd[1]: kubelet.service: Consumed 235ms CPU time, 110.8M memory peak. 
Jan 20 15:02:30.086084 containerd[1623]: time="2026-01-20T15:02:30.085908365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:30.087399 containerd[1623]: time="2026-01-20T15:02:30.087328600Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=45502580" Jan 20 15:02:30.089006 containerd[1623]: time="2026-01-20T15:02:30.088904998Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:30.092106 containerd[1623]: time="2026-01-20T15:02:30.091958128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:30.092888 containerd[1623]: time="2026-01-20T15:02:30.092819940Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.622004854s" Jan 20 15:02:30.092888 containerd[1623]: time="2026-01-20T15:02:30.092867795Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 20 15:02:33.160585 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 15:02:33.161087 systemd[1]: kubelet.service: Consumed 235ms CPU time, 110.8M memory peak. Jan 20 15:02:33.165085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 15:02:33.209287 systemd[1]: Reload requested from client PID 2303 ('systemctl') (unit session-6.scope)... Jan 20 15:02:33.209342 systemd[1]: Reloading... Jan 20 15:02:33.315721 zram_generator::config[2348]: No configuration found. Jan 20 15:02:33.568939 systemd[1]: Reloading finished in 359 ms. Jan 20 15:02:33.680287 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 15:02:33.680416 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 15:02:33.680894 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 15:02:33.680963 systemd[1]: kubelet.service: Consumed 184ms CPU time, 98.5M memory peak. Jan 20 15:02:33.683032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 15:02:33.889844 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 15:02:33.905999 (kubelet)[2396]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 15:02:33.973204 kubelet[2396]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 15:02:33.973204 kubelet[2396]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 15:02:33.973204 kubelet[2396]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 15:02:33.973836 kubelet[2396]: I0120 15:02:33.973233 2396 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 15:02:34.339260 kubelet[2396]: I0120 15:02:34.339196 2396 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 15:02:34.339260 kubelet[2396]: I0120 15:02:34.339242 2396 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 15:02:34.339577 kubelet[2396]: I0120 15:02:34.339522 2396 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 15:02:34.369846 kubelet[2396]: E0120 15:02:34.369763 2396 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jan 20 15:02:34.371851 kubelet[2396]: I0120 15:02:34.371801 2396 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 15:02:34.383007 kubelet[2396]: I0120 15:02:34.382974 2396 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 15:02:34.392111 kubelet[2396]: I0120 15:02:34.391992 2396 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 15:02:34.394174 kubelet[2396]: I0120 15:02:34.394080 2396 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 15:02:34.394416 kubelet[2396]: I0120 15:02:34.394182 2396 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 15:02:34.394416 kubelet[2396]: I0120 15:02:34.394410 
2396 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 15:02:34.394683 kubelet[2396]: I0120 15:02:34.394421 2396 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 15:02:34.394683 kubelet[2396]: I0120 15:02:34.394573 2396 state_mem.go:36] "Initialized new in-memory state store" Jan 20 15:02:34.399806 kubelet[2396]: I0120 15:02:34.399726 2396 kubelet.go:446] "Attempting to sync node with API server" Jan 20 15:02:34.399890 kubelet[2396]: I0120 15:02:34.399801 2396 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 15:02:34.399890 kubelet[2396]: I0120 15:02:34.399883 2396 kubelet.go:352] "Adding apiserver pod source" Jan 20 15:02:34.399943 kubelet[2396]: I0120 15:02:34.399896 2396 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 15:02:34.406308 kubelet[2396]: I0120 15:02:34.406216 2396 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 15:02:34.406824 kubelet[2396]: W0120 15:02:34.406785 2396 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 20 15:02:34.406915 kubelet[2396]: E0120 15:02:34.406897 2396 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jan 20 15:02:34.406957 kubelet[2396]: I0120 15:02:34.406937 2396 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 15:02:34.407142 kubelet[2396]: W0120 15:02:34.407000 2396 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 20 15:02:34.407142 kubelet[2396]: E0120 15:02:34.407083 2396 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jan 20 15:02:34.408515 kubelet[2396]: W0120 15:02:34.408458 2396 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 20 15:02:34.411488 kubelet[2396]: I0120 15:02:34.411420 2396 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 15:02:34.411562 kubelet[2396]: I0120 15:02:34.411509 2396 server.go:1287] "Started kubelet" Jan 20 15:02:34.411779 kubelet[2396]: I0120 15:02:34.411691 2396 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 15:02:34.413744 kubelet[2396]: I0120 15:02:34.413043 2396 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 15:02:34.413744 kubelet[2396]: I0120 15:02:34.413358 2396 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 15:02:34.413917 kubelet[2396]: I0120 15:02:34.413866 2396 server.go:479] "Adding debug handlers to kubelet server" Jan 20 15:02:34.415258 kubelet[2396]: I0120 15:02:34.415204 2396 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 15:02:34.415573 kubelet[2396]: I0120 15:02:34.415492 2396 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 15:02:34.418024 kubelet[2396]: E0120 15:02:34.415538 2396 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 15:02:34.418024 kubelet[2396]: I0120 15:02:34.415723 2396 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 15:02:34.418024 kubelet[2396]: I0120 15:02:34.415926 2396 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 15:02:34.418024 kubelet[2396]: I0120 15:02:34.415996 2396 reconciler.go:26] "Reconciler: start to sync state" Jan 20 15:02:34.418024 kubelet[2396]: W0120 15:02:34.416455 2396 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 20 15:02:34.418024 kubelet[2396]: E0120 15:02:34.416495 2396 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jan 20 15:02:34.418024 kubelet[2396]: E0120 15:02:34.416838 2396 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="200ms" Jan 20 15:02:34.418972 kubelet[2396]: I0120 15:02:34.418896 2396 factory.go:221] Registration of the systemd container factory successfully Jan 20 15:02:34.419138 kubelet[2396]: I0120 15:02:34.419067 2396 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 15:02:34.419698 kubelet[2396]: E0120 15:02:34.418753 2396 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.133:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.133:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c7899854c376c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 15:02:34.411456364 +0000 UTC m=+0.500701213,LastTimestamp:2026-01-20 15:02:34.411456364 +0000 UTC m=+0.500701213,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 15:02:34.420102 kubelet[2396]: E0120 15:02:34.420068 2396 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 15:02:34.420810 kubelet[2396]: I0120 15:02:34.420722 2396 factory.go:221] Registration of the containerd container factory successfully Jan 20 15:02:34.445693 kubelet[2396]: I0120 15:02:34.444453 2396 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 15:02:34.448170 kubelet[2396]: I0120 15:02:34.448126 2396 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 15:02:34.448170 kubelet[2396]: I0120 15:02:34.448159 2396 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 15:02:34.448251 kubelet[2396]: I0120 15:02:34.448175 2396 state_mem.go:36] "Initialized new in-memory state store" Jan 20 15:02:34.449055 kubelet[2396]: I0120 15:02:34.449007 2396 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 15:02:34.449055 kubelet[2396]: I0120 15:02:34.449054 2396 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 15:02:34.449225 kubelet[2396]: I0120 15:02:34.449071 2396 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
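Every list/watch the kubelet attempts above fails with "connection refused" against https://10.0.0.133:6443, and node "localhost" not found keeps repeating. That endpoint is the kube-apiserver the kubelet itself is about to start from its static pod path (the RunPodSandbox calls for kube-apiserver-localhost appear a little further down), so this is the normal chicken-and-egg phase of control-plane bootstrap rather than a network problem. A minimal sketch of watching the endpoint come up from the node, assuming curl is present and the usual unauthenticated health endpoint is served (an assumption about this cluster's configuration):

    # loops while the connection is refused; exits once something answers on 6443
    until curl -ks https://10.0.0.133:6443/healthz >/dev/null; do sleep 2; done
    curl -ks https://10.0.0.133:6443/healthz; echo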
Jan 20 15:02:34.449225 kubelet[2396]: I0120 15:02:34.449077 2396 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 15:02:34.450185 kubelet[2396]: E0120 15:02:34.449937 2396 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 15:02:34.450423 kubelet[2396]: W0120 15:02:34.450335 2396 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 20 15:02:34.450467 kubelet[2396]: E0120 15:02:34.450439 2396 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jan 20 15:02:34.654885 kubelet[2396]: E0120 15:02:34.653173 2396 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 15:02:34.654885 kubelet[2396]: I0120 15:02:34.653659 2396 policy_none.go:49] "None policy: Start" Jan 20 15:02:34.654885 kubelet[2396]: I0120 15:02:34.653681 2396 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 15:02:34.654885 kubelet[2396]: I0120 15:02:34.653696 2396 state_mem.go:35] "Initializing new in-memory state store" Jan 20 15:02:34.654885 kubelet[2396]: E0120 15:02:34.653891 2396 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 15:02:34.654885 kubelet[2396]: E0120 15:02:34.654078 2396 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="400ms" Jan 20 15:02:34.665577 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 15:02:34.691902 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 15:02:34.697233 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 15:02:34.714826 kubelet[2396]: I0120 15:02:34.713909 2396 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 15:02:34.714826 kubelet[2396]: I0120 15:02:34.714190 2396 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 15:02:34.714826 kubelet[2396]: I0120 15:02:34.714219 2396 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 15:02:34.714826 kubelet[2396]: I0120 15:02:34.714756 2396 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 15:02:34.716439 kubelet[2396]: E0120 15:02:34.716373 2396 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 15:02:34.716439 kubelet[2396]: E0120 15:02:34.716431 2396 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 15:02:34.817295 kubelet[2396]: I0120 15:02:34.817220 2396 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 15:02:34.817877 kubelet[2396]: E0120 15:02:34.817813 2396 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Jan 20 15:02:34.864993 systemd[1]: Created slice kubepods-burstable-pod0f47928f7765068149fdba8655e6da1a.slice - libcontainer container kubepods-burstable-pod0f47928f7765068149fdba8655e6da1a.slice. Jan 20 15:02:34.886813 kubelet[2396]: E0120 15:02:34.886738 2396 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 15:02:34.890571 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. Jan 20 15:02:34.900936 kubelet[2396]: E0120 15:02:34.900818 2396 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 15:02:34.904237 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Jan 20 15:02:34.907025 kubelet[2396]: E0120 15:02:34.906870 2396 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 15:02:34.955147 kubelet[2396]: I0120 15:02:34.955053 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f47928f7765068149fdba8655e6da1a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0f47928f7765068149fdba8655e6da1a\") " pod="kube-system/kube-apiserver-localhost" Jan 20 15:02:34.955147 kubelet[2396]: I0120 15:02:34.955116 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 15:02:34.955379 kubelet[2396]: I0120 15:02:34.955170 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 15:02:34.955379 kubelet[2396]: I0120 15:02:34.955191 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 15:02:34.955379 
kubelet[2396]: I0120 15:02:34.955207 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f47928f7765068149fdba8655e6da1a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0f47928f7765068149fdba8655e6da1a\") " pod="kube-system/kube-apiserver-localhost" Jan 20 15:02:34.955379 kubelet[2396]: I0120 15:02:34.955221 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f47928f7765068149fdba8655e6da1a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0f47928f7765068149fdba8655e6da1a\") " pod="kube-system/kube-apiserver-localhost" Jan 20 15:02:34.955379 kubelet[2396]: I0120 15:02:34.955234 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 20 15:02:34.955487 kubelet[2396]: I0120 15:02:34.955248 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 15:02:34.955487 kubelet[2396]: I0120 15:02:34.955289 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 15:02:35.019283 kubelet[2396]: I0120 15:02:35.019186 2396 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 15:02:35.019826 kubelet[2396]: E0120 15:02:35.019749 2396 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Jan 20 15:02:35.055337 kubelet[2396]: E0120 15:02:35.055290 2396 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="800ms" Jan 20 15:02:35.188115 kubelet[2396]: E0120 15:02:35.187929 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:35.188953 containerd[1623]: time="2026-01-20T15:02:35.188872135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0f47928f7765068149fdba8655e6da1a,Namespace:kube-system,Attempt:0,}" Jan 20 15:02:35.203726 kubelet[2396]: E0120 15:02:35.203442 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:35.206404 containerd[1623]: time="2026-01-20T15:02:35.206271055Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 20 15:02:35.215570 kubelet[2396]: E0120 15:02:35.215317 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:35.219233 containerd[1623]: time="2026-01-20T15:02:35.219088138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 20 15:02:35.434252 kubelet[2396]: W0120 15:02:35.433845 2396 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 20 15:02:35.434252 kubelet[2396]: E0120 15:02:35.434118 2396 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jan 20 15:02:35.439547 kubelet[2396]: I0120 15:02:35.439274 2396 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 15:02:35.440734 kubelet[2396]: E0120 15:02:35.440422 2396 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Jan 20 15:02:35.464956 containerd[1623]: time="2026-01-20T15:02:35.464884339Z" level=info msg="connecting to shim cae395b1999e658b40999f730c0212badf51a640e3ec0cb2e8be9245bffe509d" address="unix:///run/containerd/s/3951ca3e9772537c083b0d4c51e01f78101225d7d18cd00066faafea4f0d7081" namespace=k8s.io protocol=ttrpc version=3 Jan 20 15:02:35.467311 containerd[1623]: time="2026-01-20T15:02:35.467230018Z" level=info msg="connecting to shim 42d23c874a625a3d450efc42c0bd54bce83b21310c2867e2f2d23a72f02c79a5" address="unix:///run/containerd/s/bd0080cb0ac9c7cb4856fb513c34ae715938b520e20e281afa9fea3d6917888c" namespace=k8s.io protocol=ttrpc version=3 Jan 20 15:02:35.480697 containerd[1623]: time="2026-01-20T15:02:35.477998132Z" level=info msg="connecting to shim 948e4ce27768048f3bbb5b5ecd439a5ce36c448fef2054364f7e72cf36c8b9f0" address="unix:///run/containerd/s/4cf3ac352914f3ceca68c8a708f9803a7c6e145116274586cbb3cb6115ab6625" namespace=k8s.io protocol=ttrpc version=3 Jan 20 15:02:35.625039 kubelet[2396]: W0120 15:02:35.582253 2396 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 20 15:02:35.625039 kubelet[2396]: E0120 15:02:35.582682 2396 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jan 20 15:02:35.738280 kubelet[2396]: W0120 15:02:35.735943 2396 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 20 15:02:35.738280 kubelet[2396]: E0120 15:02:35.736259 2396 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jan 20 15:02:35.868173 kubelet[2396]: E0120 15:02:35.867940 2396 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="1.6s" Jan 20 15:02:35.886575 systemd[1]: Started cri-containerd-948e4ce27768048f3bbb5b5ecd439a5ce36c448fef2054364f7e72cf36c8b9f0.scope - libcontainer container 948e4ce27768048f3bbb5b5ecd439a5ce36c448fef2054364f7e72cf36c8b9f0. Jan 20 15:02:35.926849 systemd[1]: Started cri-containerd-42d23c874a625a3d450efc42c0bd54bce83b21310c2867e2f2d23a72f02c79a5.scope - libcontainer container 42d23c874a625a3d450efc42c0bd54bce83b21310c2867e2f2d23a72f02c79a5. Jan 20 15:02:35.953680 kubelet[2396]: W0120 15:02:35.952106 2396 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 20 15:02:35.953680 kubelet[2396]: E0120 15:02:35.952495 2396 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jan 20 15:02:35.953050 systemd[1]: Started cri-containerd-cae395b1999e658b40999f730c0212badf51a640e3ec0cb2e8be9245bffe509d.scope - libcontainer container cae395b1999e658b40999f730c0212badf51a640e3ec0cb2e8be9245bffe509d. 
Jan 20 15:02:36.083206 containerd[1623]: time="2026-01-20T15:02:36.082947705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"42d23c874a625a3d450efc42c0bd54bce83b21310c2867e2f2d23a72f02c79a5\"" Jan 20 15:02:36.089245 kubelet[2396]: E0120 15:02:36.089221 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:36.097353 containerd[1623]: time="2026-01-20T15:02:36.097316707Z" level=info msg="CreateContainer within sandbox \"42d23c874a625a3d450efc42c0bd54bce83b21310c2867e2f2d23a72f02c79a5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 15:02:36.128123 containerd[1623]: time="2026-01-20T15:02:36.128089600Z" level=info msg="Container 20be7fa288c29d13765e95898fee0f20d0d957a91ca7b58c23fa333cb863f36d: CDI devices from CRI Config.CDIDevices: []" Jan 20 15:02:36.131136 containerd[1623]: time="2026-01-20T15:02:36.130786073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0f47928f7765068149fdba8655e6da1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"cae395b1999e658b40999f730c0212badf51a640e3ec0cb2e8be9245bffe509d\"" Jan 20 15:02:36.132548 containerd[1623]: time="2026-01-20T15:02:36.132473129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"948e4ce27768048f3bbb5b5ecd439a5ce36c448fef2054364f7e72cf36c8b9f0\"" Jan 20 15:02:36.135176 kubelet[2396]: E0120 15:02:36.135143 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:36.135247 kubelet[2396]: E0120 15:02:36.135185 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:36.138421 containerd[1623]: time="2026-01-20T15:02:36.138397905Z" level=info msg="CreateContainer within sandbox \"948e4ce27768048f3bbb5b5ecd439a5ce36c448fef2054364f7e72cf36c8b9f0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 15:02:36.138982 containerd[1623]: time="2026-01-20T15:02:36.138920322Z" level=info msg="CreateContainer within sandbox \"cae395b1999e658b40999f730c0212badf51a640e3ec0cb2e8be9245bffe509d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 15:02:36.143783 containerd[1623]: time="2026-01-20T15:02:36.143586007Z" level=info msg="CreateContainer within sandbox \"42d23c874a625a3d450efc42c0bd54bce83b21310c2867e2f2d23a72f02c79a5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"20be7fa288c29d13765e95898fee0f20d0d957a91ca7b58c23fa333cb863f36d\"" Jan 20 15:02:36.144709 containerd[1623]: time="2026-01-20T15:02:36.144404312Z" level=info msg="StartContainer for \"20be7fa288c29d13765e95898fee0f20d0d957a91ca7b58c23fa333cb863f36d\"" Jan 20 15:02:36.145595 containerd[1623]: time="2026-01-20T15:02:36.145550561Z" level=info msg="connecting to shim 20be7fa288c29d13765e95898fee0f20d0d957a91ca7b58c23fa333cb863f36d" address="unix:///run/containerd/s/bd0080cb0ac9c7cb4856fb513c34ae715938b520e20e281afa9fea3d6917888c" protocol=ttrpc version=3 Jan 20 
15:02:36.153409 containerd[1623]: time="2026-01-20T15:02:36.153354892Z" level=info msg="Container c4f490874cfee84c47a3ad725d20bcd721651538a21620891f2054af2691afe4: CDI devices from CRI Config.CDIDevices: []" Jan 20 15:02:36.159590 containerd[1623]: time="2026-01-20T15:02:36.159537373Z" level=info msg="Container 5f4967c6aea3b6e306c9cc000acbc950a433370ccf3ceee093a3d5d10594142a: CDI devices from CRI Config.CDIDevices: []" Jan 20 15:02:36.165502 containerd[1623]: time="2026-01-20T15:02:36.165420450Z" level=info msg="CreateContainer within sandbox \"948e4ce27768048f3bbb5b5ecd439a5ce36c448fef2054364f7e72cf36c8b9f0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c4f490874cfee84c47a3ad725d20bcd721651538a21620891f2054af2691afe4\"" Jan 20 15:02:36.166339 containerd[1623]: time="2026-01-20T15:02:36.166260912Z" level=info msg="StartContainer for \"c4f490874cfee84c47a3ad725d20bcd721651538a21620891f2054af2691afe4\"" Jan 20 15:02:36.167876 containerd[1623]: time="2026-01-20T15:02:36.167741960Z" level=info msg="connecting to shim c4f490874cfee84c47a3ad725d20bcd721651538a21620891f2054af2691afe4" address="unix:///run/containerd/s/4cf3ac352914f3ceca68c8a708f9803a7c6e145116274586cbb3cb6115ab6625" protocol=ttrpc version=3 Jan 20 15:02:36.170417 containerd[1623]: time="2026-01-20T15:02:36.170316043Z" level=info msg="CreateContainer within sandbox \"cae395b1999e658b40999f730c0212badf51a640e3ec0cb2e8be9245bffe509d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5f4967c6aea3b6e306c9cc000acbc950a433370ccf3ceee093a3d5d10594142a\"" Jan 20 15:02:36.170989 containerd[1623]: time="2026-01-20T15:02:36.170971062Z" level=info msg="StartContainer for \"5f4967c6aea3b6e306c9cc000acbc950a433370ccf3ceee093a3d5d10594142a\"" Jan 20 15:02:36.172234 containerd[1623]: time="2026-01-20T15:02:36.172212884Z" level=info msg="connecting to shim 5f4967c6aea3b6e306c9cc000acbc950a433370ccf3ceee093a3d5d10594142a" address="unix:///run/containerd/s/3951ca3e9772537c083b0d4c51e01f78101225d7d18cd00066faafea4f0d7081" protocol=ttrpc version=3 Jan 20 15:02:36.173127 systemd[1]: Started cri-containerd-20be7fa288c29d13765e95898fee0f20d0d957a91ca7b58c23fa333cb863f36d.scope - libcontainer container 20be7fa288c29d13765e95898fee0f20d0d957a91ca7b58c23fa333cb863f36d. Jan 20 15:02:36.232340 systemd[1]: Started cri-containerd-c4f490874cfee84c47a3ad725d20bcd721651538a21620891f2054af2691afe4.scope - libcontainer container c4f490874cfee84c47a3ad725d20bcd721651538a21620891f2054af2691afe4. Jan 20 15:02:36.239181 systemd[1]: Started cri-containerd-5f4967c6aea3b6e306c9cc000acbc950a433370ccf3ceee093a3d5d10594142a.scope - libcontainer container 5f4967c6aea3b6e306c9cc000acbc950a433370ccf3ceee093a3d5d10594142a. 
Jan 20 15:02:36.248281 kubelet[2396]: I0120 15:02:36.248252 2396 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 15:02:36.249781 kubelet[2396]: E0120 15:02:36.249749 2396 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Jan 20 15:02:36.309914 containerd[1623]: time="2026-01-20T15:02:36.309796780Z" level=info msg="StartContainer for \"20be7fa288c29d13765e95898fee0f20d0d957a91ca7b58c23fa333cb863f36d\" returns successfully" Jan 20 15:02:36.358781 containerd[1623]: time="2026-01-20T15:02:36.357944314Z" level=info msg="StartContainer for \"c4f490874cfee84c47a3ad725d20bcd721651538a21620891f2054af2691afe4\" returns successfully" Jan 20 15:02:36.370570 containerd[1623]: time="2026-01-20T15:02:36.370520655Z" level=info msg="StartContainer for \"5f4967c6aea3b6e306c9cc000acbc950a433370ccf3ceee093a3d5d10594142a\" returns successfully" Jan 20 15:02:36.465659 kubelet[2396]: E0120 15:02:36.465543 2396 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 15:02:36.466060 kubelet[2396]: E0120 15:02:36.466044 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:36.473833 kubelet[2396]: E0120 15:02:36.473802 2396 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 15:02:36.474439 kubelet[2396]: E0120 15:02:36.474422 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:36.474858 kubelet[2396]: E0120 15:02:36.474183 2396 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 15:02:36.474858 kubelet[2396]: E0120 15:02:36.474820 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:37.519568 kubelet[2396]: E0120 15:02:37.519463 2396 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 15:02:37.520490 kubelet[2396]: E0120 15:02:37.520061 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:37.522349 kubelet[2396]: E0120 15:02:37.521750 2396 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 15:02:37.522349 kubelet[2396]: E0120 15:02:37.521896 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:37.524182 kubelet[2396]: E0120 15:02:37.524132 2396 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 15:02:37.536758 kubelet[2396]: E0120 15:02:37.536402 2396 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:37.888360 kubelet[2396]: I0120 15:02:37.888029 2396 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 15:02:38.765595 kubelet[2396]: E0120 15:02:38.765467 2396 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 15:02:38.766083 kubelet[2396]: E0120 15:02:38.765895 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:39.202195 update_engine[1602]: I20260120 15:02:39.202032 1602 update_attempter.cc:509] Updating boot flags... Jan 20 15:02:40.224509 kubelet[2396]: I0120 15:02:40.224121 2396 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 15:02:40.224509 kubelet[2396]: E0120 15:02:40.224156 2396 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 15:02:40.235023 kubelet[2396]: E0120 15:02:40.234721 2396 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c7899854c376c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 15:02:34.411456364 +0000 UTC m=+0.500701213,LastTimestamp:2026-01-20 15:02:34.411456364 +0000 UTC m=+0.500701213,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 15:02:40.317702 kubelet[2396]: I0120 15:02:40.317440 2396 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 15:02:40.335313 kubelet[2396]: E0120 15:02:40.334956 2396 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c789985cf7a5d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 15:02:34.420058717 +0000 UTC m=+0.509303566,LastTimestamp:2026-01-20 15:02:34.420058717 +0000 UTC m=+0.509303566,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 15:02:40.336666 kubelet[2396]: E0120 15:02:40.336124 2396 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 20 15:02:40.351451 kubelet[2396]: E0120 15:02:40.351009 2396 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 20 15:02:40.351451 kubelet[2396]: I0120 15:02:40.351051 2396 kubelet.go:3194] "Creating a mirror 
pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 15:02:40.357904 kubelet[2396]: E0120 15:02:40.357755 2396 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 20 15:02:40.357904 kubelet[2396]: I0120 15:02:40.357809 2396 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 15:02:40.366135 kubelet[2396]: E0120 15:02:40.366108 2396 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 20 15:02:40.806381 kubelet[2396]: I0120 15:02:40.805693 2396 apiserver.go:52] "Watching apiserver" Jan 20 15:02:40.917164 kubelet[2396]: I0120 15:02:40.917099 2396 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 15:02:43.399799 systemd[1]: Reload requested from client PID 2692 ('systemctl') (unit session-6.scope)... Jan 20 15:02:43.399866 systemd[1]: Reloading... Jan 20 15:02:43.519737 zram_generator::config[2741]: No configuration found. Jan 20 15:02:43.734272 kubelet[2396]: I0120 15:02:43.734147 2396 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 15:02:43.745671 kubelet[2396]: E0120 15:02:43.745392 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:43.855979 systemd[1]: Reloading finished in 455 ms. Jan 20 15:02:43.904271 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 15:02:43.919469 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 15:02:43.920095 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 15:02:43.920206 systemd[1]: kubelet.service: Consumed 2.237s CPU time, 131.5M memory peak. Jan 20 15:02:43.925035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 15:02:44.224240 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 15:02:44.246453 (kubelet)[2784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 15:02:44.367745 kubelet[2784]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 15:02:44.370797 kubelet[2784]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 15:02:44.370797 kubelet[2784]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 15:02:44.371047 kubelet[2784]: I0120 15:02:44.370913 2784 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 15:02:44.391834 kubelet[2784]: I0120 15:02:44.391712 2784 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 15:02:44.391834 kubelet[2784]: I0120 15:02:44.391755 2784 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 15:02:44.392196 kubelet[2784]: I0120 15:02:44.392176 2784 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 15:02:44.393981 kubelet[2784]: I0120 15:02:44.393945 2784 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 20 15:02:44.397670 kubelet[2784]: I0120 15:02:44.397535 2784 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 15:02:44.412217 kubelet[2784]: I0120 15:02:44.412160 2784 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 15:02:44.422531 kubelet[2784]: I0120 15:02:44.422247 2784 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 15:02:44.422769 kubelet[2784]: I0120 15:02:44.422527 2784 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 15:02:44.422825 kubelet[2784]: I0120 15:02:44.422554 2784 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 15:02:44.422825 kubelet[2784]: I0120 15:02:44.422818 2784 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 15:02:44.422825 kubelet[2784]: I0120 15:02:44.422827 2784 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 15:02:44.423033 kubelet[2784]: I0120 15:02:44.422877 2784 state_mem.go:36] "Initialized new in-memory state store" Jan 20 15:02:44.423174 kubelet[2784]: I0120 
15:02:44.423117 2784 kubelet.go:446] "Attempting to sync node with API server" Jan 20 15:02:44.423174 kubelet[2784]: I0120 15:02:44.423164 2784 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 15:02:44.423229 kubelet[2784]: I0120 15:02:44.423186 2784 kubelet.go:352] "Adding apiserver pod source" Jan 20 15:02:44.423229 kubelet[2784]: I0120 15:02:44.423197 2784 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 15:02:44.426283 kubelet[2784]: I0120 15:02:44.426217 2784 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 15:02:44.428595 kubelet[2784]: I0120 15:02:44.427680 2784 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 15:02:44.428595 kubelet[2784]: I0120 15:02:44.428378 2784 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 15:02:44.428595 kubelet[2784]: I0120 15:02:44.428414 2784 server.go:1287] "Started kubelet" Jan 20 15:02:44.432225 kubelet[2784]: I0120 15:02:44.432010 2784 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 15:02:44.438960 kubelet[2784]: I0120 15:02:44.436996 2784 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 15:02:44.442690 kubelet[2784]: I0120 15:02:44.439454 2784 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 15:02:44.442690 kubelet[2784]: E0120 15:02:44.439794 2784 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 15:02:44.442690 kubelet[2784]: I0120 15:02:44.440095 2784 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 15:02:44.442690 kubelet[2784]: I0120 15:02:44.440276 2784 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 15:02:44.442690 kubelet[2784]: I0120 15:02:44.440320 2784 reconciler.go:26] "Reconciler: start to sync state" Jan 20 15:02:44.442690 kubelet[2784]: I0120 15:02:44.440787 2784 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 15:02:44.444269 kubelet[2784]: I0120 15:02:44.444251 2784 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 15:02:44.452543 kubelet[2784]: I0120 15:02:44.452368 2784 server.go:479] "Adding debug handlers to kubelet server" Jan 20 15:02:44.454254 kubelet[2784]: I0120 15:02:44.454194 2784 factory.go:221] Registration of the containerd container factory successfully Jan 20 15:02:44.454254 kubelet[2784]: I0120 15:02:44.454242 2784 factory.go:221] Registration of the systemd container factory successfully Jan 20 15:02:44.454336 kubelet[2784]: I0120 15:02:44.454314 2784 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 15:02:44.467883 kubelet[2784]: I0120 15:02:44.467469 2784 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 15:02:44.470081 kubelet[2784]: I0120 15:02:44.470062 2784 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 20 15:02:44.470176 kubelet[2784]: I0120 15:02:44.470166 2784 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 15:02:44.470258 kubelet[2784]: I0120 15:02:44.470247 2784 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 15:02:44.470298 kubelet[2784]: I0120 15:02:44.470291 2784 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 15:02:44.470385 kubelet[2784]: E0120 15:02:44.470369 2784 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 15:02:44.548383 kubelet[2784]: I0120 15:02:44.547781 2784 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 15:02:44.548383 kubelet[2784]: I0120 15:02:44.547806 2784 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 15:02:44.548383 kubelet[2784]: I0120 15:02:44.547839 2784 state_mem.go:36] "Initialized new in-memory state store" Jan 20 15:02:44.548383 kubelet[2784]: I0120 15:02:44.548101 2784 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 15:02:44.548383 kubelet[2784]: I0120 15:02:44.548120 2784 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 15:02:44.548383 kubelet[2784]: I0120 15:02:44.548148 2784 policy_none.go:49] "None policy: Start" Jan 20 15:02:44.548383 kubelet[2784]: I0120 15:02:44.548163 2784 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 15:02:44.548383 kubelet[2784]: I0120 15:02:44.548179 2784 state_mem.go:35] "Initializing new in-memory state store" Jan 20 15:02:44.548383 kubelet[2784]: I0120 15:02:44.548325 2784 state_mem.go:75] "Updated machine memory state" Jan 20 15:02:44.555892 kubelet[2784]: I0120 15:02:44.555828 2784 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 15:02:44.556496 kubelet[2784]: I0120 15:02:44.556038 2784 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 15:02:44.556496 kubelet[2784]: I0120 15:02:44.556053 2784 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 15:02:44.556496 kubelet[2784]: I0120 15:02:44.556279 2784 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 15:02:44.559201 kubelet[2784]: E0120 15:02:44.559171 2784 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 15:02:44.571357 kubelet[2784]: I0120 15:02:44.571195 2784 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 15:02:44.572476 kubelet[2784]: I0120 15:02:44.572299 2784 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 15:02:44.573566 kubelet[2784]: I0120 15:02:44.573430 2784 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 15:02:44.599786 kubelet[2784]: E0120 15:02:44.599731 2784 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 15:02:44.702295 kubelet[2784]: I0120 15:02:44.702229 2784 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 15:02:44.716860 kubelet[2784]: I0120 15:02:44.716682 2784 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 15:02:44.716860 kubelet[2784]: I0120 15:02:44.716821 2784 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 15:02:44.742592 kubelet[2784]: I0120 15:02:44.742428 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 15:02:44.742592 kubelet[2784]: I0120 15:02:44.742495 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f47928f7765068149fdba8655e6da1a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0f47928f7765068149fdba8655e6da1a\") " pod="kube-system/kube-apiserver-localhost" Jan 20 15:02:44.742592 kubelet[2784]: I0120 15:02:44.742518 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 15:02:44.742592 kubelet[2784]: I0120 15:02:44.742534 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 15:02:44.742592 kubelet[2784]: I0120 15:02:44.742547 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 15:02:44.743023 kubelet[2784]: I0120 15:02:44.742561 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " 
pod="kube-system/kube-scheduler-localhost" Jan 20 15:02:44.743023 kubelet[2784]: I0120 15:02:44.742573 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f47928f7765068149fdba8655e6da1a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0f47928f7765068149fdba8655e6da1a\") " pod="kube-system/kube-apiserver-localhost" Jan 20 15:02:44.743023 kubelet[2784]: I0120 15:02:44.742587 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f47928f7765068149fdba8655e6da1a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0f47928f7765068149fdba8655e6da1a\") " pod="kube-system/kube-apiserver-localhost" Jan 20 15:02:44.743023 kubelet[2784]: I0120 15:02:44.742664 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 15:02:44.895584 kubelet[2784]: E0120 15:02:44.895065 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:44.900376 kubelet[2784]: E0120 15:02:44.900333 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:44.901974 kubelet[2784]: E0120 15:02:44.901914 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:45.424999 kubelet[2784]: I0120 15:02:45.424921 2784 apiserver.go:52] "Watching apiserver" Jan 20 15:02:45.440380 kubelet[2784]: I0120 15:02:45.440322 2784 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 15:02:45.493405 kubelet[2784]: I0120 15:02:45.491477 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.491457009 podStartE2EDuration="1.491457009s" podCreationTimestamp="2026-01-20 15:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:02:45.487106825 +0000 UTC m=+1.214930253" watchObservedRunningTime="2026-01-20 15:02:45.491457009 +0000 UTC m=+1.219280406" Jan 20 15:02:45.508181 kubelet[2784]: E0120 15:02:45.508083 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:45.508790 kubelet[2784]: I0120 15:02:45.508594 2784 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 15:02:45.509499 kubelet[2784]: E0120 15:02:45.509469 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:45.527919 kubelet[2784]: I0120 15:02:45.527705 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.5276882609999998 podStartE2EDuration="2.527688261s" podCreationTimestamp="2026-01-20 15:02:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:02:45.506086773 +0000 UTC m=+1.233910180" watchObservedRunningTime="2026-01-20 15:02:45.527688261 +0000 UTC m=+1.255511658" Jan 20 15:02:45.528166 kubelet[2784]: E0120 15:02:45.527957 2784 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 20 15:02:45.528166 kubelet[2784]: E0120 15:02:45.528079 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:45.540958 kubelet[2784]: I0120 15:02:45.540811 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.540799669 podStartE2EDuration="1.540799669s" podCreationTimestamp="2026-01-20 15:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:02:45.52836156 +0000 UTC m=+1.256184967" watchObservedRunningTime="2026-01-20 15:02:45.540799669 +0000 UTC m=+1.268623066" Jan 20 15:02:45.636556 sudo[1796]: pam_unix(sudo:session): session closed for user root Jan 20 15:02:45.639142 sshd[1795]: Connection closed by 10.0.0.1 port 58828 Jan 20 15:02:45.640243 sshd-session[1790]: pam_unix(sshd:session): session closed for user core Jan 20 15:02:45.646528 systemd[1]: sshd@4-10.0.0.133:22-10.0.0.1:58828.service: Deactivated successfully. Jan 20 15:02:45.649428 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 15:02:45.649897 systemd[1]: session-6.scope: Consumed 6.613s CPU time, 191.3M memory peak. Jan 20 15:02:45.651587 systemd-logind[1598]: Session 6 logged out. Waiting for processes to exit. Jan 20 15:02:45.653738 systemd-logind[1598]: Removed session 6. Jan 20 15:02:46.510404 kubelet[2784]: E0120 15:02:46.510309 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:46.510982 kubelet[2784]: E0120 15:02:46.510805 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:47.514024 kubelet[2784]: E0120 15:02:47.513942 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:47.692254 kubelet[2784]: I0120 15:02:47.692095 2784 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 15:02:47.692945 containerd[1623]: time="2026-01-20T15:02:47.692864129Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 20 15:02:47.694516 kubelet[2784]: I0120 15:02:47.693710 2784 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 15:02:48.041914 kubelet[2784]: E0120 15:02:48.041774 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:48.320926 systemd[1]: Created slice kubepods-besteffort-pod466f8272_b311_4a47_a1ca_fcee2e4cae6f.slice - libcontainer container kubepods-besteffort-pod466f8272_b311_4a47_a1ca_fcee2e4cae6f.slice. Jan 20 15:02:48.340922 systemd[1]: Created slice kubepods-burstable-pod8f17f7af_c4fd_4f14_bffc_0f3cd4d8941c.slice - libcontainer container kubepods-burstable-pod8f17f7af_c4fd_4f14_bffc_0f3cd4d8941c.slice. Jan 20 15:02:48.377319 kubelet[2784]: I0120 15:02:48.377170 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/466f8272-b311-4a47-a1ca-fcee2e4cae6f-lib-modules\") pod \"kube-proxy-l79hz\" (UID: \"466f8272-b311-4a47-a1ca-fcee2e4cae6f\") " pod="kube-system/kube-proxy-l79hz" Jan 20 15:02:48.377319 kubelet[2784]: I0120 15:02:48.377214 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/8f17f7af-c4fd-4f14-bffc-0f3cd4d8941c-cni\") pod \"kube-flannel-ds-tscxk\" (UID: \"8f17f7af-c4fd-4f14-bffc-0f3cd4d8941c\") " pod="kube-flannel/kube-flannel-ds-tscxk" Jan 20 15:02:48.377319 kubelet[2784]: I0120 15:02:48.377232 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f17f7af-c4fd-4f14-bffc-0f3cd4d8941c-xtables-lock\") pod \"kube-flannel-ds-tscxk\" (UID: \"8f17f7af-c4fd-4f14-bffc-0f3cd4d8941c\") " pod="kube-flannel/kube-flannel-ds-tscxk" Jan 20 15:02:48.377319 kubelet[2784]: I0120 15:02:48.377247 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/466f8272-b311-4a47-a1ca-fcee2e4cae6f-kube-proxy\") pod \"kube-proxy-l79hz\" (UID: \"466f8272-b311-4a47-a1ca-fcee2e4cae6f\") " pod="kube-system/kube-proxy-l79hz" Jan 20 15:02:48.377319 kubelet[2784]: I0120 15:02:48.377261 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grdjj\" (UniqueName: \"kubernetes.io/projected/466f8272-b311-4a47-a1ca-fcee2e4cae6f-kube-api-access-grdjj\") pod \"kube-proxy-l79hz\" (UID: \"466f8272-b311-4a47-a1ca-fcee2e4cae6f\") " pod="kube-system/kube-proxy-l79hz" Jan 20 15:02:48.377743 kubelet[2784]: I0120 15:02:48.377313 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k97h\" (UniqueName: \"kubernetes.io/projected/8f17f7af-c4fd-4f14-bffc-0f3cd4d8941c-kube-api-access-6k97h\") pod \"kube-flannel-ds-tscxk\" (UID: \"8f17f7af-c4fd-4f14-bffc-0f3cd4d8941c\") " pod="kube-flannel/kube-flannel-ds-tscxk" Jan 20 15:02:48.377743 kubelet[2784]: I0120 15:02:48.377366 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/466f8272-b311-4a47-a1ca-fcee2e4cae6f-xtables-lock\") pod \"kube-proxy-l79hz\" (UID: \"466f8272-b311-4a47-a1ca-fcee2e4cae6f\") " pod="kube-system/kube-proxy-l79hz" Jan 20 15:02:48.377743 kubelet[2784]: I0120 
15:02:48.377420 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8f17f7af-c4fd-4f14-bffc-0f3cd4d8941c-run\") pod \"kube-flannel-ds-tscxk\" (UID: \"8f17f7af-c4fd-4f14-bffc-0f3cd4d8941c\") " pod="kube-flannel/kube-flannel-ds-tscxk" Jan 20 15:02:48.377743 kubelet[2784]: I0120 15:02:48.377445 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/8f17f7af-c4fd-4f14-bffc-0f3cd4d8941c-flannel-cfg\") pod \"kube-flannel-ds-tscxk\" (UID: \"8f17f7af-c4fd-4f14-bffc-0f3cd4d8941c\") " pod="kube-flannel/kube-flannel-ds-tscxk" Jan 20 15:02:48.377743 kubelet[2784]: I0120 15:02:48.377479 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/8f17f7af-c4fd-4f14-bffc-0f3cd4d8941c-cni-plugin\") pod \"kube-flannel-ds-tscxk\" (UID: \"8f17f7af-c4fd-4f14-bffc-0f3cd4d8941c\") " pod="kube-flannel/kube-flannel-ds-tscxk" Jan 20 15:02:48.488735 kubelet[2784]: E0120 15:02:48.488678 2784 projected.go:288] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 20 15:02:48.488735 kubelet[2784]: E0120 15:02:48.488731 2784 projected.go:194] Error preparing data for projected volume kube-api-access-6k97h for pod kube-flannel/kube-flannel-ds-tscxk: configmap "kube-root-ca.crt" not found Jan 20 15:02:48.488917 kubelet[2784]: E0120 15:02:48.488778 2784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8f17f7af-c4fd-4f14-bffc-0f3cd4d8941c-kube-api-access-6k97h podName:8f17f7af-c4fd-4f14-bffc-0f3cd4d8941c nodeName:}" failed. No retries permitted until 2026-01-20 15:02:48.988760865 +0000 UTC m=+4.716584263 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6k97h" (UniqueName: "kubernetes.io/projected/8f17f7af-c4fd-4f14-bffc-0f3cd4d8941c-kube-api-access-6k97h") pod "kube-flannel-ds-tscxk" (UID: "8f17f7af-c4fd-4f14-bffc-0f3cd4d8941c") : configmap "kube-root-ca.crt" not found Jan 20 15:02:48.516719 kubelet[2784]: E0120 15:02:48.516529 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:48.517261 kubelet[2784]: E0120 15:02:48.516581 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:48.632584 kubelet[2784]: E0120 15:02:48.632416 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:48.634067 containerd[1623]: time="2026-01-20T15:02:48.634027376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l79hz,Uid:466f8272-b311-4a47-a1ca-fcee2e4cae6f,Namespace:kube-system,Attempt:0,}" Jan 20 15:02:48.705430 containerd[1623]: time="2026-01-20T15:02:48.705203416Z" level=info msg="connecting to shim 43954e196e48a2977f78f68efd62c48ae402450d6e2e78383be08c1b2f1cdb72" address="unix:///run/containerd/s/277a1666683bd9cb70468b1dc29d0b2322b51d96db82ec142765a4ff59bf084e" namespace=k8s.io protocol=ttrpc version=3 Jan 20 15:02:48.764897 systemd[1]: Started cri-containerd-43954e196e48a2977f78f68efd62c48ae402450d6e2e78383be08c1b2f1cdb72.scope - libcontainer container 43954e196e48a2977f78f68efd62c48ae402450d6e2e78383be08c1b2f1cdb72. Jan 20 15:02:48.813002 containerd[1623]: time="2026-01-20T15:02:48.812918010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l79hz,Uid:466f8272-b311-4a47-a1ca-fcee2e4cae6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"43954e196e48a2977f78f68efd62c48ae402450d6e2e78383be08c1b2f1cdb72\"" Jan 20 15:02:48.814061 kubelet[2784]: E0120 15:02:48.814026 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:48.818993 containerd[1623]: time="2026-01-20T15:02:48.818805399Z" level=info msg="CreateContainer within sandbox \"43954e196e48a2977f78f68efd62c48ae402450d6e2e78383be08c1b2f1cdb72\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 15:02:48.838663 containerd[1623]: time="2026-01-20T15:02:48.838556381Z" level=info msg="Container 46a76cbb4b5fc9bea41fb11b38f7583bf01df7f2c6f28c5fef32aa2466f96a2c: CDI devices from CRI Config.CDIDevices: []" Jan 20 15:02:48.850923 containerd[1623]: time="2026-01-20T15:02:48.850831005Z" level=info msg="CreateContainer within sandbox \"43954e196e48a2977f78f68efd62c48ae402450d6e2e78383be08c1b2f1cdb72\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"46a76cbb4b5fc9bea41fb11b38f7583bf01df7f2c6f28c5fef32aa2466f96a2c\"" Jan 20 15:02:48.851935 containerd[1623]: time="2026-01-20T15:02:48.851859742Z" level=info msg="StartContainer for \"46a76cbb4b5fc9bea41fb11b38f7583bf01df7f2c6f28c5fef32aa2466f96a2c\"" Jan 20 15:02:48.854201 containerd[1623]: time="2026-01-20T15:02:48.854141920Z" level=info msg="connecting to shim 46a76cbb4b5fc9bea41fb11b38f7583bf01df7f2c6f28c5fef32aa2466f96a2c" 
address="unix:///run/containerd/s/277a1666683bd9cb70468b1dc29d0b2322b51d96db82ec142765a4ff59bf084e" protocol=ttrpc version=3 Jan 20 15:02:48.891868 systemd[1]: Started cri-containerd-46a76cbb4b5fc9bea41fb11b38f7583bf01df7f2c6f28c5fef32aa2466f96a2c.scope - libcontainer container 46a76cbb4b5fc9bea41fb11b38f7583bf01df7f2c6f28c5fef32aa2466f96a2c. Jan 20 15:02:49.030818 containerd[1623]: time="2026-01-20T15:02:49.030746103Z" level=info msg="StartContainer for \"46a76cbb4b5fc9bea41fb11b38f7583bf01df7f2c6f28c5fef32aa2466f96a2c\" returns successfully" Jan 20 15:02:49.246322 kubelet[2784]: E0120 15:02:49.245534 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:49.247982 containerd[1623]: time="2026-01-20T15:02:49.247913782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-tscxk,Uid:8f17f7af-c4fd-4f14-bffc-0f3cd4d8941c,Namespace:kube-flannel,Attempt:0,}" Jan 20 15:02:49.282702 containerd[1623]: time="2026-01-20T15:02:49.282562534Z" level=info msg="connecting to shim 054a6e7b6139c770f459fe4cdf99977f97733cee34be811a509140a907b1585c" address="unix:///run/containerd/s/66da77d3a7cded9791de3554b6afd0e65252de343ff316f72b628eee11949f3d" namespace=k8s.io protocol=ttrpc version=3 Jan 20 15:02:49.329223 systemd[1]: Started cri-containerd-054a6e7b6139c770f459fe4cdf99977f97733cee34be811a509140a907b1585c.scope - libcontainer container 054a6e7b6139c770f459fe4cdf99977f97733cee34be811a509140a907b1585c. Jan 20 15:02:49.395882 containerd[1623]: time="2026-01-20T15:02:49.395804143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-tscxk,Uid:8f17f7af-c4fd-4f14-bffc-0f3cd4d8941c,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"054a6e7b6139c770f459fe4cdf99977f97733cee34be811a509140a907b1585c\"" Jan 20 15:02:49.397021 kubelet[2784]: E0120 15:02:49.396924 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:49.398849 containerd[1623]: time="2026-01-20T15:02:49.398715439Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 20 15:02:49.522234 kubelet[2784]: E0120 15:02:49.522061 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:49.537080 kubelet[2784]: I0120 15:02:49.536823 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l79hz" podStartSLOduration=1.536807866 podStartE2EDuration="1.536807866s" podCreationTimestamp="2026-01-20 15:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:02:49.536766803 +0000 UTC m=+5.264590200" watchObservedRunningTime="2026-01-20 15:02:49.536807866 +0000 UTC m=+5.264631263" Jan 20 15:02:50.793266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2811958968.mount: Deactivated successfully. 
Jan 20 15:02:50.842675 containerd[1623]: time="2026-01-20T15:02:50.842563501Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:50.843858 containerd[1623]: time="2026-01-20T15:02:50.843525019Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=0" Jan 20 15:02:50.845430 containerd[1623]: time="2026-01-20T15:02:50.845316100Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:50.848694 containerd[1623]: time="2026-01-20T15:02:50.848544825Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:50.850005 containerd[1623]: time="2026-01-20T15:02:50.849783194Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.451006551s" Jan 20 15:02:50.850005 containerd[1623]: time="2026-01-20T15:02:50.849855949Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 20 15:02:50.853715 containerd[1623]: time="2026-01-20T15:02:50.853263522Z" level=info msg="CreateContainer within sandbox \"054a6e7b6139c770f459fe4cdf99977f97733cee34be811a509140a907b1585c\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 20 15:02:50.865035 containerd[1623]: time="2026-01-20T15:02:50.864922878Z" level=info msg="Container ae6252106f9d65a74c971b9ee0d08fb1dff24ce1711dc4252a48b36932c875cc: CDI devices from CRI Config.CDIDevices: []" Jan 20 15:02:50.872808 containerd[1623]: time="2026-01-20T15:02:50.872592420Z" level=info msg="CreateContainer within sandbox \"054a6e7b6139c770f459fe4cdf99977f97733cee34be811a509140a907b1585c\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"ae6252106f9d65a74c971b9ee0d08fb1dff24ce1711dc4252a48b36932c875cc\"" Jan 20 15:02:50.873849 containerd[1623]: time="2026-01-20T15:02:50.873810675Z" level=info msg="StartContainer for \"ae6252106f9d65a74c971b9ee0d08fb1dff24ce1711dc4252a48b36932c875cc\"" Jan 20 15:02:50.875482 containerd[1623]: time="2026-01-20T15:02:50.875307338Z" level=info msg="connecting to shim ae6252106f9d65a74c971b9ee0d08fb1dff24ce1711dc4252a48b36932c875cc" address="unix:///run/containerd/s/66da77d3a7cded9791de3554b6afd0e65252de343ff316f72b628eee11949f3d" protocol=ttrpc version=3 Jan 20 15:02:50.903990 systemd[1]: Started cri-containerd-ae6252106f9d65a74c971b9ee0d08fb1dff24ce1711dc4252a48b36932c875cc.scope - libcontainer container ae6252106f9d65a74c971b9ee0d08fb1dff24ce1711dc4252a48b36932c875cc. Jan 20 15:02:50.954988 systemd[1]: cri-containerd-ae6252106f9d65a74c971b9ee0d08fb1dff24ce1711dc4252a48b36932c875cc.scope: Deactivated successfully. 
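The ae6252... container's .scope is deactivated almost as soon as it starts, and an exit event follows just below; that is expected rather than an error. In the stock kube-flannel DaemonSet, install-cni-plugin is an init container whose only job is to copy the flannel CNI binary from the image into the host's /opt/cni/bin and exit, and install-cni (seen further down) does the same for the CNI config file. The source and destination paths below are assumptions taken from the upstream manifest, not from this log; the step amounts to roughly this:

    package main

    import (
        "io"
        "log"
        "os"
    )

    // copyFile mirrors what the init container does: copy the flannel CNI
    // plugin binary shipped in the image into the host's CNI bin directory.
    // The paths are assumptions based on the upstream kube-flannel manifest.
    func copyFile(src, dst string) error {
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()

        out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
        if err != nil {
            return err
        }
        defer out.Close()

        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        if err := copyFile("/flannel", "/opt/cni/bin/flannel"); err != nil {
            log.Fatal(err)
        }
    }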
Jan 20 15:02:50.956897 containerd[1623]: time="2026-01-20T15:02:50.956844361Z" level=info msg="StartContainer for \"ae6252106f9d65a74c971b9ee0d08fb1dff24ce1711dc4252a48b36932c875cc\" returns successfully" Jan 20 15:02:50.960064 containerd[1623]: time="2026-01-20T15:02:50.959923004Z" level=info msg="received container exit event container_id:\"ae6252106f9d65a74c971b9ee0d08fb1dff24ce1711dc4252a48b36932c875cc\" id:\"ae6252106f9d65a74c971b9ee0d08fb1dff24ce1711dc4252a48b36932c875cc\" pid:3128 exited_at:{seconds:1768921370 nanos:958779614}" Jan 20 15:02:51.531370 kubelet[2784]: E0120 15:02:51.531228 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:51.533882 containerd[1623]: time="2026-01-20T15:02:51.533582899Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 20 15:02:51.690568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae6252106f9d65a74c971b9ee0d08fb1dff24ce1711dc4252a48b36932c875cc-rootfs.mount: Deactivated successfully. Jan 20 15:02:52.394364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3780531682.mount: Deactivated successfully. Jan 20 15:02:52.779781 kubelet[2784]: E0120 15:02:52.779736 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:53.494318 containerd[1623]: time="2026-01-20T15:02:53.493897793Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:53.497809 containerd[1623]: time="2026-01-20T15:02:53.495937678Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=17319135" Jan 20 15:02:53.500371 containerd[1623]: time="2026-01-20T15:02:53.500312809Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:53.504077 containerd[1623]: time="2026-01-20T15:02:53.503897519Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:02:53.506057 containerd[1623]: time="2026-01-20T15:02:53.505871970Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 1.97198202s" Jan 20 15:02:53.506057 containerd[1623]: time="2026-01-20T15:02:53.506042359Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 20 15:02:53.510526 containerd[1623]: time="2026-01-20T15:02:53.510411050Z" level=info msg="CreateContainer within sandbox \"054a6e7b6139c770f459fe4cdf99977f97733cee34be811a509140a907b1585c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 15:02:53.530964 containerd[1623]: time="2026-01-20T15:02:53.530586493Z" level=info msg="Container 22bb205429c2c90db4b65716b5da95372cf2931f00211caf55e8efdcf335b461: CDI devices from CRI 
Config.CDIDevices: []" Jan 20 15:02:53.538866 kubelet[2784]: E0120 15:02:53.538755 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:53.550580 containerd[1623]: time="2026-01-20T15:02:53.550377427Z" level=info msg="CreateContainer within sandbox \"054a6e7b6139c770f459fe4cdf99977f97733cee34be811a509140a907b1585c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"22bb205429c2c90db4b65716b5da95372cf2931f00211caf55e8efdcf335b461\"" Jan 20 15:02:53.553674 containerd[1623]: time="2026-01-20T15:02:53.552736842Z" level=info msg="StartContainer for \"22bb205429c2c90db4b65716b5da95372cf2931f00211caf55e8efdcf335b461\"" Jan 20 15:02:53.554687 containerd[1623]: time="2026-01-20T15:02:53.554557616Z" level=info msg="connecting to shim 22bb205429c2c90db4b65716b5da95372cf2931f00211caf55e8efdcf335b461" address="unix:///run/containerd/s/66da77d3a7cded9791de3554b6afd0e65252de343ff316f72b628eee11949f3d" protocol=ttrpc version=3 Jan 20 15:02:53.624715 systemd[1]: Started cri-containerd-22bb205429c2c90db4b65716b5da95372cf2931f00211caf55e8efdcf335b461.scope - libcontainer container 22bb205429c2c90db4b65716b5da95372cf2931f00211caf55e8efdcf335b461. Jan 20 15:02:53.723200 systemd[1]: cri-containerd-22bb205429c2c90db4b65716b5da95372cf2931f00211caf55e8efdcf335b461.scope: Deactivated successfully. Jan 20 15:02:53.725919 containerd[1623]: time="2026-01-20T15:02:53.725806214Z" level=info msg="received container exit event container_id:\"22bb205429c2c90db4b65716b5da95372cf2931f00211caf55e8efdcf335b461\" id:\"22bb205429c2c90db4b65716b5da95372cf2931f00211caf55e8efdcf335b461\" pid:3200 exited_at:{seconds:1768921373 nanos:723596010}" Jan 20 15:02:53.729217 containerd[1623]: time="2026-01-20T15:02:53.729160556Z" level=info msg="StartContainer for \"22bb205429c2c90db4b65716b5da95372cf2931f00211caf55e8efdcf335b461\" returns successfully" Jan 20 15:02:53.761504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22bb205429c2c90db4b65716b5da95372cf2931f00211caf55e8efdcf335b461-rootfs.mount: Deactivated successfully. Jan 20 15:02:53.804186 kubelet[2784]: I0120 15:02:53.804144 2784 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 15:02:53.845816 systemd[1]: Created slice kubepods-burstable-pod80a6c406_48a7_4e46_afd9_1430b3024d5a.slice - libcontainer container kubepods-burstable-pod80a6c406_48a7_4e46_afd9_1430b3024d5a.slice. Jan 20 15:02:53.857381 systemd[1]: Created slice kubepods-burstable-pod3f921a47_d0d0_4df9_8764_f71de2db1834.slice - libcontainer container kubepods-burstable-pod3f921a47_d0d0_4df9_8764_f71de2db1834.slice. 
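Once the kubelet marks the node ready, the two pending CoreDNS pods are admitted and a cgroup slice is created per pod. With the systemd cgroup driver the slice name is derived from the pod's QoS class and UID, with dashes in the UID escaped to underscores, which is why UID 80a6c406-48a7-4e46-afd9-1430b3024d5a shows up above as kubepods-burstable-pod80a6c406_48a7_4e46_afd9_1430b3024d5a.slice. A small sketch of that mapping (the helper is hypothetical, not the kubelet's code):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName derives the systemd slice name used for a pod:
    // kubepods-<qos>-pod<uid>.slice, with "-" in the UID escaped to "_" so it
    // survives systemd's unit-name rules.
    func podSliceName(qosClass, podUID string) string {
        escaped := strings.ReplaceAll(podUID, "-", "_")
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
    }

    func main() {
        fmt.Println(podSliceName("burstable", "80a6c406-48a7-4e46-afd9-1430b3024d5a"))
        // kubepods-burstable-pod80a6c406_48a7_4e46_afd9_1430b3024d5a.slice
    }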
Jan 20 15:02:53.926721 kubelet[2784]: I0120 15:02:53.926530 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clsjx\" (UniqueName: \"kubernetes.io/projected/80a6c406-48a7-4e46-afd9-1430b3024d5a-kube-api-access-clsjx\") pod \"coredns-668d6bf9bc-vrqb9\" (UID: \"80a6c406-48a7-4e46-afd9-1430b3024d5a\") " pod="kube-system/coredns-668d6bf9bc-vrqb9" Jan 20 15:02:53.926721 kubelet[2784]: I0120 15:02:53.926597 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl97b\" (UniqueName: \"kubernetes.io/projected/3f921a47-d0d0-4df9-8764-f71de2db1834-kube-api-access-nl97b\") pod \"coredns-668d6bf9bc-cmcg4\" (UID: \"3f921a47-d0d0-4df9-8764-f71de2db1834\") " pod="kube-system/coredns-668d6bf9bc-cmcg4" Jan 20 15:02:53.926721 kubelet[2784]: I0120 15:02:53.926677 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f921a47-d0d0-4df9-8764-f71de2db1834-config-volume\") pod \"coredns-668d6bf9bc-cmcg4\" (UID: \"3f921a47-d0d0-4df9-8764-f71de2db1834\") " pod="kube-system/coredns-668d6bf9bc-cmcg4" Jan 20 15:02:53.926721 kubelet[2784]: I0120 15:02:53.926701 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80a6c406-48a7-4e46-afd9-1430b3024d5a-config-volume\") pod \"coredns-668d6bf9bc-vrqb9\" (UID: \"80a6c406-48a7-4e46-afd9-1430b3024d5a\") " pod="kube-system/coredns-668d6bf9bc-vrqb9" Jan 20 15:02:54.151500 kubelet[2784]: E0120 15:02:54.151323 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:54.153655 containerd[1623]: time="2026-01-20T15:02:54.153501476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vrqb9,Uid:80a6c406-48a7-4e46-afd9-1430b3024d5a,Namespace:kube-system,Attempt:0,}" Jan 20 15:02:54.161547 kubelet[2784]: E0120 15:02:54.161325 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:54.162038 containerd[1623]: time="2026-01-20T15:02:54.161885516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cmcg4,Uid:3f921a47-d0d0-4df9-8764-f71de2db1834,Namespace:kube-system,Attempt:0,}" Jan 20 15:02:54.199376 containerd[1623]: time="2026-01-20T15:02:54.199309124Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cmcg4,Uid:3f921a47-d0d0-4df9-8764-f71de2db1834,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"92418365d54508536fe73f748b53f322ffc60449f4a63d1119755c1f6c5a907e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 15:02:54.199722 kubelet[2784]: E0120 15:02:54.199670 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92418365d54508536fe73f748b53f322ffc60449f4a63d1119755c1f6c5a907e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 15:02:54.199797 kubelet[2784]: E0120 15:02:54.199750 2784 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92418365d54508536fe73f748b53f322ffc60449f4a63d1119755c1f6c5a907e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-cmcg4" Jan 20 15:02:54.199797 kubelet[2784]: E0120 15:02:54.199779 2784 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92418365d54508536fe73f748b53f322ffc60449f4a63d1119755c1f6c5a907e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-cmcg4" Jan 20 15:02:54.199868 kubelet[2784]: E0120 15:02:54.199815 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cmcg4_kube-system(3f921a47-d0d0-4df9-8764-f71de2db1834)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-cmcg4_kube-system(3f921a47-d0d0-4df9-8764-f71de2db1834)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92418365d54508536fe73f748b53f322ffc60449f4a63d1119755c1f6c5a907e\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-cmcg4" podUID="3f921a47-d0d0-4df9-8764-f71de2db1834" Jan 20 15:02:54.201815 containerd[1623]: time="2026-01-20T15:02:54.201732714Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vrqb9,Uid:80a6c406-48a7-4e46-afd9-1430b3024d5a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b82d04c6c93ce6e98c0b798645cc3ad434b78c259feb9066fe1c3f8a1c03695\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 15:02:54.202032 kubelet[2784]: E0120 15:02:54.201993 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b82d04c6c93ce6e98c0b798645cc3ad434b78c259feb9066fe1c3f8a1c03695\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 15:02:54.202121 kubelet[2784]: E0120 15:02:54.202044 2784 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b82d04c6c93ce6e98c0b798645cc3ad434b78c259feb9066fe1c3f8a1c03695\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-vrqb9" Jan 20 15:02:54.202121 kubelet[2784]: E0120 15:02:54.202060 2784 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b82d04c6c93ce6e98c0b798645cc3ad434b78c259feb9066fe1c3f8a1c03695\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-vrqb9" Jan 20 15:02:54.202204 kubelet[2784]: E0120 15:02:54.202158 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-vrqb9_kube-system(80a6c406-48a7-4e46-afd9-1430b3024d5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-vrqb9_kube-system(80a6c406-48a7-4e46-afd9-1430b3024d5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b82d04c6c93ce6e98c0b798645cc3ad434b78c259feb9066fe1c3f8a1c03695\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-vrqb9" podUID="80a6c406-48a7-4e46-afd9-1430b3024d5a" Jan 20 15:02:54.543147 kubelet[2784]: E0120 15:02:54.543009 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:54.547058 containerd[1623]: time="2026-01-20T15:02:54.546973009Z" level=info msg="CreateContainer within sandbox \"054a6e7b6139c770f459fe4cdf99977f97733cee34be811a509140a907b1585c\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 20 15:02:54.564246 containerd[1623]: time="2026-01-20T15:02:54.564211403Z" level=info msg="Container b3f577c0ebf6c54c59be504edfcfc92c6369900a475d6d4912acfece08eb4af4: CDI devices from CRI Config.CDIDevices: []" Jan 20 15:02:54.580568 containerd[1623]: time="2026-01-20T15:02:54.580416356Z" level=info msg="CreateContainer within sandbox \"054a6e7b6139c770f459fe4cdf99977f97733cee34be811a509140a907b1585c\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"b3f577c0ebf6c54c59be504edfcfc92c6369900a475d6d4912acfece08eb4af4\"" Jan 20 15:02:54.582304 containerd[1623]: time="2026-01-20T15:02:54.581904227Z" level=info msg="StartContainer for \"b3f577c0ebf6c54c59be504edfcfc92c6369900a475d6d4912acfece08eb4af4\"" Jan 20 15:02:54.583237 containerd[1623]: time="2026-01-20T15:02:54.583175736Z" level=info msg="connecting to shim b3f577c0ebf6c54c59be504edfcfc92c6369900a475d6d4912acfece08eb4af4" address="unix:///run/containerd/s/66da77d3a7cded9791de3554b6afd0e65252de343ff316f72b628eee11949f3d" protocol=ttrpc version=3 Jan 20 15:02:54.622015 systemd[1]: Started cri-containerd-b3f577c0ebf6c54c59be504edfcfc92c6369900a475d6d4912acfece08eb4af4.scope - libcontainer container b3f577c0ebf6c54c59be504edfcfc92c6369900a475d6d4912acfece08eb4af4. 
Jan 20 15:02:54.672171 containerd[1623]: time="2026-01-20T15:02:54.672098161Z" level=info msg="StartContainer for \"b3f577c0ebf6c54c59be504edfcfc92c6369900a475d6d4912acfece08eb4af4\" returns successfully" Jan 20 15:02:55.551218 kubelet[2784]: E0120 15:02:55.551144 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:55.741563 systemd-networkd[1315]: flannel.1: Link UP Jan 20 15:02:55.741645 systemd-networkd[1315]: flannel.1: Gained carrier Jan 20 15:02:56.552676 kubelet[2784]: E0120 15:02:56.552568 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:02:56.795934 systemd-networkd[1315]: flannel.1: Gained IPv6LL Jan 20 15:03:06.472039 kubelet[2784]: E0120 15:03:06.471933 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:03:06.472920 containerd[1623]: time="2026-01-20T15:03:06.472400821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vrqb9,Uid:80a6c406-48a7-4e46-afd9-1430b3024d5a,Namespace:kube-system,Attempt:0,}" Jan 20 15:03:06.520273 systemd-networkd[1315]: cni0: Link UP Jan 20 15:03:06.520930 systemd-networkd[1315]: cni0: Gained carrier Jan 20 15:03:06.526812 systemd-networkd[1315]: cni0: Lost carrier Jan 20 15:03:06.537809 systemd-networkd[1315]: veth571fbb1f: Link UP Jan 20 15:03:06.548403 kernel: cni0: port 1(veth571fbb1f) entered blocking state Jan 20 15:03:06.548823 kernel: cni0: port 1(veth571fbb1f) entered disabled state Jan 20 15:03:06.549054 kernel: veth571fbb1f: entered allmulticast mode Jan 20 15:03:06.557695 kernel: veth571fbb1f: entered promiscuous mode Jan 20 15:03:06.572307 kernel: cni0: port 1(veth571fbb1f) entered blocking state Jan 20 15:03:06.572426 kernel: cni0: port 1(veth571fbb1f) entered forwarding state Jan 20 15:03:06.572548 systemd-networkd[1315]: veth571fbb1f: Gained carrier Jan 20 15:03:06.573703 systemd-networkd[1315]: cni0: Gained carrier Jan 20 15:03:06.577854 containerd[1623]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009c8e8), "name":"cbr0", "type":"bridge"} Jan 20 15:03:06.577854 containerd[1623]: delegateAdd: netconf sent to delegate plugin: Jan 20 15:03:06.645024 containerd[1623]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-20T15:03:06.644894363Z" level=info msg="connecting to shim 7c37deab523835261488194b46fd8b41bdebdf1420b0b486e538337d6c710700" address="unix:///run/containerd/s/059d22186263c76586d996b49869b4ff4535ed2d7a3dc2122142cc6beb4c8a20" namespace=k8s.io protocol=ttrpc version=3 Jan 20 15:03:06.725046 systemd[1]: Started 
cri-containerd-7c37deab523835261488194b46fd8b41bdebdf1420b0b486e538337d6c710700.scope - libcontainer container 7c37deab523835261488194b46fd8b41bdebdf1420b0b486e538337d6c710700. Jan 20 15:03:06.748370 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 15:03:06.802939 containerd[1623]: time="2026-01-20T15:03:06.802767888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vrqb9,Uid:80a6c406-48a7-4e46-afd9-1430b3024d5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c37deab523835261488194b46fd8b41bdebdf1420b0b486e538337d6c710700\"" Jan 20 15:03:06.804031 kubelet[2784]: E0120 15:03:06.803963 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:03:06.806759 containerd[1623]: time="2026-01-20T15:03:06.806706256Z" level=info msg="CreateContainer within sandbox \"7c37deab523835261488194b46fd8b41bdebdf1420b0b486e538337d6c710700\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 15:03:06.825020 containerd[1623]: time="2026-01-20T15:03:06.824917467Z" level=info msg="Container 9e4f4b945d72b3804e1c4ce5c36530a22424191a6f1d8cc36aa5b3d692988afa: CDI devices from CRI Config.CDIDevices: []" Jan 20 15:03:06.835004 containerd[1623]: time="2026-01-20T15:03:06.834908503Z" level=info msg="CreateContainer within sandbox \"7c37deab523835261488194b46fd8b41bdebdf1420b0b486e538337d6c710700\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e4f4b945d72b3804e1c4ce5c36530a22424191a6f1d8cc36aa5b3d692988afa\"" Jan 20 15:03:06.836032 containerd[1623]: time="2026-01-20T15:03:06.835975438Z" level=info msg="StartContainer for \"9e4f4b945d72b3804e1c4ce5c36530a22424191a6f1d8cc36aa5b3d692988afa\"" Jan 20 15:03:06.837724 containerd[1623]: time="2026-01-20T15:03:06.837647944Z" level=info msg="connecting to shim 9e4f4b945d72b3804e1c4ce5c36530a22424191a6f1d8cc36aa5b3d692988afa" address="unix:///run/containerd/s/059d22186263c76586d996b49869b4ff4535ed2d7a3dc2122142cc6beb4c8a20" protocol=ttrpc version=3 Jan 20 15:03:06.871958 systemd[1]: Started cri-containerd-9e4f4b945d72b3804e1c4ce5c36530a22424191a6f1d8cc36aa5b3d692988afa.scope - libcontainer container 9e4f4b945d72b3804e1c4ce5c36530a22424191a6f1d8cc36aa5b3d692988afa. 
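The map[string]interface{} dump and the JSON that follows it are the flannel CNI plugin at work: it does not plumb interfaces itself but builds a netconf for the "bridge" plugin with "host-local" IPAM, handing it the node's /24 from subnet.env plus a route covering the whole 192.168.0.0/17 flannel network; the bridge plugin then creates cni0 and the veth pair recorded in the kernel messages. A sketch that reproduces the delegate config logged above (the struct types are illustrative, not the plugin's own):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    type ipamRange struct {
        Subnet string `json:"subnet"`
    }

    type ipamConfig struct {
        Ranges [][]ipamRange       `json:"ranges"`
        Routes []map[string]string `json:"routes"`
        Type   string              `json:"type"`
    }

    type bridgeNetConf struct {
        CNIVersion       string     `json:"cniVersion"`
        HairpinMode      bool       `json:"hairpinMode"`
        IPMasq           bool       `json:"ipMasq"`
        IPAM             ipamConfig `json:"ipam"`
        IsDefaultGateway bool       `json:"isDefaultGateway"`
        IsGateway        bool       `json:"isGateway"`
        MTU              uint       `json:"mtu"`
        Name             string     `json:"name"`
        Type             string     `json:"type"`
    }

    func main() {
        conf := bridgeNetConf{
            CNIVersion:  "0.3.1",
            HairpinMode: true,
            IPMasq:      false,
            IPAM: ipamConfig{
                Ranges: [][]ipamRange{{{Subnet: "192.168.0.0/24"}}},      // node subnet from subnet.env
                Routes: []map[string]string{{"dst": "192.168.0.0/17"}},   // whole flannel network
                Type:   "host-local",
            },
            IsDefaultGateway: true,
            IsGateway:        true,
            MTU:              1450,
            Name:             "cbr0",
            Type:             "bridge",
        }
        out, err := json.Marshal(conf)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(out)) // same content as the "netconf sent to delegate plugin" line above
    }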
Jan 20 15:03:06.932100 containerd[1623]: time="2026-01-20T15:03:06.931928808Z" level=info msg="StartContainer for \"9e4f4b945d72b3804e1c4ce5c36530a22424191a6f1d8cc36aa5b3d692988afa\" returns successfully" Jan 20 15:03:07.605782 kubelet[2784]: E0120 15:03:07.605708 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:03:07.624007 kubelet[2784]: I0120 15:03:07.623818 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-tscxk" podStartSLOduration=15.513827727 podStartE2EDuration="19.623801539s" podCreationTimestamp="2026-01-20 15:02:48 +0000 UTC" firstStartedPulling="2026-01-20 15:02:49.398211786 +0000 UTC m=+5.126035183" lastFinishedPulling="2026-01-20 15:02:53.508185598 +0000 UTC m=+9.236008995" observedRunningTime="2026-01-20 15:02:55.565933709 +0000 UTC m=+11.293757105" watchObservedRunningTime="2026-01-20 15:03:07.623801539 +0000 UTC m=+23.351624935" Jan 20 15:03:07.624007 kubelet[2784]: I0120 15:03:07.623991 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vrqb9" podStartSLOduration=19.623982851 podStartE2EDuration="19.623982851s" podCreationTimestamp="2026-01-20 15:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:03:07.622307662 +0000 UTC m=+23.350131110" watchObservedRunningTime="2026-01-20 15:03:07.623982851 +0000 UTC m=+23.351806248" Jan 20 15:03:08.187907 systemd-networkd[1315]: cni0: Gained IPv6LL Jan 20 15:03:08.315897 systemd-networkd[1315]: veth571fbb1f: Gained IPv6LL Jan 20 15:03:08.471732 kubelet[2784]: E0120 15:03:08.471437 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:03:08.472280 containerd[1623]: time="2026-01-20T15:03:08.472071823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cmcg4,Uid:3f921a47-d0d0-4df9-8764-f71de2db1834,Namespace:kube-system,Attempt:0,}" Jan 20 15:03:08.507916 systemd-networkd[1315]: veth6705ecf7: Link UP Jan 20 15:03:08.514674 kernel: cni0: port 2(veth6705ecf7) entered blocking state Jan 20 15:03:08.514767 kernel: cni0: port 2(veth6705ecf7) entered disabled state Jan 20 15:03:08.514788 kernel: veth6705ecf7: entered allmulticast mode Jan 20 15:03:08.518036 kernel: veth6705ecf7: entered promiscuous mode Jan 20 15:03:08.532306 kernel: cni0: port 2(veth6705ecf7) entered blocking state Jan 20 15:03:08.532371 kernel: cni0: port 2(veth6705ecf7) entered forwarding state Jan 20 15:03:08.532588 systemd-networkd[1315]: veth6705ecf7: Gained carrier Jan 20 15:03:08.537516 containerd[1623]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000018938), "name":"cbr0", "type":"bridge"} Jan 20 15:03:08.537516 containerd[1623]: delegateAdd: netconf sent to delegate plugin: Jan 20 15:03:08.592002 containerd[1623]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-20T15:03:08.591913474Z" level=info msg="connecting to shim 12bbd8e69828886b4f0054002b351693727d07a3e09173ad74482c4d3c33f28d" address="unix:///run/containerd/s/05b9925c3aa82f5d2f2eacb94d738359fddf6815f83951099395aa957472007b" namespace=k8s.io protocol=ttrpc version=3 Jan 20 15:03:08.610391 kubelet[2784]: E0120 15:03:08.610275 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:03:08.635938 systemd[1]: Started cri-containerd-12bbd8e69828886b4f0054002b351693727d07a3e09173ad74482c4d3c33f28d.scope - libcontainer container 12bbd8e69828886b4f0054002b351693727d07a3e09173ad74482c4d3c33f28d. Jan 20 15:03:08.656524 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 15:03:08.714018 containerd[1623]: time="2026-01-20T15:03:08.713895793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cmcg4,Uid:3f921a47-d0d0-4df9-8764-f71de2db1834,Namespace:kube-system,Attempt:0,} returns sandbox id \"12bbd8e69828886b4f0054002b351693727d07a3e09173ad74482c4d3c33f28d\"" Jan 20 15:03:08.715281 kubelet[2784]: E0120 15:03:08.715186 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:03:08.717850 containerd[1623]: time="2026-01-20T15:03:08.717801492Z" level=info msg="CreateContainer within sandbox \"12bbd8e69828886b4f0054002b351693727d07a3e09173ad74482c4d3c33f28d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 15:03:08.733943 containerd[1623]: time="2026-01-20T15:03:08.733244987Z" level=info msg="Container a70c59fc6bb58a9bdaee3c1297e51b92753da76cb1cb538ae8fdefea477b8e69: CDI devices from CRI Config.CDIDevices: []" Jan 20 15:03:08.738414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount20410428.mount: Deactivated successfully. Jan 20 15:03:08.743834 containerd[1623]: time="2026-01-20T15:03:08.743755726Z" level=info msg="CreateContainer within sandbox \"12bbd8e69828886b4f0054002b351693727d07a3e09173ad74482c4d3c33f28d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a70c59fc6bb58a9bdaee3c1297e51b92753da76cb1cb538ae8fdefea477b8e69\"" Jan 20 15:03:08.744464 containerd[1623]: time="2026-01-20T15:03:08.744431661Z" level=info msg="StartContainer for \"a70c59fc6bb58a9bdaee3c1297e51b92753da76cb1cb538ae8fdefea477b8e69\"" Jan 20 15:03:08.745673 containerd[1623]: time="2026-01-20T15:03:08.745561638Z" level=info msg="connecting to shim a70c59fc6bb58a9bdaee3c1297e51b92753da76cb1cb538ae8fdefea477b8e69" address="unix:///run/containerd/s/05b9925c3aa82f5d2f2eacb94d738359fddf6815f83951099395aa957472007b" protocol=ttrpc version=3 Jan 20 15:03:08.780867 systemd[1]: Started cri-containerd-a70c59fc6bb58a9bdaee3c1297e51b92753da76cb1cb538ae8fdefea477b8e69.scope - libcontainer container a70c59fc6bb58a9bdaee3c1297e51b92753da76cb1cb538ae8fdefea477b8e69. 
Jan 20 15:03:08.836174 containerd[1623]: time="2026-01-20T15:03:08.835996891Z" level=info msg="StartContainer for \"a70c59fc6bb58a9bdaee3c1297e51b92753da76cb1cb538ae8fdefea477b8e69\" returns successfully" Jan 20 15:03:09.615844 kubelet[2784]: E0120 15:03:09.615702 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:03:09.615844 kubelet[2784]: E0120 15:03:09.615702 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:03:09.630939 kubelet[2784]: I0120 15:03:09.630746 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cmcg4" podStartSLOduration=21.630724188 podStartE2EDuration="21.630724188s" podCreationTimestamp="2026-01-20 15:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:03:09.630675679 +0000 UTC m=+25.358499076" watchObservedRunningTime="2026-01-20 15:03:09.630724188 +0000 UTC m=+25.358547585" Jan 20 15:03:10.363931 systemd-networkd[1315]: veth6705ecf7: Gained IPv6LL Jan 20 15:03:19.616452 kubelet[2784]: E0120 15:03:19.616204 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:03:19.645130 kubelet[2784]: E0120 15:03:19.644211 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:03:30.000815 systemd[1]: Started sshd@5-10.0.0.133:22-10.0.0.1:39866.service - OpenSSH per-connection server daemon (10.0.0.1:39866). Jan 20 15:03:30.096114 sshd[3780]: Accepted publickey for core from 10.0.0.1 port 39866 ssh2: RSA SHA256:aQdO5BmgbKwi9SbZPK7cng78+d5Hi+OsrVsH0/FZrlQ Jan 20 15:03:30.098425 sshd-session[3780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:03:30.105403 systemd-logind[1598]: New session 7 of user core. Jan 20 15:03:30.116970 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 15:03:30.245153 sshd[3784]: Connection closed by 10.0.0.1 port 39866 Jan 20 15:03:30.245506 sshd-session[3780]: pam_unix(sshd:session): session closed for user core Jan 20 15:03:30.251522 systemd[1]: sshd@5-10.0.0.133:22-10.0.0.1:39866.service: Deactivated successfully. Jan 20 15:03:30.254019 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 15:03:30.255473 systemd-logind[1598]: Session 7 logged out. Waiting for processes to exit. Jan 20 15:03:30.260994 systemd-logind[1598]: Removed session 7. Jan 20 15:03:35.258702 systemd[1]: Started sshd@6-10.0.0.133:22-10.0.0.1:50150.service - OpenSSH per-connection server daemon (10.0.0.1:50150). Jan 20 15:03:35.328933 sshd[3820]: Accepted publickey for core from 10.0.0.1 port 50150 ssh2: RSA SHA256:aQdO5BmgbKwi9SbZPK7cng78+d5Hi+OsrVsH0/FZrlQ Jan 20 15:03:35.331133 sshd-session[3820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:03:35.337261 systemd-logind[1598]: New session 8 of user core. Jan 20 15:03:35.350050 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 20 15:03:35.436664 sshd[3824]: Connection closed by 10.0.0.1 port 50150 Jan 20 15:03:35.437036 sshd-session[3820]: pam_unix(sshd:session): session closed for user core Jan 20 15:03:35.441488 systemd[1]: sshd@6-10.0.0.133:22-10.0.0.1:50150.service: Deactivated successfully. Jan 20 15:03:35.443840 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 15:03:35.445222 systemd-logind[1598]: Session 8 logged out. Waiting for processes to exit. Jan 20 15:03:35.447293 systemd-logind[1598]: Removed session 8. Jan 20 15:03:40.458036 systemd[1]: Started sshd@7-10.0.0.133:22-10.0.0.1:50156.service - OpenSSH per-connection server daemon (10.0.0.1:50156). Jan 20 15:03:40.533914 sshd[3860]: Accepted publickey for core from 10.0.0.1 port 50156 ssh2: RSA SHA256:aQdO5BmgbKwi9SbZPK7cng78+d5Hi+OsrVsH0/FZrlQ Jan 20 15:03:40.536103 sshd-session[3860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:03:40.542336 systemd-logind[1598]: New session 9 of user core. Jan 20 15:03:40.548873 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 15:03:40.631098 sshd[3864]: Connection closed by 10.0.0.1 port 50156 Jan 20 15:03:40.631543 sshd-session[3860]: pam_unix(sshd:session): session closed for user core Jan 20 15:03:40.645943 systemd[1]: sshd@7-10.0.0.133:22-10.0.0.1:50156.service: Deactivated successfully. Jan 20 15:03:40.648053 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 15:03:40.649358 systemd-logind[1598]: Session 9 logged out. Waiting for processes to exit. Jan 20 15:03:40.652233 systemd[1]: Started sshd@8-10.0.0.133:22-10.0.0.1:50168.service - OpenSSH per-connection server daemon (10.0.0.1:50168). Jan 20 15:03:40.653446 systemd-logind[1598]: Removed session 9. Jan 20 15:03:40.721452 sshd[3879]: Accepted publickey for core from 10.0.0.1 port 50168 ssh2: RSA SHA256:aQdO5BmgbKwi9SbZPK7cng78+d5Hi+OsrVsH0/FZrlQ Jan 20 15:03:40.724258 sshd-session[3879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:03:40.731857 systemd-logind[1598]: New session 10 of user core. Jan 20 15:03:40.741912 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 15:03:40.871537 sshd[3883]: Connection closed by 10.0.0.1 port 50168 Jan 20 15:03:40.871976 sshd-session[3879]: pam_unix(sshd:session): session closed for user core Jan 20 15:03:40.891086 systemd[1]: sshd@8-10.0.0.133:22-10.0.0.1:50168.service: Deactivated successfully. Jan 20 15:03:40.910237 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 15:03:40.912660 systemd-logind[1598]: Session 10 logged out. Waiting for processes to exit. Jan 20 15:03:40.916005 systemd[1]: Started sshd@9-10.0.0.133:22-10.0.0.1:50176.service - OpenSSH per-connection server daemon (10.0.0.1:50176). Jan 20 15:03:40.919423 systemd-logind[1598]: Removed session 10. Jan 20 15:03:40.989881 sshd[3894]: Accepted publickey for core from 10.0.0.1 port 50176 ssh2: RSA SHA256:aQdO5BmgbKwi9SbZPK7cng78+d5Hi+OsrVsH0/FZrlQ Jan 20 15:03:40.992432 sshd-session[3894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:03:40.998772 systemd-logind[1598]: New session 11 of user core. Jan 20 15:03:41.007820 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 20 15:03:41.096658 sshd[3904]: Connection closed by 10.0.0.1 port 50176 Jan 20 15:03:41.096961 sshd-session[3894]: pam_unix(sshd:session): session closed for user core Jan 20 15:03:41.102183 systemd[1]: sshd@9-10.0.0.133:22-10.0.0.1:50176.service: Deactivated successfully. 
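The sshd@... units in this stretch are per-connection instances (the log itself calls them "OpenSSH per-connection server daemon"), so each accepted connection gets its own service and its own logind session. Judging by the names in this log, the instance part encodes a connection counter, the local address:port and the peer address:port; a small parser for that naming, assuming exactly the format seen here:

    package main

    import (
        "fmt"
        "log"
        "strings"
    )

    // sshdInstance holds the pieces of a per-connection unit name such as
    // "sshd@9-10.0.0.133:22-10.0.0.1:50176.service" as it appears in this log.
    type sshdInstance struct {
        Counter string
        Local   string
        Peer    string
    }

    func parseInstance(unit string) (sshdInstance, error) {
        name := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
        parts := strings.SplitN(name, "-", 3)
        if len(parts) != 3 {
            return sshdInstance{}, fmt.Errorf("unexpected instance name %q", unit)
        }
        return sshdInstance{Counter: parts[0], Local: parts[1], Peer: parts[2]}, nil
    }

    func main() {
        inst, err := parseInstance("sshd@9-10.0.0.133:22-10.0.0.1:50176.service")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("connection #%s: %s <- %s\n", inst.Counter, inst.Local, inst.Peer)
    }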
Jan 20 15:03:41.104506 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 15:03:41.106108 systemd-logind[1598]: Session 11 logged out. Waiting for processes to exit. Jan 20 15:03:41.107873 systemd-logind[1598]: Removed session 11. Jan 20 15:03:46.117889 systemd[1]: Started sshd@10-10.0.0.133:22-10.0.0.1:35772.service - OpenSSH per-connection server daemon (10.0.0.1:35772). Jan 20 15:03:46.196079 sshd[3940]: Accepted publickey for core from 10.0.0.1 port 35772 ssh2: RSA SHA256:aQdO5BmgbKwi9SbZPK7cng78+d5Hi+OsrVsH0/FZrlQ Jan 20 15:03:46.198761 sshd-session[3940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:03:46.206341 systemd-logind[1598]: New session 12 of user core. Jan 20 15:03:46.215952 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 15:03:46.325779 sshd[3959]: Connection closed by 10.0.0.1 port 35772 Jan 20 15:03:46.326529 sshd-session[3940]: pam_unix(sshd:session): session closed for user core Jan 20 15:03:46.345834 systemd[1]: sshd@10-10.0.0.133:22-10.0.0.1:35772.service: Deactivated successfully. Jan 20 15:03:46.348671 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 15:03:46.350281 systemd-logind[1598]: Session 12 logged out. Waiting for processes to exit. Jan 20 15:03:46.353929 systemd[1]: Started sshd@11-10.0.0.133:22-10.0.0.1:35776.service - OpenSSH per-connection server daemon (10.0.0.1:35776). Jan 20 15:03:46.355498 systemd-logind[1598]: Removed session 12. Jan 20 15:03:46.431502 sshd[3972]: Accepted publickey for core from 10.0.0.1 port 35776 ssh2: RSA SHA256:aQdO5BmgbKwi9SbZPK7cng78+d5Hi+OsrVsH0/FZrlQ Jan 20 15:03:46.434424 sshd-session[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:03:46.442144 systemd-logind[1598]: New session 13 of user core. Jan 20 15:03:46.457271 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 20 15:03:46.673085 sshd[3976]: Connection closed by 10.0.0.1 port 35776 Jan 20 15:03:46.673873 sshd-session[3972]: pam_unix(sshd:session): session closed for user core Jan 20 15:03:46.684991 systemd[1]: sshd@11-10.0.0.133:22-10.0.0.1:35776.service: Deactivated successfully. Jan 20 15:03:46.687390 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 15:03:46.688779 systemd-logind[1598]: Session 13 logged out. Waiting for processes to exit. Jan 20 15:03:46.692062 systemd[1]: Started sshd@12-10.0.0.133:22-10.0.0.1:35788.service - OpenSSH per-connection server daemon (10.0.0.1:35788). Jan 20 15:03:46.692822 systemd-logind[1598]: Removed session 13. Jan 20 15:03:46.762768 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 35788 ssh2: RSA SHA256:aQdO5BmgbKwi9SbZPK7cng78+d5Hi+OsrVsH0/FZrlQ Jan 20 15:03:46.765536 sshd-session[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:03:46.775155 systemd-logind[1598]: New session 14 of user core. Jan 20 15:03:46.789004 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 15:03:47.427219 sshd[3991]: Connection closed by 10.0.0.1 port 35788 Jan 20 15:03:47.427768 sshd-session[3987]: pam_unix(sshd:session): session closed for user core Jan 20 15:03:47.442688 systemd[1]: sshd@12-10.0.0.133:22-10.0.0.1:35788.service: Deactivated successfully. Jan 20 15:03:47.445343 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 15:03:47.448519 systemd-logind[1598]: Session 14 logged out. Waiting for processes to exit. 
Jan 20 15:03:47.455467 systemd[1]: Started sshd@13-10.0.0.133:22-10.0.0.1:35804.service - OpenSSH per-connection server daemon (10.0.0.1:35804). Jan 20 15:03:47.460914 systemd-logind[1598]: Removed session 14. Jan 20 15:03:47.559988 sshd[4010]: Accepted publickey for core from 10.0.0.1 port 35804 ssh2: RSA SHA256:aQdO5BmgbKwi9SbZPK7cng78+d5Hi+OsrVsH0/FZrlQ Jan 20 15:03:47.563230 sshd-session[4010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:03:47.571053 systemd-logind[1598]: New session 15 of user core. Jan 20 15:03:47.582887 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 15:03:47.856236 sshd[4014]: Connection closed by 10.0.0.1 port 35804 Jan 20 15:03:47.858876 sshd-session[4010]: pam_unix(sshd:session): session closed for user core Jan 20 15:03:47.869457 systemd[1]: sshd@13-10.0.0.133:22-10.0.0.1:35804.service: Deactivated successfully. Jan 20 15:03:47.871948 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 15:03:47.873274 systemd-logind[1598]: Session 15 logged out. Waiting for processes to exit. Jan 20 15:03:47.877226 systemd-logind[1598]: Removed session 15. Jan 20 15:03:47.879185 systemd[1]: Started sshd@14-10.0.0.133:22-10.0.0.1:35816.service - OpenSSH per-connection server daemon (10.0.0.1:35816). Jan 20 15:03:47.954254 sshd[4026]: Accepted publickey for core from 10.0.0.1 port 35816 ssh2: RSA SHA256:aQdO5BmgbKwi9SbZPK7cng78+d5Hi+OsrVsH0/FZrlQ Jan 20 15:03:47.956915 sshd-session[4026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:03:47.965746 systemd-logind[1598]: New session 16 of user core. Jan 20 15:03:47.972009 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 15:03:48.061934 sshd[4030]: Connection closed by 10.0.0.1 port 35816 Jan 20 15:03:48.062354 sshd-session[4026]: pam_unix(sshd:session): session closed for user core Jan 20 15:03:48.067149 systemd[1]: sshd@14-10.0.0.133:22-10.0.0.1:35816.service: Deactivated successfully. Jan 20 15:03:48.069461 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 15:03:48.072661 systemd-logind[1598]: Session 16 logged out. Waiting for processes to exit. Jan 20 15:03:48.074124 systemd-logind[1598]: Removed session 16. Jan 20 15:03:53.078419 systemd[1]: Started sshd@15-10.0.0.133:22-10.0.0.1:40166.service - OpenSSH per-connection server daemon (10.0.0.1:40166). Jan 20 15:03:53.171291 sshd[4066]: Accepted publickey for core from 10.0.0.1 port 40166 ssh2: RSA SHA256:aQdO5BmgbKwi9SbZPK7cng78+d5Hi+OsrVsH0/FZrlQ Jan 20 15:03:53.174089 sshd-session[4066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:03:53.189276 systemd-logind[1598]: New session 17 of user core. Jan 20 15:03:53.210956 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 15:03:53.339080 sshd[4070]: Connection closed by 10.0.0.1 port 40166 Jan 20 15:03:53.339411 sshd-session[4066]: pam_unix(sshd:session): session closed for user core Jan 20 15:03:53.348257 systemd[1]: sshd@15-10.0.0.133:22-10.0.0.1:40166.service: Deactivated successfully. Jan 20 15:03:53.352161 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 15:03:53.354908 systemd-logind[1598]: Session 17 logged out. Waiting for processes to exit. Jan 20 15:03:53.356552 systemd-logind[1598]: Removed session 17. 
Jan 20 15:03:57.472489 kubelet[2784]: E0120 15:03:57.472364 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:03:58.358511 systemd[1]: Started sshd@16-10.0.0.133:22-10.0.0.1:40170.service - OpenSSH per-connection server daemon (10.0.0.1:40170). Jan 20 15:03:58.429137 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 40170 ssh2: RSA SHA256:aQdO5BmgbKwi9SbZPK7cng78+d5Hi+OsrVsH0/FZrlQ Jan 20 15:03:58.431359 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:03:58.438063 systemd-logind[1598]: New session 18 of user core. Jan 20 15:03:58.451882 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 20 15:03:58.558908 sshd[4109]: Connection closed by 10.0.0.1 port 40170 Jan 20 15:03:58.559441 sshd-session[4105]: pam_unix(sshd:session): session closed for user core Jan 20 15:03:58.565385 systemd[1]: sshd@16-10.0.0.133:22-10.0.0.1:40170.service: Deactivated successfully. Jan 20 15:03:58.568310 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 15:03:58.569482 systemd-logind[1598]: Session 18 logged out. Waiting for processes to exit. Jan 20 15:03:58.572070 systemd-logind[1598]: Removed session 18. Jan 20 15:04:00.471864 kubelet[2784]: E0120 15:04:00.471768 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:04:01.471985 kubelet[2784]: E0120 15:04:01.471863 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:04:03.574797 systemd[1]: Started sshd@17-10.0.0.133:22-10.0.0.1:33836.service - OpenSSH per-connection server daemon (10.0.0.1:33836). Jan 20 15:04:03.635084 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 33836 ssh2: RSA SHA256:aQdO5BmgbKwi9SbZPK7cng78+d5Hi+OsrVsH0/FZrlQ Jan 20 15:04:03.637341 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:04:03.643271 systemd-logind[1598]: New session 19 of user core. Jan 20 15:04:03.655909 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 15:04:03.737789 sshd[4150]: Connection closed by 10.0.0.1 port 33836 Jan 20 15:04:03.738208 sshd-session[4146]: pam_unix(sshd:session): session closed for user core Jan 20 15:04:03.743349 systemd[1]: sshd@17-10.0.0.133:22-10.0.0.1:33836.service: Deactivated successfully. Jan 20 15:04:03.745720 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 15:04:03.747270 systemd-logind[1598]: Session 19 logged out. Waiting for processes to exit. Jan 20 15:04:03.748760 systemd-logind[1598]: Removed session 19. Jan 20 15:04:08.755438 systemd[1]: Started sshd@18-10.0.0.133:22-10.0.0.1:33842.service - OpenSSH per-connection server daemon (10.0.0.1:33842). Jan 20 15:04:08.825559 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 33842 ssh2: RSA SHA256:aQdO5BmgbKwi9SbZPK7cng78+d5Hi+OsrVsH0/FZrlQ Jan 20 15:04:08.827660 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:04:08.833586 systemd-logind[1598]: New session 20 of user core. Jan 20 15:04:08.843836 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 20 15:04:08.935188 sshd[4188]: Connection closed by 10.0.0.1 port 33842 Jan 20 15:04:08.935529 sshd-session[4184]: pam_unix(sshd:session): session closed for user core Jan 20 15:04:08.940508 systemd[1]: sshd@18-10.0.0.133:22-10.0.0.1:33842.service: Deactivated successfully. Jan 20 15:04:08.942757 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 15:04:08.944044 systemd-logind[1598]: Session 20 logged out. Waiting for processes to exit. Jan 20 15:04:08.945541 systemd-logind[1598]: Removed session 20. Jan 20 15:04:12.472260 kubelet[2784]: E0120 15:04:12.472149 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:04:13.471547 kubelet[2784]: E0120 15:04:13.471447 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:04:13.949414 systemd[1]: Started sshd@19-10.0.0.133:22-10.0.0.1:54360.service - OpenSSH per-connection server daemon (10.0.0.1:54360). Jan 20 15:04:14.031841 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 54360 ssh2: RSA SHA256:aQdO5BmgbKwi9SbZPK7cng78+d5Hi+OsrVsH0/FZrlQ Jan 20 15:04:14.035227 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:04:14.042202 systemd-logind[1598]: New session 21 of user core. Jan 20 15:04:14.052105 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 15:04:14.138952 sshd[4226]: Connection closed by 10.0.0.1 port 54360 Jan 20 15:04:14.139380 sshd-session[4222]: pam_unix(sshd:session): session closed for user core Jan 20 15:04:14.144117 systemd[1]: sshd@19-10.0.0.133:22-10.0.0.1:54360.service: Deactivated successfully. Jan 20 15:04:14.146997 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 15:04:14.149267 systemd-logind[1598]: Session 21 logged out. Waiting for processes to exit. Jan 20 15:04:14.151399 systemd-logind[1598]: Removed session 21. Jan 20 15:04:14.471775 kubelet[2784]: E0120 15:04:14.471730 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"