Jan 17 00:27:01.752546 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:27:01.752666 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:27:01.752685 kernel: BIOS-provided physical RAM map:
Jan 17 00:27:01.752695 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 17 00:27:01.752704 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 17 00:27:01.752713 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 17 00:27:01.752841 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 17 00:27:01.752852 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 17 00:27:01.752862 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 17 00:27:01.752872 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 17 00:27:01.752887 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 17 00:27:01.754035 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 17 00:27:01.754096 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 17 00:27:01.754108 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 17 00:27:01.754157 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 17 00:27:01.754169 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 17 00:27:01.754184 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 17 00:27:01.754195 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 17 00:27:01.754205 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 17 00:27:01.754216 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 17 00:27:01.754226 kernel: NX (Execute Disable) protection: active
Jan 17 00:27:01.754237 kernel: APIC: Static calls initialized
Jan 17 00:27:01.754247 kernel: efi: EFI v2.7 by EDK II
Jan 17 00:27:01.754256 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Jan 17 00:27:01.754265 kernel: SMBIOS 2.8 present.
Jan 17 00:27:01.754274 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 17 00:27:01.754284 kernel: Hypervisor detected: KVM
Jan 17 00:27:01.754298 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 00:27:01.754309 kernel: kvm-clock: using sched offset of 16873650851 cycles
Jan 17 00:27:01.754320 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 00:27:01.754330 kernel: tsc: Detected 2445.424 MHz processor
Jan 17 00:27:01.754341 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:27:01.754351 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:27:01.754362 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 17 00:27:01.754372 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 17 00:27:01.754382 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:27:01.754397 kernel: Using GB pages for direct mapping
Jan 17 00:27:01.754407 kernel: Secure boot disabled
Jan 17 00:27:01.754418 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:27:01.754428 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 17 00:27:01.754444 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 17 00:27:01.754455 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:27:01.754466 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:27:01.754482 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 17 00:27:01.754493 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:27:01.754554 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:27:01.754568 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:27:01.754578 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:27:01.754588 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 17 00:27:01.754597 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 17 00:27:01.754614 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 17 00:27:01.754626 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 17 00:27:01.754636 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 17 00:27:01.754645 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 17 00:27:01.754655 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 17 00:27:01.754665 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 17 00:27:01.754675 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 17 00:27:01.754685 kernel: No NUMA configuration found
Jan 17 00:27:01.754821 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 17 00:27:01.754840 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 17 00:27:01.754850 kernel: Zone ranges:
Jan 17 00:27:01.754861 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:27:01.754871 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 17 00:27:01.754881 kernel: Normal empty
Jan 17 00:27:01.754891 kernel: Movable zone start for each node
Jan 17 00:27:01.754964 kernel: Early memory node ranges
Jan 17 00:27:01.754975 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 17 00:27:01.754985 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 17 00:27:01.755000 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 17 00:27:01.755010 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 17 00:27:01.755020 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 17 00:27:01.755030 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 17 00:27:01.755080 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 17 00:27:01.755091 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:27:01.755101 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 17 00:27:01.755111 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 17 00:27:01.755121 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:27:01.755135 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 17 00:27:01.755145 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 17 00:27:01.755156 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 17 00:27:01.755167 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 00:27:01.755179 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 00:27:01.755189 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 00:27:01.755199 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 00:27:01.755208 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 00:27:01.755221 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:27:01.755236 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 00:27:01.755247 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 00:27:01.755259 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:27:01.755268 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 00:27:01.755280 kernel: TSC deadline timer available
Jan 17 00:27:01.755292 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 17 00:27:01.755302 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 00:27:01.755313 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 17 00:27:01.755325 kernel: kvm-guest: setup PV sched yield
Jan 17 00:27:01.755340 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 17 00:27:01.755351 kernel: Booting paravirtualized kernel on KVM
Jan 17 00:27:01.755362 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:27:01.755374 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 17 00:27:01.755385 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Jan 17 00:27:01.755396 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Jan 17 00:27:01.755407 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 17 00:27:01.755418 kernel: kvm-guest: PV spinlocks enabled
Jan 17 00:27:01.755429 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 00:27:01.755446 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:27:01.755507 kernel: random: crng init done
Jan 17 00:27:01.755520 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 00:27:01.755531 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:27:01.755542 kernel: Fallback order for Node 0: 0
Jan 17 00:27:01.755553 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 17 00:27:01.755564 kernel: Policy zone: DMA32
Jan 17 00:27:01.755575 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:27:01.755587 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 166124K reserved, 0K cma-reserved)
Jan 17 00:27:01.755602 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 17 00:27:01.755613 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:27:01.755624 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:27:01.755635 kernel: Dynamic Preempt: voluntary
Jan 17 00:27:01.755646 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:27:01.755810 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:27:01.755838 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 17 00:27:01.755849 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:27:01.755859 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:27:01.755872 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:27:01.755883 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:27:01.755963 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 17 00:27:01.755978 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 17 00:27:01.755990 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:27:01.756001 kernel: Console: colour dummy device 80x25
Jan 17 00:27:01.756013 kernel: printk: console [ttyS0] enabled
Jan 17 00:27:01.756077 kernel: ACPI: Core revision 20230628
Jan 17 00:27:01.756091 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 17 00:27:01.756103 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:27:01.756114 kernel: x2apic enabled
Jan 17 00:27:01.756126 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 00:27:01.756137 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 17 00:27:01.756149 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 17 00:27:01.756161 kernel: kvm-guest: setup PV IPIs
Jan 17 00:27:01.756173 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 00:27:01.756189 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 17 00:27:01.756201 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Jan 17 00:27:01.756212 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 17 00:27:01.756224 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 17 00:27:01.756235 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 17 00:27:01.756247 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:27:01.756259 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 00:27:01.756270 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 17 00:27:01.756282 kernel: Speculative Store Bypass: Vulnerable
Jan 17 00:27:01.756298 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 17 00:27:01.756310 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 17 00:27:01.756322 kernel: active return thunk: srso_alias_return_thunk
Jan 17 00:27:01.756334 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 17 00:27:01.756346 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 17 00:27:01.756401 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:27:01.756415 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:27:01.756426 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:27:01.756442 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:27:01.756454 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 00:27:01.756466 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 17 00:27:01.756478 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:27:01.756489 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:27:01.756983 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:27:01.756998 kernel: landlock: Up and running.
Jan 17 00:27:01.757010 kernel: SELinux: Initializing.
Jan 17 00:27:01.757021 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:27:01.757039 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:27:01.757051 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 17 00:27:01.757062 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 00:27:01.757074 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 00:27:01.757087 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 00:27:01.757099 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 17 00:27:01.757111 kernel: signal: max sigframe size: 1776
Jan 17 00:27:01.757122 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:27:01.757419 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:27:01.757445 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 00:27:01.757457 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:27:01.757468 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:27:01.757480 kernel: .... node #0, CPUs: #1 #2 #3
Jan 17 00:27:01.757492 kernel: smp: Brought up 1 node, 4 CPUs
Jan 17 00:27:01.757504 kernel: smpboot: Max logical packages: 1
Jan 17 00:27:01.757515 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Jan 17 00:27:01.757527 kernel: devtmpfs: initialized
Jan 17 00:27:01.757538 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:27:01.757554 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 17 00:27:01.757566 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 17 00:27:01.757578 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 17 00:27:01.757590 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 17 00:27:01.757602 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 17 00:27:01.757614 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:27:01.757625 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 17 00:27:01.757637 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:27:01.757648 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:27:01.758252 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:27:01.758271 kernel: audit: type=2000 audit(1768609607.820:1): state=initialized audit_enabled=0 res=1
Jan 17 00:27:01.758283 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:27:01.758294 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:27:01.758306 kernel: cpuidle: using governor menu
Jan 17 00:27:01.758318 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:27:01.758330 kernel: dca service started, version 1.12.1
Jan 17 00:27:01.758341 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 17 00:27:01.758353 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 17 00:27:01.758371 kernel: PCI: Using configuration type 1 for base access
Jan 17 00:27:01.758383 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 00:27:01.758395 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:27:01.758406 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:27:01.758418 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:27:01.758429 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:27:01.758441 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:27:01.758453 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:27:01.758465 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:27:01.758480 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 00:27:01.758492 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:27:01.758503 kernel: ACPI: Interpreter enabled
Jan 17 00:27:01.758515 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 17 00:27:01.758527 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:27:01.758537 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:27:01.758549 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 00:27:01.758561 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 17 00:27:01.758574 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:27:01.761490 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:27:01.761843 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 17 00:27:01.762138 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 17 00:27:01.762157 kernel: PCI host bridge to bus 0000:00
Jan 17 00:27:01.762453 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 00:27:01.762965 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 00:27:01.763170 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 00:27:01.763347 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 17 00:27:01.763521 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 17 00:27:01.763709 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 17 00:27:01.764067 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:27:01.764287 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 17 00:27:01.764642 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 17 00:27:01.765069 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 17 00:27:01.765265 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 17 00:27:01.765451 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 17 00:27:01.765633 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 17 00:27:01.766019 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 00:27:01.766226 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 17 00:27:01.766440 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 17 00:27:01.766640 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 17 00:27:01.767196 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 17 00:27:01.767993 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 17 00:27:01.770084 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 17 00:27:01.770343 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 17 00:27:01.770535 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 17 00:27:01.771019 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 00:27:01.771415 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 17 00:27:01.771610 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 17 00:27:01.771986 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 17 00:27:01.772186 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 17 00:27:01.772442 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 17 00:27:01.772632 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 17 00:27:01.773158 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 17 00:27:01.773371 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 17 00:27:01.773552 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 17 00:27:01.774065 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 17 00:27:01.774252 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 17 00:27:01.774271 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 00:27:01.774281 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 00:27:01.774297 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 00:27:01.774307 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 00:27:01.774316 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 17 00:27:01.774326 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 17 00:27:01.774335 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 17 00:27:01.774344 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 17 00:27:01.774354 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 17 00:27:01.774364 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 17 00:27:01.774373 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 17 00:27:01.774387 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 17 00:27:01.774396 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 17 00:27:01.774407 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 17 00:27:01.774420 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 17 00:27:01.774429 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 17 00:27:01.774439 kernel: iommu: Default domain type: Translated
Jan 17 00:27:01.774448 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:27:01.774457 kernel: efivars: Registered efivars operations
Jan 17 00:27:01.774467 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:27:01.774482 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 00:27:01.774494 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 17 00:27:01.774504 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 17 00:27:01.774514 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 17 00:27:01.774523 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 17 00:27:01.774696 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 17 00:27:01.799255 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 17 00:27:01.799468 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 00:27:01.799484 kernel: vgaarb: loaded
Jan 17 00:27:01.799505 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 17 00:27:01.799516 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 17 00:27:01.799526 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 00:27:01.799538 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:27:01.799549 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:27:01.799560 kernel: pnp: PnP ACPI init
Jan 17 00:27:01.800068 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 17 00:27:01.800088 kernel: pnp: PnP ACPI: found 6 devices
Jan 17 00:27:01.800106 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:27:01.800117 kernel: NET: Registered PF_INET protocol family
Jan 17 00:27:01.800128 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 00:27:01.800139 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 00:27:01.800150 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:27:01.800162 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:27:01.800173 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 00:27:01.800184 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 00:27:01.800195 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:27:01.800210 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:27:01.800221 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:27:01.800232 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:27:01.800421 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 17 00:27:01.800605 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 17 00:27:01.800996 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 00:27:01.801181 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 00:27:01.801350 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 00:27:01.801533 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 17 00:27:01.801701 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 17 00:27:01.806510 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 17 00:27:01.806532 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:27:01.806544 kernel: Initialise system trusted keyrings
Jan 17 00:27:01.806555 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 00:27:01.806566 kernel: Key type asymmetric registered
Jan 17 00:27:01.806576 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:27:01.806593 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 00:27:01.806604 kernel: io scheduler mq-deadline registered
Jan 17 00:27:01.806615 kernel: io scheduler kyber registered
Jan 17 00:27:01.806626 kernel: io scheduler bfq registered
Jan 17 00:27:01.806637 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 00:27:01.806649 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 17 00:27:01.806660 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 17 00:27:01.806671 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 17 00:27:01.806682 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:27:01.806692 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 00:27:01.806707 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 00:27:01.806818 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 00:27:01.806834 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 00:27:01.807655 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 17 00:27:01.808071 kernel: rtc_cmos 00:04: registered as rtc0
Jan 17 00:27:01.810157 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 00:27:01.810356 kernel: rtc_cmos 00:04: setting system clock to 2026-01-17T00:26:58 UTC (1768609618)
Jan 17 00:27:01.810544 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 17 00:27:01.810559 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 17 00:27:01.810570 kernel: efifb: probing for efifb
Jan 17 00:27:01.810582 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 17 00:27:01.810594 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 17 00:27:01.810605 kernel: efifb: scrolling: redraw
Jan 17 00:27:01.810616 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 17 00:27:01.810627 kernel: Console: switching to colour frame buffer device 100x37
Jan 17 00:27:01.810639 kernel: fb0: EFI VGA frame buffer device
Jan 17 00:27:01.810656 kernel: pstore: Using crash dump compression: deflate
Jan 17 00:27:01.810667 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 17 00:27:01.810679 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:27:01.810690 kernel: Segment Routing with IPv6
Jan 17 00:27:01.810701 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:27:01.810713 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:27:01.810829 kernel: Key type dns_resolver registered
Jan 17 00:27:01.810842 kernel: IPI shorthand broadcast: enabled
Jan 17 00:27:01.810883 kernel: sched_clock: Marking stable (9184053689, 886925706)->(12278261161, -2207281766)
Jan 17 00:27:01.810969 kernel: registered taskstats version 1
Jan 17 00:27:01.810988 kernel: Loading compiled-in X.509 certificates
Jan 17 00:27:01.811000 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 00:27:01.811012 kernel: Key type .fscrypt registered
Jan 17 00:27:01.811023 kernel: Key type fscrypt-provisioning registered
Jan 17 00:27:01.811035 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:27:01.811047 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:27:01.811059 kernel: ima: No architecture policies found
Jan 17 00:27:01.811070 kernel: clk: Disabling unused clocks
Jan 17 00:27:01.811086 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 00:27:01.811098 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 00:27:01.811109 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 00:27:01.811121 kernel: Run /init as init process
Jan 17 00:27:01.811133 kernel: with arguments:
Jan 17 00:27:01.811145 kernel: /init
Jan 17 00:27:01.811156 kernel: with environment:
Jan 17 00:27:01.811168 kernel: HOME=/
Jan 17 00:27:01.811179 kernel: TERM=linux
Jan 17 00:27:01.811241 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:27:01.811263 systemd[1]: Detected virtualization kvm.
Jan 17 00:27:01.811275 systemd[1]: Detected architecture x86-64.
Jan 17 00:27:01.811287 systemd[1]: Running in initrd.
Jan 17 00:27:01.811299 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:27:01.811311 systemd[1]: Hostname set to .
Jan 17 00:27:01.811323 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:27:01.811340 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:27:01.811352 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:27:01.811364 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:27:01.811378 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 00:27:01.811390 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:27:01.811403 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 00:27:01.811419 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 00:27:01.811434 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 00:27:01.811446 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 00:27:01.811458 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:27:01.811470 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:27:01.811483 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:27:01.812305 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:27:01.812321 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:27:01.812334 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:27:01.812347 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:27:01.812358 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:27:01.812369 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:27:01.812382 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:27:01.812394 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:27:01.812412 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:27:01.812424 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 17 00:27:01.812436 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:27:01.812448 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 00:27:01.812460 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:27:01.812472 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 00:27:01.812484 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 00:27:01.812496 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:27:01.812507 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:27:01.812523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:27:01.812567 systemd-journald[194]: Collecting audit messages is disabled. Jan 17 00:27:01.812595 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 00:27:01.812609 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:27:01.812626 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 00:27:01.812639 systemd-journald[194]: Journal started Jan 17 00:27:01.812666 systemd-journald[194]: Runtime Journal (/run/log/journal/37a5ff931ba947edab014497aff06a5d) is 6.0M, max 48.3M, 42.2M free. Jan 17 00:27:01.833378 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:27:01.888409 systemd-modules-load[195]: Inserted module 'overlay' Jan 17 00:27:01.907886 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:27:01.916679 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:27:01.926414 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:27:01.968571 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 17 00:27:02.000069 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:27:02.016084 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:27:02.066704 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:27:02.070278 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:27:02.127485 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:27:02.168312 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 00:27:02.209536 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 00:27:02.222520 kernel: Bridge firewalling registered Jan 17 00:27:02.223160 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 17 00:27:02.229450 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:27:02.255986 dracut-cmdline[226]: dracut-dracut-053 Jan 17 00:27:02.269690 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:27:02.257993 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:27:02.339192 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:27:02.379213 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:27:02.484812 systemd-resolved[265]: Positive Trust Anchors: Jan 17 00:27:02.485196 systemd-resolved[265]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:27:02.485242 systemd-resolved[265]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:27:02.588311 systemd-resolved[265]: Defaulting to hostname 'linux'. Jan 17 00:27:02.606690 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:27:02.607301 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:27:02.692332 kernel: SCSI subsystem initialized Jan 17 00:27:02.724621 kernel: Loading iSCSI transport class v2.0-870. Jan 17 00:27:02.781063 kernel: iscsi: registered transport (tcp) Jan 17 00:27:02.853162 kernel: iscsi: registered transport (qla4xxx) Jan 17 00:27:02.853257 kernel: QLogic iSCSI HBA Driver Jan 17 00:27:03.100199 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 00:27:03.138539 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 00:27:03.322675 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 17 00:27:03.323097 kernel: device-mapper: uevent: version 1.0.3 Jan 17 00:27:03.329808 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 00:27:03.490150 kernel: raid6: avx2x4 gen() 19026 MB/s Jan 17 00:27:03.509077 kernel: raid6: avx2x2 gen() 18745 MB/s Jan 17 00:27:03.532342 kernel: raid6: avx2x1 gen() 12237 MB/s Jan 17 00:27:03.532415 kernel: raid6: using algorithm avx2x4 gen() 19026 MB/s Jan 17 00:27:03.556229 kernel: raid6: .... xor() 2221 MB/s, rmw enabled Jan 17 00:27:03.556391 kernel: raid6: using avx2x2 recovery algorithm Jan 17 00:27:03.653435 kernel: xor: automatically using best checksumming function avx Jan 17 00:27:04.356296 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 00:27:04.398376 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:27:04.446155 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:27:04.503242 systemd-udevd[416]: Using default interface naming scheme 'v255'. Jan 17 00:27:04.518642 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:27:04.562189 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 00:27:04.645701 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation Jan 17 00:27:04.859698 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:27:04.906574 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:27:05.180211 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:27:05.266649 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 00:27:05.344356 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 00:27:05.356394 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 17 00:27:05.394516 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:27:05.413458 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:27:05.439110 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 00:27:05.495179 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:27:05.509683 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 00:27:05.509829 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 17 00:27:05.523880 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 17 00:27:05.553045 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 00:27:05.553127 kernel: GPT:9289727 != 19775487 Jan 17 00:27:05.553148 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 00:27:05.558151 kernel: GPT:9289727 != 19775487 Jan 17 00:27:05.558610 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:27:05.607794 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 00:27:05.607836 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:27:05.559079 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:27:05.617641 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:27:05.622695 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:27:05.623711 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:27:05.624114 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:27:05.685013 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:27:05.825024 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 17 00:27:05.858995 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:27:05.922074 kernel: libata version 3.00 loaded. Jan 17 00:27:05.960984 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:27:06.012520 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 00:27:06.023146 kernel: AES CTR mode by8 optimization enabled Jan 17 00:27:06.039472 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 17 00:27:06.087669 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (468) Jan 17 00:27:06.104678 kernel: ahci 0000:00:1f.2: version 3.0 Jan 17 00:27:06.105217 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (463) Jan 17 00:27:06.105242 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 17 00:27:06.120977 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 17 00:27:06.121493 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 17 00:27:06.136305 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jan 17 00:27:06.206615 kernel: scsi host0: ahci Jan 17 00:27:06.207183 kernel: scsi host1: ahci Jan 17 00:27:06.207465 kernel: scsi host2: ahci Jan 17 00:27:06.207855 kernel: scsi host3: ahci Jan 17 00:27:06.208211 kernel: scsi host4: ahci Jan 17 00:27:06.208552 kernel: scsi host5: ahci Jan 17 00:27:06.209609 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 17 00:27:06.209632 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 17 00:27:06.209650 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 17 00:27:06.209666 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 17 00:27:06.209692 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 17 00:27:06.209710 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 17 00:27:06.192039 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 00:27:06.250523 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 00:27:06.266978 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 00:27:06.325277 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 00:27:06.355067 disk-uuid[565]: Primary Header is updated. Jan 17 00:27:06.355067 disk-uuid[565]: Secondary Entries is updated. Jan 17 00:27:06.355067 disk-uuid[565]: Secondary Header is updated. 
Jan 17 00:27:06.395392 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:27:06.407211 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:27:06.433847 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:27:06.567428 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 17 00:27:06.567499 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 17 00:27:06.567516 kernel: ata3.00: applying bridge limits Jan 17 00:27:06.594039 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 17 00:27:06.615131 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 17 00:27:06.631183 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 17 00:27:06.659715 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 17 00:27:06.659882 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 17 00:27:06.678354 kernel: ata3.00: configured for UDMA/100 Jan 17 00:27:06.700441 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 17 00:27:06.922564 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 17 00:27:06.925169 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 00:27:06.972302 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 17 00:27:07.438208 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:27:07.446883 disk-uuid[567]: The operation has completed successfully. Jan 17 00:27:07.588578 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 00:27:07.589029 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 00:27:07.646638 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 00:27:07.687351 sh[599]: Success Jan 17 00:27:07.814163 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 17 00:27:08.122187 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 00:27:08.163540 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jan 17 00:27:08.197464 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 00:27:08.328429 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a Jan 17 00:27:08.331286 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:27:08.331369 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 00:27:08.340521 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 00:27:08.345051 kernel: BTRFS info (device dm-0): using free space tree Jan 17 00:27:08.511270 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 00:27:08.526702 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 00:27:08.613462 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 00:27:08.654181 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 00:27:08.729613 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:27:08.729693 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:27:08.729712 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:27:08.788192 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:27:08.854288 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 00:27:08.880100 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:27:08.936331 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 00:27:08.972571 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 00:27:09.337536 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 17 00:27:09.377507 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:27:09.406270 ignition[705]: Ignition 2.19.0 Jan 17 00:27:09.406359 ignition[705]: Stage: fetch-offline Jan 17 00:27:09.406625 ignition[705]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:27:09.406649 ignition[705]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:27:09.407063 ignition[705]: parsed url from cmdline: "" Jan 17 00:27:09.407070 ignition[705]: no config URL provided Jan 17 00:27:09.407080 ignition[705]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:27:09.443571 systemd-networkd[786]: lo: Link UP Jan 17 00:27:09.407099 ignition[705]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:27:09.443578 systemd-networkd[786]: lo: Gained carrier Jan 17 00:27:09.407223 ignition[705]: op(1): [started] loading QEMU firmware config module Jan 17 00:27:09.448707 systemd-networkd[786]: Enumeration completed Jan 17 00:27:09.407234 ignition[705]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 17 00:27:09.449004 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:27:09.453505 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:27:09.453512 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:27:09.520092 ignition[705]: op(1): [finished] loading QEMU firmware config module Jan 17 00:27:09.465342 systemd-networkd[786]: eth0: Link UP Jan 17 00:27:09.465350 systemd-networkd[786]: eth0: Gained carrier Jan 17 00:27:09.465367 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:27:09.468360 systemd[1]: Reached target network.target - Network. 
Jan 17 00:27:09.629406 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 00:27:09.767573 ignition[705]: parsing config with SHA512: 5d0175018979e01db43d18566072415cc907b7e1b7b4b56858ffccceb479d6effb67e2be6b5712fd6f10c252a78d3e20ca26cca4a25e8b68ae77c3ca29180d92 Jan 17 00:27:09.789677 unknown[705]: fetched base config from "system" Jan 17 00:27:09.789699 unknown[705]: fetched user config from "qemu" Jan 17 00:27:09.791252 ignition[705]: fetch-offline: fetch-offline passed Jan 17 00:27:09.792263 ignition[705]: Ignition finished successfully Jan 17 00:27:09.823028 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:27:09.842435 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 17 00:27:09.895068 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 00:27:10.026599 ignition[792]: Ignition 2.19.0 Jan 17 00:27:10.026653 ignition[792]: Stage: kargs Jan 17 00:27:10.027144 ignition[792]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:27:10.027399 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:27:10.029316 ignition[792]: kargs: kargs passed Jan 17 00:27:10.029381 ignition[792]: Ignition finished successfully Jan 17 00:27:10.071689 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 00:27:10.141338 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 00:27:10.318660 ignition[800]: Ignition 2.19.0 Jan 17 00:27:10.318714 ignition[800]: Stage: disks Jan 17 00:27:10.329910 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 00:27:10.319410 ignition[800]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:27:10.341559 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Jan 17 00:27:10.319429 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:27:10.350856 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:27:10.323304 ignition[800]: disks: disks passed Jan 17 00:27:10.351008 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:27:10.323382 ignition[800]: Ignition finished successfully Jan 17 00:27:10.351072 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:27:10.351122 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:27:10.432476 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 00:27:10.527913 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 00:27:10.543428 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 00:27:10.806458 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 00:27:11.374255 systemd-networkd[786]: eth0: Gained IPv6LL Jan 17 00:27:11.491078 kernel: EXT4-fs (vda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none. Jan 17 00:27:11.493693 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 00:27:11.500999 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 00:27:11.563683 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:27:11.583048 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 00:27:11.602413 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (819) Jan 17 00:27:11.605053 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jan 17 00:27:11.647141 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:27:11.647260 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:27:11.647278 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:27:11.647294 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:27:11.614902 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 00:27:11.615073 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:27:11.700268 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:27:11.707316 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 00:27:11.752279 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 00:27:11.950190 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 00:27:11.987698 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Jan 17 00:27:12.008636 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 00:27:12.034884 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 00:27:12.503019 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 00:27:12.564291 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 00:27:12.587281 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 00:27:12.645343 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:27:12.620345 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 00:27:12.751629 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 17 00:27:13.772288 ignition[932]: INFO : Ignition 2.19.0 Jan 17 00:27:13.772288 ignition[932]: INFO : Stage: mount Jan 17 00:27:13.830316 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:27:13.830316 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:27:13.830316 ignition[932]: INFO : mount: mount passed Jan 17 00:27:13.830316 ignition[932]: INFO : Ignition finished successfully Jan 17 00:27:13.806316 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:27:13.898400 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:27:13.936268 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:27:14.004856 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944) Jan 17 00:27:14.033258 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:27:14.033342 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:27:14.040214 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:27:14.093859 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:27:14.109489 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 00:27:14.241787 ignition[962]: INFO : Ignition 2.19.0 Jan 17 00:27:14.241787 ignition[962]: INFO : Stage: files Jan 17 00:27:14.241787 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:27:14.241787 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:27:14.241787 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:27:14.281448 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:27:14.281448 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:27:14.304647 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:27:14.304647 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:27:14.304647 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:27:14.304647 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 17 00:27:14.304647 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 17 00:27:14.476611 kernel: hrtimer: interrupt took 17933219 ns Jan 17 00:27:14.287181 unknown[962]: wrote ssh authorized keys file for user: core Jan 17 00:27:14.604160 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 00:27:15.622040 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 17 00:27:15.622040 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:27:15.663622 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
[finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:27:15.663622 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:27:15.663622 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:27:15.663622 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:27:15.663622 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:27:15.663622 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:27:15.758434 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:27:15.758434 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:27:15.758434 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:27:15.758434 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 17 00:27:15.758434 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 17 00:27:15.758434 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 17 00:27:15.758434 ignition[962]: INFO : files: createFilesystemsFiles: 
createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 17 00:27:16.040601 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 00:27:17.046124 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 17 00:27:17.046124 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 00:27:17.065796 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:27:17.079182 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:27:17.079182 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 00:27:17.079182 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 17 00:27:17.079182 ignition[962]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 00:27:17.121428 ignition[962]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 00:27:17.121428 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 17 00:27:17.121428 ignition[962]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 17 00:27:17.296863 ignition[962]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 00:27:17.326410 ignition[962]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 00:27:17.338699 ignition[962]: INFO : files: op(f): [finished] 
setting preset to disabled for "coreos-metadata.service" Jan 17 00:27:17.338699 ignition[962]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:27:17.338699 ignition[962]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:27:17.338699 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:27:17.338699 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:27:17.338699 ignition[962]: INFO : files: files passed Jan 17 00:27:17.338699 ignition[962]: INFO : Ignition finished successfully Jan 17 00:27:17.362812 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:27:17.413211 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:27:17.423547 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:27:17.435128 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:27:17.435273 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:27:17.472295 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory Jan 17 00:27:17.479264 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:27:17.479264 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:27:17.475342 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:27:17.531594 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:27:17.485851 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Jan 17 00:27:17.544197 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:27:17.599575 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:27:17.600022 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:27:17.621177 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:27:17.628850 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:27:17.635920 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:27:17.671471 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:27:17.732547 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:27:17.768581 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:27:17.833215 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:27:17.833655 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:27:17.869479 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:27:17.870012 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:27:17.870216 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:27:17.897160 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:27:17.906537 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:27:17.913290 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:27:17.922999 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:27:17.930110 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:27:17.939165 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jan 17 00:27:17.948066 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:27:17.966413 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:27:17.983247 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:27:17.992667 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:27:18.003865 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:27:18.004229 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:27:18.025294 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:27:18.058051 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:27:18.070565 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:27:18.071502 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:27:18.085282 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:27:18.085595 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:27:18.097173 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:27:18.097431 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:27:18.109198 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:27:18.115248 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:27:18.116560 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:27:18.119019 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:27:18.119923 systemd[1]: Stopped target sockets.target - Socket Units. 
Jan 17 00:27:18.293014 ignition[1015]: INFO : Ignition 2.19.0 Jan 17 00:27:18.293014 ignition[1015]: INFO : Stage: umount Jan 17 00:27:18.293014 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:27:18.293014 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:27:18.293014 ignition[1015]: INFO : umount: umount passed Jan 17 00:27:18.293014 ignition[1015]: INFO : Ignition finished successfully Jan 17 00:27:18.127215 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:27:18.127452 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:27:18.129252 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:27:18.129399 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:27:18.131087 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:27:18.131377 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:27:18.138862 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:27:18.139169 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:27:18.203354 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:27:18.216654 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:27:18.217164 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:27:18.238916 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:27:18.254375 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:27:18.254851 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:27:18.268411 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:27:18.268537 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:27:18.293040 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 17 00:27:18.294503 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:27:18.295161 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:27:18.305487 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:27:18.306248 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:27:18.321209 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:27:18.321405 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:27:18.338013 systemd[1]: Stopped target network.target - Network. Jan 17 00:27:18.351899 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:27:18.352089 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:27:18.365839 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:27:18.366026 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:27:18.378836 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:27:18.378918 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:27:18.385293 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:27:18.385358 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:27:18.393243 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:27:18.393568 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:27:18.406076 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:27:18.420640 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:27:18.436627 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:27:18.437129 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Jan 17 00:27:18.444384 systemd-networkd[786]: eth0: DHCPv6 lease lost Jan 17 00:27:18.466572 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:27:18.467552 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:27:18.486592 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:27:18.486687 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:27:18.564426 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:27:18.580923 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:27:18.581123 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:27:18.610886 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:27:18.611075 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:27:18.630035 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:27:18.630154 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:27:18.645302 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:27:18.645431 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:27:18.673309 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:27:18.730664 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:27:18.731928 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:27:19.169523 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Jan 17 00:27:18.745360 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:27:18.745534 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:27:18.765543 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 17 00:27:18.765646 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:27:18.786369 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:27:18.786442 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:27:18.798100 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:27:18.798187 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:27:18.815150 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:27:18.815257 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:27:18.830403 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:27:18.830480 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:27:18.868350 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:27:18.895047 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:27:18.895167 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:27:18.910381 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:27:18.910481 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:27:18.913467 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:27:18.913685 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:27:18.929069 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:27:18.936140 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:27:18.980012 systemd[1]: Switching root. 
Jan 17 00:27:19.398673 systemd-journald[194]: Journal stopped Jan 17 00:27:24.512888 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:27:24.518632 kernel: SELinux: policy capability open_perms=1 Jan 17 00:27:24.518671 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:27:24.518691 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:27:24.518803 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:27:24.518826 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:27:24.518854 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:27:24.518878 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:27:24.518898 kernel: audit: type=1403 audit(1768609639.602:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:27:24.518922 systemd[1]: Successfully loaded SELinux policy in 110.730ms. Jan 17 00:27:24.519642 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 34.409ms. Jan 17 00:27:24.519675 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:27:24.519698 systemd[1]: Detected virtualization kvm. Jan 17 00:27:24.519891 systemd[1]: Detected architecture x86-64. Jan 17 00:27:24.519914 systemd[1]: Detected first boot. Jan 17 00:27:24.519932 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:27:24.519949 zram_generator::config[1059]: No configuration found. Jan 17 00:27:24.520034 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:27:24.520057 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 00:27:24.520078 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Jan 17 00:27:24.520099 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 00:27:24.520130 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:27:24.520154 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:27:24.520185 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:27:24.520205 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:27:24.520226 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:27:24.520246 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:27:24.520268 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:27:24.520292 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:27:24.520319 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:27:24.520342 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:27:24.520363 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:27:24.520384 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:27:24.520406 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 00:27:24.520428 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:27:24.520449 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 00:27:24.520469 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:27:24.520489 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Jan 17 00:27:24.520516 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 00:27:24.520538 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 00:27:24.520558 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:27:24.520579 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:27:24.520600 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:27:24.520620 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:27:24.520641 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:27:24.520661 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:27:24.520689 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:27:24.520710 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:27:24.521231 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:27:24.521271 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:27:24.521290 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:27:24.521307 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:27:24.521324 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:27:24.521343 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:27:24.521361 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:27:24.521445 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:27:24.521464 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:27:24.521481 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 17 00:27:24.521500 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:27:24.521517 systemd[1]: Reached target machines.target - Containers. Jan 17 00:27:24.521535 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:27:24.521552 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:27:24.521570 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:27:24.521592 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:27:24.521609 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:27:24.521627 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:27:24.521643 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:27:24.521661 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 00:27:24.521678 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:27:24.521697 kernel: fuse: init (API version 7.39) Jan 17 00:27:24.521823 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:27:24.521851 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 00:27:24.521879 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 00:27:24.521899 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 00:27:24.521920 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 00:27:24.521937 kernel: loop: module loaded Jan 17 00:27:24.522021 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jan 17 00:27:24.522040 kernel: ACPI: bus type drm_connector registered Jan 17 00:27:24.522057 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:27:24.522074 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:27:24.522091 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:27:24.522208 systemd-journald[1144]: Collecting audit messages is disabled. Jan 17 00:27:24.522243 systemd-journald[1144]: Journal started Jan 17 00:27:24.522272 systemd-journald[1144]: Runtime Journal (/run/log/journal/37a5ff931ba947edab014497aff06a5d) is 6.0M, max 48.3M, 42.2M free. Jan 17 00:27:21.352437 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:27:21.396473 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 00:27:21.402935 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 00:27:21.403627 systemd[1]: systemd-journald.service: Consumed 3.235s CPU time. Jan 17 00:27:24.550368 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:27:24.569230 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 00:27:24.572063 systemd[1]: Stopped verity-setup.service. Jan 17 00:27:24.613246 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:27:24.626406 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:27:24.652018 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:27:24.672166 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:27:24.715258 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:27:24.734233 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jan 17 00:27:24.747156 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:27:24.760237 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:27:24.799867 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:27:24.829505 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:27:24.865313 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:27:24.865795 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:27:24.914477 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:27:24.917555 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:27:24.935525 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:27:24.938147 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:27:24.963941 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:27:24.964377 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:27:25.002613 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:27:25.007183 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:27:25.035172 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:27:25.035438 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:27:25.060686 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:27:25.105059 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:27:25.119003 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:27:25.202637 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jan 17 00:27:25.239022 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:27:25.305888 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:27:25.311055 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:27:25.311141 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:27:25.329148 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:27:25.354225 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:27:25.372388 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:27:25.407451 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:27:25.419466 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:27:25.433699 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:27:25.453838 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:27:25.473360 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:27:25.500285 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:27:25.505320 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:27:25.514914 systemd-journald[1144]: Time spent on flushing to /var/log/journal/37a5ff931ba947edab014497aff06a5d is 126.060ms for 981 entries. Jan 17 00:27:25.514914 systemd-journald[1144]: System Journal (/var/log/journal/37a5ff931ba947edab014497aff06a5d) is 8.0M, max 195.6M, 187.6M free. 
Jan 17 00:27:25.749543 systemd-journald[1144]: Received client request to flush runtime journal. Jan 17 00:27:25.749593 kernel: loop0: detected capacity change from 0 to 142488 Jan 17 00:27:25.534190 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:27:25.568511 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:27:25.619554 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:27:25.628051 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:27:25.636061 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:27:25.646712 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:27:25.671264 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:27:25.718698 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:27:25.825209 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:27:25.966471 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:27:26.421457 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:27:28.071944 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:27:29.861929 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:27:29.872156 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 00:27:29.954892 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:27:30.003650 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jan 17 00:27:30.012892 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:27:30.084496 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:27:30.112265 kernel: loop1: detected capacity change from 0 to 140768 Jan 17 00:27:30.343156 kernel: loop2: detected capacity change from 0 to 219144 Jan 17 00:27:30.423276 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jan 17 00:27:30.423301 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jan 17 00:27:30.461579 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:27:30.508655 kernel: loop3: detected capacity change from 0 to 142488 Jan 17 00:27:30.630310 kernel: loop4: detected capacity change from 0 to 140768 Jan 17 00:27:30.710815 kernel: loop5: detected capacity change from 0 to 219144 Jan 17 00:27:30.796538 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 17 00:27:30.798018 (sd-merge)[1200]: Merged extensions into '/usr'. Jan 17 00:27:30.868131 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:27:30.868226 systemd[1]: Reloading... Jan 17 00:27:31.261070 zram_generator::config[1223]: No configuration found. Jan 17 00:27:32.138708 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:27:32.154785 ldconfig[1169]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:27:32.333046 systemd[1]: Reloading finished in 1463 ms. Jan 17 00:27:32.696900 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Jan 17 00:27:32.706565 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:27:32.758255 systemd[1]: Starting ensure-sysext.service... Jan 17 00:27:32.773127 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:27:32.801209 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:27:32.839483 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:27:32.841344 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:27:32.842519 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:27:32.845022 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:27:32.845526 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Jan 17 00:27:32.845675 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Jan 17 00:27:32.847108 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:27:32.847176 systemd[1]: Reloading... Jan 17 00:27:32.852964 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:27:32.853043 systemd-tmpfiles[1264]: Skipping /boot Jan 17 00:27:32.888565 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:27:32.889319 systemd-tmpfiles[1264]: Skipping /boot Jan 17 00:27:33.004453 systemd-udevd[1267]: Using default interface naming scheme 'v255'. Jan 17 00:27:33.168156 zram_generator::config[1293]: No configuration found. 
Jan 17 00:27:33.638829 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1321)
Jan 17 00:27:33.653836 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 17 00:27:33.704883 kernel: ACPI: button: Power Button [PWRF]
Jan 17 00:27:33.728426 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:27:34.123041 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 17 00:27:34.123537 systemd[1]: Reloading finished in 1274 ms.
Jan 17 00:27:34.143861 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 17 00:27:34.160703 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:27:34.763972 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:27:34.851370 systemd[1]: Finished ensure-sysext.service.
Jan 17 00:27:34.932107 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 00:27:35.008656 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:27:35.114934 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 00:27:35.161144 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 00:27:35.171261 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:27:35.174672 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:27:35.196879 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:27:35.213194 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:27:35.228549 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:27:35.242273 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:27:35.248834 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 17 00:27:35.265542 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 00:27:35.306697 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:27:35.330395 augenrules[1386]: No rules
Jan 17 00:27:35.331436 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:27:35.368225 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 17 00:27:35.388804 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 00:27:35.408294 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:27:35.418830 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:27:35.420519 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 00:27:35.436395 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:27:35.436715 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:27:35.592327 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:27:35.592682 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:27:35.603565 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 17 00:27:35.616657 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:27:35.618337 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:27:35.627624 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:27:35.628883 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:27:35.639514 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 17 00:27:35.651157 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 17 00:27:35.700789 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:27:35.701901 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:27:35.780031 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 17 00:27:35.864134 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 17 00:27:35.915888 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 00:27:35.980876 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 17 00:27:36.007080 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 17 00:27:36.205690 kernel: mousedev: PS/2 mouse device common for all mice
Jan 17 00:27:36.214090 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 17 00:27:36.214600 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 17 00:27:36.219135 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 17 00:27:36.227404 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 17 00:27:36.243301 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:27:36.560848 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 17 00:27:36.564618 kernel: kvm_amd: TSC scaling supported
Jan 17 00:27:36.564685 kernel: kvm_amd: Nested Virtualization enabled
Jan 17 00:27:36.564711 kernel: kvm_amd: Nested Paging enabled
Jan 17 00:27:36.564821 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 17 00:27:36.602176 kernel: kvm_amd: PMU virtualization is disabled
Jan 17 00:27:37.341301 kernel: EDAC MC: Ver: 3.0.0
Jan 17 00:27:37.365481 systemd-networkd[1383]: lo: Link UP
Jan 17 00:27:37.365497 systemd-networkd[1383]: lo: Gained carrier
Jan 17 00:27:37.369490 systemd-networkd[1383]: Enumeration completed
Jan 17 00:27:37.369672 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:27:37.371503 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:27:37.371804 systemd-networkd[1383]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:27:37.411101 systemd-networkd[1383]: eth0: Link UP
Jan 17 00:27:37.411239 systemd-networkd[1383]: eth0: Gained carrier
Jan 17 00:27:37.411272 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:27:37.420957 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 17 00:27:37.428394 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 17 00:27:37.436226 systemd-resolved[1391]: Positive Trust Anchors:
Jan 17 00:27:37.436711 systemd-resolved[1391]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:27:37.436934 systemd-resolved[1391]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:27:37.440099 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 17 00:27:37.440137 systemd-networkd[1383]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 17 00:27:37.443123 systemd-timesyncd[1392]: Network configuration changed, trying to establish connection.
Jan 17 00:27:37.453470 systemd[1]: Reached target time-set.target - System Time Set.
Jan 17 00:27:37.455339 systemd-resolved[1391]: Defaulting to hostname 'linux'.
Jan 17 00:27:37.466350 systemd-timesyncd[1392]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 17 00:27:37.466531 systemd-timesyncd[1392]: Initial clock synchronization to Sat 2026-01-17 00:27:37.840000 UTC.
Jan 17 00:27:37.504402 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 17 00:27:37.516517 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:27:37.532611 systemd[1]: Reached target network.target - Network.
Jan 17 00:27:37.542416 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:27:37.604021 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 00:27:37.666215 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 17 00:27:37.936673 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:27:37.969576 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:27:38.118687 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 17 00:27:38.203327 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 17 00:27:38.227995 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 17 00:27:38.240756 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 17 00:27:38.258486 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 17 00:27:38.316252 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 17 00:27:38.316639 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:27:38.321995 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:27:38.330258 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 17 00:27:38.341270 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 17 00:27:38.370872 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 17 00:27:38.385928 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 17 00:27:38.432134 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 17 00:27:38.448250 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:27:38.456093 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:27:38.461352 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 17 00:27:38.461444 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 17 00:27:38.468464 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 17 00:27:38.476694 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 17 00:27:38.489162 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 17 00:27:38.546210 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 17 00:27:38.554585 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 17 00:27:38.561574 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 17 00:27:38.573723 jq[1432]: false
Jan 17 00:27:38.576123 systemd-networkd[1383]: eth0: Gained IPv6LL
Jan 17 00:27:38.580025 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 17 00:27:38.599093 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 17 00:27:38.649396 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 00:27:38.672245 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 17 00:27:38.686878 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 17 00:27:38.696201 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 17 00:27:38.697219 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 17 00:27:38.710122 systemd[1]: Starting update-engine.service - Update Engine...
Jan 17 00:27:38.725344 extend-filesystems[1433]: Found loop3
Jan 17 00:27:38.733636 extend-filesystems[1433]: Found loop4
Jan 17 00:27:38.733636 extend-filesystems[1433]: Found loop5
Jan 17 00:27:38.733636 extend-filesystems[1433]: Found sr0
Jan 17 00:27:38.733636 extend-filesystems[1433]: Found vda
Jan 17 00:27:38.733636 extend-filesystems[1433]: Found vda1
Jan 17 00:27:38.733636 extend-filesystems[1433]: Found vda2
Jan 17 00:27:38.733636 extend-filesystems[1433]: Found vda3
Jan 17 00:27:38.733636 extend-filesystems[1433]: Found usr
Jan 17 00:27:38.733636 extend-filesystems[1433]: Found vda4
Jan 17 00:27:38.733636 extend-filesystems[1433]: Found vda6
Jan 17 00:27:38.733636 extend-filesystems[1433]: Found vda7
Jan 17 00:27:38.733636 extend-filesystems[1433]: Found vda9
Jan 17 00:27:38.733636 extend-filesystems[1433]: Checking size of /dev/vda9
Jan 17 00:27:38.960286 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1307)
Jan 17 00:27:38.960356 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 17 00:27:38.729001 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 17 00:27:38.961655 extend-filesystems[1433]: Resized partition /dev/vda9
Jan 17 00:27:38.762603 dbus-daemon[1431]: [system] SELinux support is enabled
Jan 17 00:27:38.758098 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 17 00:27:38.992970 extend-filesystems[1453]: resize2fs 1.47.1 (20-May-2024)
Jan 17 00:27:38.777123 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 17 00:27:39.100545 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 17 00:27:39.100662 jq[1449]: true
Jan 17 00:27:38.817128 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 17 00:27:39.101200 extend-filesystems[1453]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 17 00:27:39.101200 extend-filesystems[1453]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 17 00:27:39.101200 extend-filesystems[1453]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 17 00:27:39.222091 update_engine[1446]: I20260117 00:27:39.078409 1446 main.cc:92] Flatcar Update Engine starting
Jan 17 00:27:39.222091 update_engine[1446]: I20260117 00:27:39.085238 1446 update_check_scheduler.cc:74] Next update check in 10m55s
Jan 17 00:27:39.222671 tar[1457]: linux-amd64/LICENSE
Jan 17 00:27:39.222671 tar[1457]: linux-amd64/helm
Jan 17 00:27:38.852647 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 17 00:27:39.226120 extend-filesystems[1433]: Resized filesystem in /dev/vda9
Jan 17 00:27:39.235311 jq[1458]: true
Jan 17 00:27:38.853143 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 17 00:27:38.853757 systemd[1]: motdgen.service: Deactivated successfully.
Jan 17 00:27:38.854210 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 17 00:27:38.856584 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 17 00:27:38.857214 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 17 00:27:39.015919 systemd[1]: Reached target network-online.target - Network is Online.
Jan 17 00:27:39.049118 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 17 00:27:39.061013 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:27:39.069441 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 17 00:27:39.076947 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 17 00:27:39.076990 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 17 00:27:39.087201 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 17 00:27:39.087225 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 17 00:27:39.093079 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 17 00:27:39.093182 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 17 00:27:39.103065 systemd-logind[1444]: New seat seat0.
Jan 17 00:27:39.114220 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 17 00:27:39.139478 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 17 00:27:39.139915 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 17 00:27:39.195424 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 17 00:27:39.269333 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 17 00:27:39.275647 systemd[1]: Started update-engine.service - Update Engine.
Jan 17 00:27:39.344506 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 17 00:27:39.363473 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 17 00:27:39.364088 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 17 00:27:39.397589 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 17 00:27:39.754945 bash[1504]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 00:27:39.762468 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 17 00:27:39.773050 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 17 00:27:39.829065 sshd_keygen[1454]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 17 00:27:39.838445 locksmithd[1496]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 17 00:27:40.058967 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 17 00:27:40.108351 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 17 00:27:40.217172 systemd[1]: issuegen.service: Deactivated successfully.
Jan 17 00:27:40.217652 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 17 00:27:40.289078 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 17 00:27:40.487701 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 17 00:27:40.522991 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 17 00:27:40.569271 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 17 00:27:40.584253 systemd[1]: Reached target getty.target - Login Prompts.
Jan 17 00:27:42.198107 containerd[1476]: time="2026-01-17T00:27:42.197528940Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 17 00:27:42.617119 containerd[1476]: time="2026-01-17T00:27:42.615234566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:27:42.638457 containerd[1476]: time="2026-01-17T00:27:42.638387957Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..."
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:27:42.638693 containerd[1476]: time="2026-01-17T00:27:42.638673296Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 17 00:27:42.638871 containerd[1476]: time="2026-01-17T00:27:42.638848693Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 17 00:27:42.639465 containerd[1476]: time="2026-01-17T00:27:42.639433843Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 17 00:27:42.639655 containerd[1476]: time="2026-01-17T00:27:42.639627520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 17 00:27:42.648373 containerd[1476]: time="2026-01-17T00:27:42.647475084Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:27:42.648373 containerd[1476]: time="2026-01-17T00:27:42.647511285Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:27:42.648373 containerd[1476]: time="2026-01-17T00:27:42.648123517Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:27:42.653169 containerd[1476]: time="2026-01-17T00:27:42.650157451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..."
type=io.containerd.snapshotter.v1
Jan 17 00:27:42.653169 containerd[1476]: time="2026-01-17T00:27:42.650189144Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:27:42.653169 containerd[1476]: time="2026-01-17T00:27:42.650264140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 17 00:27:42.653169 containerd[1476]: time="2026-01-17T00:27:42.650550003Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:27:42.653169 containerd[1476]: time="2026-01-17T00:27:42.651482201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:27:42.653169 containerd[1476]: time="2026-01-17T00:27:42.651718245Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:27:42.653169 containerd[1476]: time="2026-01-17T00:27:42.651742568Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 17 00:27:42.653169 containerd[1476]: time="2026-01-17T00:27:42.652093382Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 17 00:27:42.653169 containerd[1476]: time="2026-01-17T00:27:42.652326339Z" level=info msg="metadata content store policy set" policy=shared
Jan 17 00:27:42.693601 containerd[1476]: time="2026-01-17T00:27:42.690072902Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 17 00:27:42.693601 containerd[1476]: time="2026-01-17T00:27:42.690253137Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..."
type=io.containerd.differ.v1
Jan 17 00:27:42.693601 containerd[1476]: time="2026-01-17T00:27:42.690353970Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 17 00:27:42.693601 containerd[1476]: time="2026-01-17T00:27:42.690387577Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 17 00:27:42.693601 containerd[1476]: time="2026-01-17T00:27:42.690414093Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 17 00:27:42.693601 containerd[1476]: time="2026-01-17T00:27:42.690903361Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 17 00:27:42.693601 containerd[1476]: time="2026-01-17T00:27:42.691923544Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 17 00:27:42.693601 containerd[1476]: time="2026-01-17T00:27:42.692406966Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 17 00:27:42.693601 containerd[1476]: time="2026-01-17T00:27:42.692437198Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 17 00:27:42.693601 containerd[1476]: time="2026-01-17T00:27:42.692458659Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 17 00:27:42.693601 containerd[1476]: time="2026-01-17T00:27:42.692479976Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 17 00:27:42.693601 containerd[1476]: time="2026-01-17T00:27:42.692502055Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..."
type=io.containerd.service.v1
Jan 17 00:27:42.693601 containerd[1476]: time="2026-01-17T00:27:42.692578935Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 17 00:27:42.693601 containerd[1476]: time="2026-01-17T00:27:42.692645214Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 17 00:27:42.694232 containerd[1476]: time="2026-01-17T00:27:42.692673603Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 17 00:27:42.694232 containerd[1476]: time="2026-01-17T00:27:42.692871376Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 17 00:27:42.694232 containerd[1476]: time="2026-01-17T00:27:42.692941278Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 17 00:27:42.694232 containerd[1476]: time="2026-01-17T00:27:42.692963429Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 17 00:27:42.694232 containerd[1476]: time="2026-01-17T00:27:42.693151703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 17 00:27:42.694232 containerd[1476]: time="2026-01-17T00:27:42.693215757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 17 00:27:42.694232 containerd[1476]: time="2026-01-17T00:27:42.693240678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 17 00:27:42.694232 containerd[1476]: time="2026-01-17T00:27:42.693261459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..."
type=io.containerd.grpc.v1
Jan 17 00:27:42.694232 containerd[1476]: time="2026-01-17T00:27:42.693279555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 17 00:27:42.694232 containerd[1476]: time="2026-01-17T00:27:42.693297548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 17 00:27:42.694232 containerd[1476]: time="2026-01-17T00:27:42.693314850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 17 00:27:42.706144 containerd[1476]: time="2026-01-17T00:27:42.702190112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 17 00:27:42.706144 containerd[1476]: time="2026-01-17T00:27:42.702272653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 17 00:27:42.706144 containerd[1476]: time="2026-01-17T00:27:42.702306775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 17 00:27:42.706144 containerd[1476]: time="2026-01-17T00:27:42.702329255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 17 00:27:42.706144 containerd[1476]: time="2026-01-17T00:27:42.702349399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 17 00:27:42.706144 containerd[1476]: time="2026-01-17T00:27:42.702370191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 17 00:27:42.706144 containerd[1476]: time="2026-01-17T00:27:42.702475964Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 17 00:27:42.706144 containerd[1476]: time="2026-01-17T00:27:42.702519063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..."
type=io.containerd.grpc.v1
Jan 17 00:27:42.706144 containerd[1476]: time="2026-01-17T00:27:42.702578609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 17 00:27:42.706144 containerd[1476]: time="2026-01-17T00:27:42.702669395Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 17 00:27:42.706144 containerd[1476]: time="2026-01-17T00:27:42.703012365Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 17 00:27:42.706144 containerd[1476]: time="2026-01-17T00:27:42.703043780Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 17 00:27:42.706144 containerd[1476]: time="2026-01-17T00:27:42.703059416Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 17 00:27:42.706578 containerd[1476]: time="2026-01-17T00:27:42.703135741Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 17 00:27:42.706578 containerd[1476]: time="2026-01-17T00:27:42.703177892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 17 00:27:42.706578 containerd[1476]: time="2026-01-17T00:27:42.703195771Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 17 00:27:42.706578 containerd[1476]: time="2026-01-17T00:27:42.703210676Z" level=info msg="NRI interface is disabled by configuration."
Jan 17 00:27:42.706578 containerd[1476]: time="2026-01-17T00:27:42.703252476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 Jan 17 00:27:42.706926 containerd[1476]: time="2026-01-17T00:27:42.704125446Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:27:42.706926 containerd[1476]: time="2026-01-17T00:27:42.704256994Z" level=info msg="Connect containerd service" Jan 17 00:27:42.706926 containerd[1476]: time="2026-01-17T00:27:42.704429272Z" level=info msg="using legacy CRI server" Jan 17 00:27:42.706926 containerd[1476]: time="2026-01-17T00:27:42.704444290Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:27:42.706926 containerd[1476]: time="2026-01-17T00:27:42.705085044Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:27:42.725574 containerd[1476]: time="2026-01-17T00:27:42.725038112Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:27:42.727826 containerd[1476]: time="2026-01-17T00:27:42.726629292Z" level=info msg="Start subscribing containerd event" Jan 17 00:27:42.727826 containerd[1476]: time="2026-01-17T00:27:42.726878389Z" level=info msg="Start recovering state" Jan 17 00:27:42.727826 containerd[1476]: time="2026-01-17T00:27:42.727082525Z" level=info msg="Start event monitor" Jan 17 00:27:42.727826 containerd[1476]: time="2026-01-17T00:27:42.727157120Z" level=info msg="Start 
snapshots syncer" Jan 17 00:27:42.727826 containerd[1476]: time="2026-01-17T00:27:42.727220805Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:27:42.727826 containerd[1476]: time="2026-01-17T00:27:42.727334431Z" level=info msg="Start streaming server" Jan 17 00:27:42.739368 containerd[1476]: time="2026-01-17T00:27:42.739241225Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:27:42.739501 containerd[1476]: time="2026-01-17T00:27:42.739413936Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:27:42.739864 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:27:42.740464 containerd[1476]: time="2026-01-17T00:27:42.740099228Z" level=info msg="containerd successfully booted in 0.550503s" Jan 17 00:27:43.826844 tar[1457]: linux-amd64/README.md Jan 17 00:27:43.941660 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:27:48.519010 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:27:48.563407 systemd[1]: Started sshd@0-10.0.0.67:22-10.0.0.1:55958.service - OpenSSH per-connection server daemon (10.0.0.1:55958). Jan 17 00:27:49.168122 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:27:49.180707 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:27:49.187102 systemd[1]: Startup finished in 10.179s (kernel) + 19.682s (initrd) + 29.692s (userspace) = 59.555s. 
Jan 17 00:27:49.198178 (kubelet)[1552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:27:49.592669 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 55958 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:27:49.612724 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:49.649396 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:27:50.137162 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:27:50.144226 systemd-logind[1444]: New session 1 of user core. Jan 17 00:27:50.437476 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:27:50.454209 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:27:50.541024 (systemd)[1556]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:27:51.935483 systemd[1556]: Queued start job for default target default.target. Jan 17 00:27:51.971466 systemd[1556]: Created slice app.slice - User Application Slice. Jan 17 00:27:51.971552 systemd[1556]: Reached target paths.target - Paths. Jan 17 00:27:51.971572 systemd[1556]: Reached target timers.target - Timers. Jan 17 00:27:52.018876 systemd[1556]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:27:52.134915 systemd[1556]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:27:52.136433 systemd[1556]: Reached target sockets.target - Sockets. Jan 17 00:27:52.136464 systemd[1556]: Reached target basic.target - Basic System. Jan 17 00:27:52.136605 systemd[1556]: Reached target default.target - Main User Target. Jan 17 00:27:52.136682 systemd[1556]: Startup finished in 1.545s. Jan 17 00:27:52.137223 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jan 17 00:27:52.165148 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:27:52.329056 systemd[1]: Started sshd@1-10.0.0.67:22-10.0.0.1:55964.service - OpenSSH per-connection server daemon (10.0.0.1:55964). Jan 17 00:27:52.536216 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 55964 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:27:52.548940 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:52.582676 systemd-logind[1444]: New session 2 of user core. Jan 17 00:27:52.621322 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:27:53.144432 sshd[1573]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:53.171417 systemd[1]: sshd@1-10.0.0.67:22-10.0.0.1:55964.service: Deactivated successfully. Jan 17 00:27:53.187604 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:27:53.212289 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:27:53.478570 systemd[1]: Started sshd@2-10.0.0.67:22-10.0.0.1:43444.service - OpenSSH per-connection server daemon (10.0.0.1:43444). Jan 17 00:27:53.488321 systemd-logind[1444]: Removed session 2. Jan 17 00:27:53.573982 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 43444 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:27:53.581871 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:53.616181 systemd-logind[1444]: New session 3 of user core. Jan 17 00:27:53.634156 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:27:53.820192 sshd[1580]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:53.850480 systemd[1]: sshd@2-10.0.0.67:22-10.0.0.1:43444.service: Deactivated successfully. Jan 17 00:27:53.855633 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:27:53.860884 systemd-logind[1444]: Session 3 logged out. 
Waiting for processes to exit. Jan 17 00:27:53.882141 systemd[1]: Started sshd@3-10.0.0.67:22-10.0.0.1:43448.service - OpenSSH per-connection server daemon (10.0.0.1:43448). Jan 17 00:27:53.884171 systemd-logind[1444]: Removed session 3. Jan 17 00:27:54.103936 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 43448 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:27:54.110081 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:54.132347 systemd-logind[1444]: New session 4 of user core. Jan 17 00:27:54.151189 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:27:54.362297 sshd[1587]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:54.392399 systemd[1]: sshd@3-10.0.0.67:22-10.0.0.1:43448.service: Deactivated successfully. Jan 17 00:27:54.395018 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:27:54.413174 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:27:54.427242 systemd[1]: Started sshd@4-10.0.0.67:22-10.0.0.1:43452.service - OpenSSH per-connection server daemon (10.0.0.1:43452). Jan 17 00:27:54.432330 systemd-logind[1444]: Removed session 4. Jan 17 00:27:54.559604 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 43452 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:27:54.569958 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:54.590915 systemd-logind[1444]: New session 5 of user core. Jan 17 00:27:54.603205 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 17 00:27:55.393701 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:27:55.448448 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:27:55.986567 kubelet[1552]: E0117 00:27:55.985147 1552 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:27:55.995336 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:27:55.995647 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:27:55.997895 systemd[1]: kubelet.service: Consumed 9.647s CPU time. Jan 17 00:28:04.719188 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:28:04.726153 (dockerd)[1617]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:28:06.358688 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:28:06.395162 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:28:09.607975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:28:09.620041 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:28:09.694879 dockerd[1617]: time="2026-01-17T00:28:09.694434523Z" level=info msg="Starting up" Jan 17 00:28:09.879666 kubelet[1634]: E0117 00:28:09.874398 1634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:28:09.892425 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:28:09.892817 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:28:09.893484 systemd[1]: kubelet.service: Consumed 1.636s CPU time. Jan 17 00:28:10.186224 dockerd[1617]: time="2026-01-17T00:28:10.183688034Z" level=info msg="Loading containers: start." Jan 17 00:28:10.628182 kernel: Initializing XFRM netlink socket Jan 17 00:28:10.962321 systemd-networkd[1383]: docker0: Link UP Jan 17 00:28:11.024219 dockerd[1617]: time="2026-01-17T00:28:11.024045116Z" level=info msg="Loading containers: done." Jan 17 00:28:11.084613 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck861302220-merged.mount: Deactivated successfully. 
Jan 17 00:28:11.090877 dockerd[1617]: time="2026-01-17T00:28:11.089611638Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:28:11.090877 dockerd[1617]: time="2026-01-17T00:28:11.089867791Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:28:11.090877 dockerd[1617]: time="2026-01-17T00:28:11.090170101Z" level=info msg="Daemon has completed initialization" Jan 17 00:28:11.199464 dockerd[1617]: time="2026-01-17T00:28:11.198013971Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:28:11.199598 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:28:12.655246 containerd[1476]: time="2026-01-17T00:28:12.655141288Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 17 00:28:13.467709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1067961347.mount: Deactivated successfully. 
Jan 17 00:28:16.200009 containerd[1476]: time="2026-01-17T00:28:16.199151973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:16.203558 containerd[1476]: time="2026-01-17T00:28:16.203311237Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073" Jan 17 00:28:16.208340 containerd[1476]: time="2026-01-17T00:28:16.208247976Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:16.219271 containerd[1476]: time="2026-01-17T00:28:16.218812997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:16.220613 containerd[1476]: time="2026-01-17T00:28:16.220406437Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 3.565168567s" Jan 17 00:28:16.220613 containerd[1476]: time="2026-01-17T00:28:16.220483895Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 17 00:28:16.223247 containerd[1476]: time="2026-01-17T00:28:16.223150441Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 17 00:28:20.208843 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 17 00:28:23.089150 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:28:24.634320 update_engine[1446]: I20260117 00:28:24.633260 1446 update_attempter.cc:509] Updating boot flags... Jan 17 00:28:24.731249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:28:24.744238 (kubelet)[1854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:28:24.888406 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1865) Jan 17 00:28:24.976613 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1864) Jan 17 00:28:25.057517 kubelet[1854]: E0117 00:28:25.056957 1854 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:28:25.070173 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:28:25.070519 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:28:25.071141 systemd[1]: kubelet.service: Consumed 1.583s CPU time. 
Jan 17 00:28:25.150155 containerd[1476]: time="2026-01-17T00:28:25.149685127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:25.152392 containerd[1476]: time="2026-01-17T00:28:25.152040060Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440" Jan 17 00:28:25.156448 containerd[1476]: time="2026-01-17T00:28:25.156082208Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:25.162777 containerd[1476]: time="2026-01-17T00:28:25.162476309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:25.166186 containerd[1476]: time="2026-01-17T00:28:25.165556626Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 8.942135496s" Jan 17 00:28:25.166186 containerd[1476]: time="2026-01-17T00:28:25.165818579Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 17 00:28:25.173889 containerd[1476]: time="2026-01-17T00:28:25.173851909Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 17 00:28:26.611476 containerd[1476]: time="2026-01-17T00:28:26.611344220Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:26.614917 containerd[1476]: time="2026-01-17T00:28:26.614010889Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927" Jan 17 00:28:26.616534 containerd[1476]: time="2026-01-17T00:28:26.616478411Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:26.621928 containerd[1476]: time="2026-01-17T00:28:26.621611776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:26.623540 containerd[1476]: time="2026-01-17T00:28:26.623439704Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.449391064s" Jan 17 00:28:26.623540 containerd[1476]: time="2026-01-17T00:28:26.623525611Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 17 00:28:26.626201 containerd[1476]: time="2026-01-17T00:28:26.626132921Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 17 00:28:28.086423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4275264356.mount: Deactivated successfully. 
Jan 17 00:28:29.662913 containerd[1476]: time="2026-01-17T00:28:29.661331903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:29.666571 containerd[1476]: time="2026-01-17T00:28:29.666260715Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Jan 17 00:28:29.671241 containerd[1476]: time="2026-01-17T00:28:29.669470818Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:29.674208 containerd[1476]: time="2026-01-17T00:28:29.674150608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:29.677239 containerd[1476]: time="2026-01-17T00:28:29.677157145Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 3.05094319s" Jan 17 00:28:29.677307 containerd[1476]: time="2026-01-17T00:28:29.677246134Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 17 00:28:29.679363 containerd[1476]: time="2026-01-17T00:28:29.679122888Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 17 00:28:30.329964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount648689795.mount: Deactivated successfully. 
Jan 17 00:28:32.653320 containerd[1476]: time="2026-01-17T00:28:32.653133602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:32.658300 containerd[1476]: time="2026-01-17T00:28:32.658245224Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Jan 17 00:28:32.662612 containerd[1476]: time="2026-01-17T00:28:32.661445874Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:32.677306 containerd[1476]: time="2026-01-17T00:28:32.676955432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:32.678827 containerd[1476]: time="2026-01-17T00:28:32.678485354Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.999319711s" Jan 17 00:28:32.678827 containerd[1476]: time="2026-01-17T00:28:32.678591987Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 17 00:28:32.682875 containerd[1476]: time="2026-01-17T00:28:32.682646551Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 17 00:28:33.296865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1748159156.mount: Deactivated successfully. 
Jan 17 00:28:33.315025 containerd[1476]: time="2026-01-17T00:28:33.314704073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:33.318535 containerd[1476]: time="2026-01-17T00:28:33.318319130Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Jan 17 00:28:33.322395 containerd[1476]: time="2026-01-17T00:28:33.321830128Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:33.334563 containerd[1476]: time="2026-01-17T00:28:33.333593367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:33.339840 containerd[1476]: time="2026-01-17T00:28:33.339578776Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 656.691199ms" Jan 17 00:28:33.339840 containerd[1476]: time="2026-01-17T00:28:33.339632673Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 17 00:28:33.341060 containerd[1476]: time="2026-01-17T00:28:33.340897506Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 17 00:28:34.016642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4265653211.mount: Deactivated successfully. Jan 17 00:28:35.087628 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jan 17 00:28:35.100288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:28:35.478130 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:28:35.496362 (kubelet)[1998]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:28:35.784162 kubelet[1998]: E0117 00:28:35.783885 1998 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:28:35.803927 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:28:35.804310 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:28:45.876874 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 17 00:28:45.905540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:28:47.405004 (kubelet)[2017]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:28:47.407554 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:28:48.415091 kubelet[2017]: E0117 00:28:48.414191 2017 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:28:48.427666 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:28:48.428997 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 00:28:48.430108 systemd[1]: kubelet.service: Consumed 1.881s CPU time. Jan 17 00:28:50.593522 containerd[1476]: time="2026-01-17T00:28:50.593174566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:50.600848 containerd[1476]: time="2026-01-17T00:28:50.600056919Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Jan 17 00:28:50.606326 containerd[1476]: time="2026-01-17T00:28:50.605156402Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:50.614854 containerd[1476]: time="2026-01-17T00:28:50.614644786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:50.616474 containerd[1476]: time="2026-01-17T00:28:50.616189017Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 17.275185642s" Jan 17 00:28:50.616474 containerd[1476]: time="2026-01-17T00:28:50.616458416Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 17 00:28:58.591942 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 17 00:28:58.623711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:28:59.533353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:28:59.538255 (kubelet)[2059]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:28:59.947458 kubelet[2059]: E0117 00:28:59.942334 2059 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:29:00.004487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:29:00.005387 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:29:00.759646 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:29:00.800212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:29:00.956243 systemd[1]: Reloading requested from client PID 2074 ('systemctl') (unit session-5.scope)... Jan 17 00:29:00.957290 systemd[1]: Reloading... Jan 17 00:29:01.210158 zram_generator::config[2113]: No configuration found. Jan 17 00:29:01.571816 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:29:01.751331 systemd[1]: Reloading finished in 791 ms. Jan 17 00:29:01.960686 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:29:01.980180 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:29:01.980621 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:29:02.011305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:29:02.722417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:29:02.727371 (kubelet)[2163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:29:03.213308 kubelet[2163]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:29:03.213308 kubelet[2163]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:29:03.214497 kubelet[2163]: I0117 00:29:03.214132 2163 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:29:05.740406 kubelet[2163]: I0117 00:29:05.739188 2163 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 17 00:29:05.740406 kubelet[2163]: I0117 00:29:05.739272 2163 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:29:05.740406 kubelet[2163]: I0117 00:29:05.739991 2163 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 17 00:29:05.740406 kubelet[2163]: I0117 00:29:05.740124 2163 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 17 00:29:05.743248 kubelet[2163]: I0117 00:29:05.742286 2163 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:29:06.233155 kubelet[2163]: E0117 00:29:06.230585 2163 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:29:06.245846 kubelet[2163]: I0117 00:29:06.242409 2163 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:29:06.293310 kubelet[2163]: E0117 00:29:06.293072 2163 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:29:06.293310 kubelet[2163]: I0117 00:29:06.293276 2163 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 17 00:29:06.319174 kubelet[2163]: I0117 00:29:06.318589 2163 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 17 00:29:06.322438 kubelet[2163]: I0117 00:29:06.319937 2163 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:29:06.322438 kubelet[2163]: I0117 00:29:06.319975 2163 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:29:06.322438 kubelet[2163]: I0117 00:29:06.321662 2163 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:29:06.322438 
kubelet[2163]: I0117 00:29:06.321694 2163 container_manager_linux.go:306] "Creating device plugin manager" Jan 17 00:29:06.333925 kubelet[2163]: I0117 00:29:06.322270 2163 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 17 00:29:06.403000 kubelet[2163]: I0117 00:29:06.401462 2163 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:29:06.460124 kubelet[2163]: I0117 00:29:06.453434 2163 kubelet.go:475] "Attempting to sync node with API server" Jan 17 00:29:06.460124 kubelet[2163]: I0117 00:29:06.457454 2163 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:29:06.460124 kubelet[2163]: I0117 00:29:06.458547 2163 kubelet.go:387] "Adding apiserver pod source" Jan 17 00:29:06.460124 kubelet[2163]: I0117 00:29:06.460267 2163 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:29:06.477981 kubelet[2163]: E0117 00:29:06.477618 2163 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:29:06.495464 kubelet[2163]: E0117 00:29:06.486909 2163 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:29:06.526585 kubelet[2163]: I0117 00:29:06.526019 2163 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:29:06.536810 kubelet[2163]: I0117 00:29:06.534396 2163 kubelet.go:940] "Not starting ClusterTrustBundle informer 
because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:29:06.536810 kubelet[2163]: I0117 00:29:06.534451 2163 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 17 00:29:06.539906 kubelet[2163]: W0117 00:29:06.537234 2163 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:29:06.646981 kubelet[2163]: I0117 00:29:06.640318 2163 server.go:1262] "Started kubelet" Jan 17 00:29:06.646981 kubelet[2163]: I0117 00:29:06.642094 2163 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:29:06.650856 kubelet[2163]: I0117 00:29:06.648931 2163 server.go:310] "Adding debug handlers to kubelet server" Jan 17 00:29:06.650856 kubelet[2163]: I0117 00:29:06.649971 2163 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:29:06.659127 kubelet[2163]: I0117 00:29:06.653315 2163 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:29:06.659127 kubelet[2163]: I0117 00:29:06.653447 2163 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 17 00:29:06.663531 kubelet[2163]: I0117 00:29:06.660252 2163 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:29:06.673928 kubelet[2163]: I0117 00:29:06.671324 2163 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 17 00:29:06.673928 kubelet[2163]: E0117 00:29:06.672912 2163 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:29:06.679835 kubelet[2163]: I0117 00:29:06.677210 2163 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 00:29:06.679835 kubelet[2163]: I0117 00:29:06.671384 2163 dynamic_serving_content.go:135] 
"Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:29:06.679835 kubelet[2163]: I0117 00:29:06.679130 2163 reconciler.go:29] "Reconciler: start to sync state" Jan 17 00:29:06.680486 kubelet[2163]: E0117 00:29:06.670193 2163 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.67:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.67:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188b5d31aebe005c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:29:06.639519836 +0000 UTC m=+3.873144847,LastTimestamp:2026-01-17 00:29:06.639519836 +0000 UTC m=+3.873144847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:29:06.681281 kubelet[2163]: E0117 00:29:06.681231 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="200ms" Jan 17 00:29:06.685865 kubelet[2163]: E0117 00:29:06.685348 2163 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:29:06.687797 kubelet[2163]: I0117 00:29:06.687497 2163 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:29:06.688030 kubelet[2163]: 
I0117 00:29:06.687982 2163 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:29:06.693348 kubelet[2163]: I0117 00:29:06.693324 2163 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:29:06.711531 kubelet[2163]: E0117 00:29:06.711493 2163 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:29:06.775101 kubelet[2163]: E0117 00:29:06.774924 2163 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:29:06.922577 kubelet[2163]: E0117 00:29:06.914261 2163 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:29:06.930499 kubelet[2163]: E0117 00:29:06.927306 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="400ms" Jan 17 00:29:06.981027 kubelet[2163]: I0117 00:29:06.980930 2163 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:29:06.981027 kubelet[2163]: I0117 00:29:06.980964 2163 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:29:06.981027 kubelet[2163]: I0117 00:29:06.981036 2163 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:29:06.991242 kubelet[2163]: I0117 00:29:06.989926 2163 policy_none.go:49] "None policy: Start" Jan 17 00:29:06.991242 kubelet[2163]: I0117 00:29:06.990096 2163 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 17 00:29:06.991242 kubelet[2163]: I0117 00:29:06.990187 2163 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" 
Jan 17 00:29:06.992131 kubelet[2163]: I0117 00:29:06.992063 2163 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 17 00:29:06.998419 kubelet[2163]: I0117 00:29:06.998281 2163 policy_none.go:47] "Start" Jan 17 00:29:06.999304 kubelet[2163]: I0117 00:29:06.999203 2163 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 17 00:29:06.999556 kubelet[2163]: I0117 00:29:06.999489 2163 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 17 00:29:06.999865 kubelet[2163]: I0117 00:29:06.999814 2163 kubelet.go:2427] "Starting kubelet main sync loop" Jan 17 00:29:07.000074 kubelet[2163]: E0117 00:29:06.999994 2163 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:29:07.003108 kubelet[2163]: E0117 00:29:07.002976 2163 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:29:07.015254 kubelet[2163]: E0117 00:29:07.014891 2163 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:29:07.021538 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:29:07.176020 kubelet[2163]: E0117 00:29:07.173378 2163 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 00:29:07.180165 kubelet[2163]: E0117 00:29:07.179580 2163 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:29:07.207117 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 17 00:29:07.270139 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:29:07.277256 kubelet[2163]: E0117 00:29:07.274654 2163 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:29:07.277256 kubelet[2163]: I0117 00:29:07.275525 2163 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:29:07.277256 kubelet[2163]: I0117 00:29:07.275546 2163 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:29:07.283147 kubelet[2163]: E0117 00:29:07.283102 2163 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:29:07.285243 kubelet[2163]: I0117 00:29:07.284347 2163 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:29:07.293427 kubelet[2163]: E0117 00:29:07.293357 2163 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:29:07.299060 kubelet[2163]: E0117 00:29:07.299016 2163 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 00:29:07.330137 kubelet[2163]: E0117 00:29:07.329687 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="800ms" Jan 17 00:29:07.423671 kubelet[2163]: I0117 00:29:07.422300 2163 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:29:07.427003 kubelet[2163]: E0117 00:29:07.424420 2163 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Jan 17 00:29:07.499014 kubelet[2163]: I0117 00:29:07.498581 2163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:29:07.499014 kubelet[2163]: I0117 00:29:07.498997 2163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:29:07.500059 kubelet[2163]: I0117 00:29:07.499947 2163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:29:07.500637 kubelet[2163]: I0117 00:29:07.500060 2163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:29:07.500637 kubelet[2163]: I0117 00:29:07.500353 2163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8925a11efe1850724b5640e797c5050-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8925a11efe1850724b5640e797c5050\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:29:07.500637 kubelet[2163]: I0117 00:29:07.500380 2163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8925a11efe1850724b5640e797c5050-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8925a11efe1850724b5640e797c5050\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:29:07.500400 systemd[1]: Created slice kubepods-burstable-poda8925a11efe1850724b5640e797c5050.slice - libcontainer container kubepods-burstable-poda8925a11efe1850724b5640e797c5050.slice. 
Jan 17 00:29:07.503078 kubelet[2163]: I0117 00:29:07.500434 2163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8925a11efe1850724b5640e797c5050-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a8925a11efe1850724b5640e797c5050\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:29:07.503078 kubelet[2163]: I0117 00:29:07.503045 2163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:29:07.521958 kubelet[2163]: E0117 00:29:07.521553 2163 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:29:07.533506 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Jan 17 00:29:07.551840 kubelet[2163]: E0117 00:29:07.551562 2163 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:29:07.564038 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. 
Jan 17 00:29:07.567400 kubelet[2163]: E0117 00:29:07.567314 2163 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:29:07.622998 kubelet[2163]: I0117 00:29:07.622304 2163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 17 00:29:07.716328 kubelet[2163]: I0117 00:29:07.712217 2163 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:29:07.716328 kubelet[2163]: E0117 00:29:07.714622 2163 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Jan 17 00:29:07.785134 kubelet[2163]: E0117 00:29:07.779183 2163 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:29:07.898468 kubelet[2163]: E0117 00:29:07.896364 2163 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:07.920398 containerd[1476]: time="2026-01-17T00:29:07.919633886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a8925a11efe1850724b5640e797c5050,Namespace:kube-system,Attempt:0,}" Jan 17 00:29:07.943083 kubelet[2163]: E0117 00:29:07.920442 2163 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:07.948973 containerd[1476]: time="2026-01-17T00:29:07.947998040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 17 00:29:07.957040 kubelet[2163]: E0117 00:29:07.954637 2163 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:07.957134 kubelet[2163]: E0117 00:29:07.957070 2163 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:29:07.957687 containerd[1476]: time="2026-01-17T00:29:07.957337731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 17 00:29:07.987529 kubelet[2163]: E0117 00:29:07.986139 2163 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:29:08.119910 kubelet[2163]: I0117 00:29:08.118491 2163 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:29:08.119910 kubelet[2163]: E0117 00:29:08.119441 2163 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" 
Jan 17 00:29:08.131552 kubelet[2163]: E0117 00:29:08.131191 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="1.6s" Jan 17 00:29:08.326496 kubelet[2163]: E0117 00:29:08.324128 2163 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:29:08.524330 kubelet[2163]: E0117 00:29:08.524214 2163 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:29:08.832457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount975336565.mount: Deactivated successfully. 
Jan 17 00:29:08.874889 containerd[1476]: time="2026-01-17T00:29:08.871643745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:29:08.887233 containerd[1476]: time="2026-01-17T00:29:08.885384473Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 00:29:08.888874 containerd[1476]: time="2026-01-17T00:29:08.888448388Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:29:08.894558 containerd[1476]: time="2026-01-17T00:29:08.892458450Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:29:08.899924 containerd[1476]: time="2026-01-17T00:29:08.899282331Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:29:08.903200 containerd[1476]: time="2026-01-17T00:29:08.902309654Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:29:08.908300 containerd[1476]: time="2026-01-17T00:29:08.908230311Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:29:08.918532 containerd[1476]: time="2026-01-17T00:29:08.915900918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:29:08.918532 
containerd[1476]: time="2026-01-17T00:29:08.917828783Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 997.458316ms" Jan 17 00:29:08.921902 containerd[1476]: time="2026-01-17T00:29:08.921868157Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 973.617835ms" Jan 17 00:29:08.929536 kubelet[2163]: I0117 00:29:08.929264 2163 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:29:08.931140 kubelet[2163]: E0117 00:29:08.930221 2163 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Jan 17 00:29:08.986693 containerd[1476]: time="2026-01-17T00:29:08.985618974Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.028202587s" Jan 17 00:29:09.740217 kubelet[2163]: E0117 00:29:09.739328 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="3.2s" Jan 17 00:29:10.100633 containerd[1476]: 
time="2026-01-17T00:29:10.095448757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:29:10.100633 containerd[1476]: time="2026-01-17T00:29:10.097563826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:29:10.100633 containerd[1476]: time="2026-01-17T00:29:10.097609999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:10.100633 containerd[1476]: time="2026-01-17T00:29:10.097978232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:10.139062 containerd[1476]: time="2026-01-17T00:29:10.138377625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:29:10.139449 containerd[1476]: time="2026-01-17T00:29:10.139168751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:29:10.139449 containerd[1476]: time="2026-01-17T00:29:10.139261506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:10.598677 kubelet[2163]: E0117 00:29:10.595355 2163 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:29:10.598677 kubelet[2163]: E0117 00:29:10.598155 2163 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.67:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.67:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188b5d31aebe005c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:29:06.639519836 +0000 UTC m=+3.873144847,LastTimestamp:2026-01-17 00:29:06.639519836 +0000 UTC m=+3.873144847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:29:10.605811 kubelet[2163]: I0117 00:29:10.604020 2163 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:29:10.605811 kubelet[2163]: E0117 00:29:10.604980 2163 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Jan 17 00:29:10.606694 containerd[1476]: time="2026-01-17T00:29:10.155693349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:10.624340 kubelet[2163]: E0117 00:29:10.624112 2163 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:29:10.641230 containerd[1476]: time="2026-01-17T00:29:10.639930723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:29:10.641230 containerd[1476]: time="2026-01-17T00:29:10.640532573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:29:10.641230 containerd[1476]: time="2026-01-17T00:29:10.640672262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:10.643126 containerd[1476]: time="2026-01-17T00:29:10.642882352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:11.045381 kubelet[2163]: E0117 00:29:11.043674 2163 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:29:11.096374 systemd[1]: Started cri-containerd-eeb66ba6d2262010bdf17f03779cb8b41c3be605fd42385fca523d924010c959.scope - libcontainer container eeb66ba6d2262010bdf17f03779cb8b41c3be605fd42385fca523d924010c959. 
Jan 17 00:29:11.121143 systemd[1]: run-containerd-runc-k8s.io-3434b9449b7a93b07714db1dc069b84797b4a141c2cb7f86797ecce6aeb07a29-runc.KDSND1.mount: Deactivated successfully. Jan 17 00:29:11.137140 systemd[1]: Started cri-containerd-3434b9449b7a93b07714db1dc069b84797b4a141c2cb7f86797ecce6aeb07a29.scope - libcontainer container 3434b9449b7a93b07714db1dc069b84797b4a141c2cb7f86797ecce6aeb07a29. Jan 17 00:29:11.356939 kubelet[2163]: E0117 00:29:11.356526 2163 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:29:11.364153 systemd[1]: Started cri-containerd-45df78f7c09f222748e0fb80458fae025e6d428a88866a658467e83230de5755.scope - libcontainer container 45df78f7c09f222748e0fb80458fae025e6d428a88866a658467e83230de5755. 
Jan 17 00:29:12.128151 containerd[1476]: time="2026-01-17T00:29:12.127628302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"3434b9449b7a93b07714db1dc069b84797b4a141c2cb7f86797ecce6aeb07a29\"" Jan 17 00:29:12.134885 kubelet[2163]: E0117 00:29:12.134540 2163 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:12.143344 containerd[1476]: time="2026-01-17T00:29:12.143286297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"eeb66ba6d2262010bdf17f03779cb8b41c3be605fd42385fca523d924010c959\"" Jan 17 00:29:12.146429 kubelet[2163]: E0117 00:29:12.146188 2163 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:12.188471 containerd[1476]: time="2026-01-17T00:29:12.188225842Z" level=info msg="CreateContainer within sandbox \"3434b9449b7a93b07714db1dc069b84797b4a141c2cb7f86797ecce6aeb07a29\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:29:12.191305 containerd[1476]: time="2026-01-17T00:29:12.191049934Z" level=info msg="CreateContainer within sandbox \"eeb66ba6d2262010bdf17f03779cb8b41c3be605fd42385fca523d924010c959\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:29:12.202675 containerd[1476]: time="2026-01-17T00:29:12.202559350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a8925a11efe1850724b5640e797c5050,Namespace:kube-system,Attempt:0,} returns sandbox id \"45df78f7c09f222748e0fb80458fae025e6d428a88866a658467e83230de5755\"" Jan 17 
00:29:12.211315 kubelet[2163]: E0117 00:29:12.210996 2163 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:12.241604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3791332785.mount: Deactivated successfully. Jan 17 00:29:12.243312 containerd[1476]: time="2026-01-17T00:29:12.242035776Z" level=info msg="CreateContainer within sandbox \"45df78f7c09f222748e0fb80458fae025e6d428a88866a658467e83230de5755\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:29:12.260263 containerd[1476]: time="2026-01-17T00:29:12.260204169Z" level=info msg="CreateContainer within sandbox \"3434b9449b7a93b07714db1dc069b84797b4a141c2cb7f86797ecce6aeb07a29\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b290f0053fb360cd9823504f3386c4a9f10ca93c22473736525fe531d61fa7e0\"" Jan 17 00:29:12.431263 containerd[1476]: time="2026-01-17T00:29:12.428121831Z" level=info msg="StartContainer for \"b290f0053fb360cd9823504f3386c4a9f10ca93c22473736525fe531d61fa7e0\"" Jan 17 00:29:12.429316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2966792636.mount: Deactivated successfully. 
Jan 17 00:29:12.475218 containerd[1476]: time="2026-01-17T00:29:12.474058094Z" level=info msg="CreateContainer within sandbox \"eeb66ba6d2262010bdf17f03779cb8b41c3be605fd42385fca523d924010c959\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"34be027a703b50bdf5ef72a3c9acbf84a65c48f4cf95a4b1722c9cf21a1041f9\"" Jan 17 00:29:12.479837 containerd[1476]: time="2026-01-17T00:29:12.478047416Z" level=info msg="StartContainer for \"34be027a703b50bdf5ef72a3c9acbf84a65c48f4cf95a4b1722c9cf21a1041f9\"" Jan 17 00:29:12.492780 containerd[1476]: time="2026-01-17T00:29:12.492067518Z" level=info msg="CreateContainer within sandbox \"45df78f7c09f222748e0fb80458fae025e6d428a88866a658467e83230de5755\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6d5787f3e848689bfcfe86811526c7e6a0a1eecd3d0d26ff66e16a9726ce5139\"" Jan 17 00:29:12.496710 containerd[1476]: time="2026-01-17T00:29:12.495168612Z" level=info msg="StartContainer for \"6d5787f3e848689bfcfe86811526c7e6a0a1eecd3d0d26ff66e16a9726ce5139\"" Jan 17 00:29:12.580362 systemd[1]: Started cri-containerd-b290f0053fb360cd9823504f3386c4a9f10ca93c22473736525fe531d61fa7e0.scope - libcontainer container b290f0053fb360cd9823504f3386c4a9f10ca93c22473736525fe531d61fa7e0. Jan 17 00:29:12.611079 systemd[1]: Started cri-containerd-34be027a703b50bdf5ef72a3c9acbf84a65c48f4cf95a4b1722c9cf21a1041f9.scope - libcontainer container 34be027a703b50bdf5ef72a3c9acbf84a65c48f4cf95a4b1722c9cf21a1041f9. Jan 17 00:29:12.650102 systemd[1]: Started cri-containerd-6d5787f3e848689bfcfe86811526c7e6a0a1eecd3d0d26ff66e16a9726ce5139.scope - libcontainer container 6d5787f3e848689bfcfe86811526c7e6a0a1eecd3d0d26ff66e16a9726ce5139. 
Jan 17 00:29:12.732998 kubelet[2163]: E0117 00:29:12.728533 2163 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:29:12.941868 kubelet[2163]: E0117 00:29:12.940554 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="6.4s" Jan 17 00:29:12.979026 containerd[1476]: time="2026-01-17T00:29:12.978981501Z" level=info msg="StartContainer for \"6d5787f3e848689bfcfe86811526c7e6a0a1eecd3d0d26ff66e16a9726ce5139\" returns successfully" Jan 17 00:29:13.018671 containerd[1476]: time="2026-01-17T00:29:13.017380544Z" level=info msg="StartContainer for \"34be027a703b50bdf5ef72a3c9acbf84a65c48f4cf95a4b1722c9cf21a1041f9\" returns successfully" Jan 17 00:29:13.035461 containerd[1476]: time="2026-01-17T00:29:13.035236057Z" level=info msg="StartContainer for \"b290f0053fb360cd9823504f3386c4a9f10ca93c22473736525fe531d61fa7e0\" returns successfully" Jan 17 00:29:13.297979 kubelet[2163]: E0117 00:29:13.296715 2163 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:29:13.297979 kubelet[2163]: E0117 00:29:13.297130 2163 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:13.306486 kubelet[2163]: E0117 00:29:13.305463 2163 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not 
found" node="localhost" Jan 17 00:29:13.306486 kubelet[2163]: E0117 00:29:13.305682 2163 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:13.315398 kubelet[2163]: E0117 00:29:13.314635 2163 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:29:13.316001 kubelet[2163]: E0117 00:29:13.315911 2163 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:13.924209 kubelet[2163]: I0117 00:29:13.923133 2163 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:29:14.452422 kubelet[2163]: E0117 00:29:14.451660 2163 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:29:14.452422 kubelet[2163]: E0117 00:29:14.452117 2163 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:14.465154 kubelet[2163]: E0117 00:29:14.455072 2163 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:29:14.465154 kubelet[2163]: E0117 00:29:14.455269 2163 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:14.465154 kubelet[2163]: E0117 00:29:14.463281 2163 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:29:14.465154 
kubelet[2163]: E0117 00:29:14.463443 2163 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:15.485919 kubelet[2163]: E0117 00:29:15.485518 2163 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:29:15.485919 kubelet[2163]: E0117 00:29:15.485856 2163 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:15.488333 kubelet[2163]: E0117 00:29:15.487012 2163 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:29:15.488333 kubelet[2163]: E0117 00:29:15.487128 2163 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:16.499992 kubelet[2163]: E0117 00:29:16.499576 2163 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:29:16.502022 kubelet[2163]: E0117 00:29:16.501592 2163 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:17.299674 kubelet[2163]: E0117 00:29:17.299251 2163 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 00:29:17.864223 kubelet[2163]: E0117 00:29:17.861566 2163 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 
00:29:17.864223 kubelet[2163]: E0117 00:29:17.861916 2163 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:19.229412 kubelet[2163]: I0117 00:29:19.229108 2163 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 17 00:29:19.229412 kubelet[2163]: E0117 00:29:19.229164 2163 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 17 00:29:19.276241 kubelet[2163]: I0117 00:29:19.275060 2163 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:29:19.321694 kubelet[2163]: E0117 00:29:19.321533 2163 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 17 00:29:19.321694 kubelet[2163]: I0117 00:29:19.321585 2163 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:29:19.356585 kubelet[2163]: E0117 00:29:19.354639 2163 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:29:19.356585 kubelet[2163]: I0117 00:29:19.354685 2163 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:29:19.369911 kubelet[2163]: E0117 00:29:19.369870 2163 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 17 00:29:19.611154 kubelet[2163]: I0117 00:29:19.607680 2163 apiserver.go:52] "Watching apiserver" 
Jan 17 00:29:19.681910 kubelet[2163]: I0117 00:29:19.680088 2163 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 00:29:24.801887 kubelet[2163]: I0117 00:29:24.796022 2163 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:29:24.825964 kubelet[2163]: E0117 00:29:24.825858 2163 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:24.923967 kubelet[2163]: E0117 00:29:24.923313 2163 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:24.957864 systemd[1]: Reloading requested from client PID 2455 ('systemctl') (unit session-5.scope)... Jan 17 00:29:24.957901 systemd[1]: Reloading... Jan 17 00:29:25.013542 kubelet[2163]: I0117 00:29:25.013022 2163 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:29:25.037314 kubelet[2163]: E0117 00:29:25.036877 2163 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:25.099698 kubelet[2163]: I0117 00:29:25.097444 2163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.097373638 podStartE2EDuration="1.097373638s" podCreationTimestamp="2026-01-17 00:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:29:25.071537372 +0000 UTC m=+22.305162413" watchObservedRunningTime="2026-01-17 00:29:25.097373638 +0000 UTC m=+22.330998649" Jan 17 00:29:25.223507 zram_generator::config[2494]: No configuration found. 
Jan 17 00:29:25.937555 kubelet[2163]: E0117 00:29:25.937361 2163 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:26.053375 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:29:26.451237 systemd[1]: Reloading finished in 1492 ms. Jan 17 00:29:26.628094 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:29:26.655908 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:29:26.656969 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:29:26.657575 systemd[1]: kubelet.service: Consumed 11.318s CPU time, 128.5M memory peak, 0B memory swap peak. Jan 17 00:29:26.685799 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:29:27.287651 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:29:27.301569 (kubelet)[2539]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:29:27.711884 kubelet[2539]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:29:27.711884 kubelet[2539]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 00:29:27.711884 kubelet[2539]: I0117 00:29:27.711541 2539 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:29:27.740686 kubelet[2539]: I0117 00:29:27.740580 2539 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 17 00:29:27.740686 kubelet[2539]: I0117 00:29:27.740667 2539 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:29:27.741224 kubelet[2539]: I0117 00:29:27.740794 2539 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 17 00:29:27.741224 kubelet[2539]: I0117 00:29:27.740810 2539 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:29:27.741224 kubelet[2539]: I0117 00:29:27.741190 2539 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:29:27.743640 kubelet[2539]: I0117 00:29:27.743576 2539 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 17 00:29:27.767411 kubelet[2539]: I0117 00:29:27.751917 2539 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:29:27.779410 kubelet[2539]: E0117 00:29:27.779262 2539 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:29:27.779977 kubelet[2539]: I0117 00:29:27.779715 2539 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 17 00:29:27.794797 kubelet[2539]: I0117 00:29:27.791864 2539 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 17 00:29:27.794797 kubelet[2539]: I0117 00:29:27.792164 2539 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:29:27.794797 kubelet[2539]: I0117 00:29:27.792312 2539 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:29:27.794797 kubelet[2539]: I0117 00:29:27.792549 2539 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:29:27.795602 
kubelet[2539]: I0117 00:29:27.792559 2539 container_manager_linux.go:306] "Creating device plugin manager" Jan 17 00:29:27.795602 kubelet[2539]: I0117 00:29:27.792586 2539 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 17 00:29:27.795602 kubelet[2539]: I0117 00:29:27.793558 2539 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:29:27.795602 kubelet[2539]: I0117 00:29:27.794032 2539 kubelet.go:475] "Attempting to sync node with API server" Jan 17 00:29:27.795602 kubelet[2539]: I0117 00:29:27.794067 2539 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:29:27.795602 kubelet[2539]: I0117 00:29:27.794105 2539 kubelet.go:387] "Adding apiserver pod source" Jan 17 00:29:27.795602 kubelet[2539]: I0117 00:29:27.794400 2539 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:29:27.798382 kubelet[2539]: I0117 00:29:27.798352 2539 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:29:27.800391 kubelet[2539]: I0117 00:29:27.800365 2539 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:29:27.800928 kubelet[2539]: I0117 00:29:27.800834 2539 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 17 00:29:27.817675 kubelet[2539]: I0117 00:29:27.817642 2539 server.go:1262] "Started kubelet" Jan 17 00:29:27.818176 kubelet[2539]: I0117 00:29:27.818141 2539 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:29:27.820528 kubelet[2539]: I0117 00:29:27.820397 2539 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:29:27.820716 kubelet[2539]: I0117 00:29:27.820679 2539 ratelimit.go:56] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Jan 17 00:29:27.820899 kubelet[2539]: I0117 00:29:27.820873 2539 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 17 00:29:27.821403 kubelet[2539]: I0117 00:29:27.821380 2539 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:29:27.823552 kubelet[2539]: I0117 00:29:27.823474 2539 server.go:310] "Adding debug handlers to kubelet server" Jan 17 00:29:27.827903 kubelet[2539]: I0117 00:29:27.827804 2539 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:29:27.829409 kubelet[2539]: I0117 00:29:27.829242 2539 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 17 00:29:27.830673 kubelet[2539]: I0117 00:29:27.830593 2539 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 00:29:27.831020 kubelet[2539]: I0117 00:29:27.830981 2539 reconciler.go:29] "Reconciler: start to sync state" Jan 17 00:29:27.836933 kubelet[2539]: I0117 00:29:27.836845 2539 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:29:27.839809 kubelet[2539]: I0117 00:29:27.839710 2539 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:29:27.840840 kubelet[2539]: E0117 00:29:27.840780 2539 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:29:27.845960 kubelet[2539]: I0117 00:29:27.845100 2539 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:29:28.021493 kubelet[2539]: I0117 00:29:28.020669 2539 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 17 00:29:28.034664 kubelet[2539]: I0117 00:29:28.032672 2539 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 17 00:29:28.034664 kubelet[2539]: I0117 00:29:28.032702 2539 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 17 00:29:28.034664 kubelet[2539]: I0117 00:29:28.032879 2539 kubelet.go:2427] "Starting kubelet main sync loop" Jan 17 00:29:28.034664 kubelet[2539]: E0117 00:29:28.032962 2539 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:29:28.134130 kubelet[2539]: E0117 00:29:28.133607 2539 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 00:29:28.144618 kubelet[2539]: I0117 00:29:28.144395 2539 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:29:28.145541 kubelet[2539]: I0117 00:29:28.144849 2539 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:29:28.146480 kubelet[2539]: I0117 00:29:28.146046 2539 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:29:28.146968 kubelet[2539]: I0117 00:29:28.146944 2539 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:29:28.147846 kubelet[2539]: I0117 00:29:28.147058 2539 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:29:28.147846 kubelet[2539]: I0117 00:29:28.147401 2539 policy_none.go:49] "None policy: Start" Jan 17 00:29:28.147846 kubelet[2539]: I0117 00:29:28.147426 2539 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 17 00:29:28.147846 kubelet[2539]: I0117 00:29:28.147449 2539 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 17 00:29:28.147846 kubelet[2539]: I0117 00:29:28.147661 2539 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 17 00:29:28.147846 
kubelet[2539]: I0117 00:29:28.147676 2539 policy_none.go:47] "Start" Jan 17 00:29:28.191459 kubelet[2539]: E0117 00:29:28.190672 2539 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:29:28.192706 kubelet[2539]: I0117 00:29:28.192677 2539 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:29:28.193835 kubelet[2539]: I0117 00:29:28.193570 2539 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:29:28.194450 kubelet[2539]: I0117 00:29:28.194236 2539 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:29:28.198543 kubelet[2539]: E0117 00:29:28.198389 2539 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:29:28.345054 kubelet[2539]: I0117 00:29:28.339672 2539 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:29:28.345054 kubelet[2539]: I0117 00:29:28.343858 2539 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:29:28.345054 kubelet[2539]: I0117 00:29:28.344865 2539 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:29:28.346880 kubelet[2539]: I0117 00:29:28.346858 2539 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:29:28.441045 kubelet[2539]: E0117 00:29:28.439005 2539 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 17 00:29:28.441045 kubelet[2539]: E0117 00:29:28.439860 2539 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 17 00:29:28.475577 
kubelet[2539]: I0117 00:29:28.475498 2539 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 17 00:29:28.477172 kubelet[2539]: I0117 00:29:28.477076 2539 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 17 00:29:28.575925 kubelet[2539]: I0117 00:29:28.574831 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 17 00:29:28.575925 kubelet[2539]: I0117 00:29:28.575090 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8925a11efe1850724b5640e797c5050-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8925a11efe1850724b5640e797c5050\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:29:28.575925 kubelet[2539]: I0117 00:29:28.575169 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8925a11efe1850724b5640e797c5050-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a8925a11efe1850724b5640e797c5050\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:29:28.575925 kubelet[2539]: I0117 00:29:28.575345 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:29:28.575925 kubelet[2539]: I0117 00:29:28.575470 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:29:28.581921 kubelet[2539]: I0117 00:29:28.575557 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:29:28.581921 kubelet[2539]: I0117 00:29:28.575584 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8925a11efe1850724b5640e797c5050-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8925a11efe1850724b5640e797c5050\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:29:28.581921 kubelet[2539]: I0117 00:29:28.575608 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:29:28.581921 kubelet[2539]: I0117 00:29:28.577479 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:29:28.745007 kubelet[2539]: E0117 00:29:28.742139 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:28.745007 kubelet[2539]: E0117 00:29:28.743697 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:28.745007 kubelet[2539]: E0117 00:29:28.744022 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:28.796481 kubelet[2539]: I0117 00:29:28.796368 2539 apiserver.go:52] "Watching apiserver" Jan 17 00:29:28.836471 kubelet[2539]: I0117 00:29:28.834687 2539 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 00:29:29.386897 kubelet[2539]: E0117 00:29:29.386460 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:29.394662 kubelet[2539]: E0117 00:29:29.393121 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:29.394662 kubelet[2539]: E0117 00:29:29.387714 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:30.013550 kubelet[2539]: I0117 00:29:30.012311 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.012287044 podStartE2EDuration="2.012287044s" podCreationTimestamp="2026-01-17 00:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:29:30.010310749 
+0000 UTC m=+2.672350552" watchObservedRunningTime="2026-01-17 00:29:30.012287044 +0000 UTC m=+2.674326817" Jan 17 00:29:30.393530 kubelet[2539]: E0117 00:29:30.393229 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:30.398830 kubelet[2539]: E0117 00:29:30.398667 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:31.091715 kubelet[2539]: I0117 00:29:31.088206 2539 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:29:31.114292 containerd[1476]: time="2026-01-17T00:29:31.109182540Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:29:31.116572 kubelet[2539]: I0117 00:29:31.115510 2539 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:29:31.279520 kubelet[2539]: I0117 00:29:31.278887 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d856b265-71a9-4f72-a237-8ea6740df5ab-xtables-lock\") pod \"kube-proxy-svj6l\" (UID: \"d856b265-71a9-4f72-a237-8ea6740df5ab\") " pod="kube-system/kube-proxy-svj6l" Jan 17 00:29:31.279520 kubelet[2539]: I0117 00:29:31.278956 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d856b265-71a9-4f72-a237-8ea6740df5ab-lib-modules\") pod \"kube-proxy-svj6l\" (UID: \"d856b265-71a9-4f72-a237-8ea6740df5ab\") " pod="kube-system/kube-proxy-svj6l" Jan 17 00:29:31.279520 kubelet[2539]: I0117 00:29:31.278989 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d856b265-71a9-4f72-a237-8ea6740df5ab-kube-proxy\") pod \"kube-proxy-svj6l\" (UID: \"d856b265-71a9-4f72-a237-8ea6740df5ab\") " pod="kube-system/kube-proxy-svj6l" Jan 17 00:29:31.279520 kubelet[2539]: I0117 00:29:31.279013 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9stm\" (UniqueName: \"kubernetes.io/projected/d856b265-71a9-4f72-a237-8ea6740df5ab-kube-api-access-x9stm\") pod \"kube-proxy-svj6l\" (UID: \"d856b265-71a9-4f72-a237-8ea6740df5ab\") " pod="kube-system/kube-proxy-svj6l" Jan 17 00:29:31.308865 systemd[1]: Created slice kubepods-besteffort-podd856b265_71a9_4f72_a237_8ea6740df5ab.slice - libcontainer container kubepods-besteffort-podd856b265_71a9_4f72_a237_8ea6740df5ab.slice. Jan 17 00:29:31.373987 kubelet[2539]: E0117 00:29:31.373665 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:31.401262 kubelet[2539]: E0117 00:29:31.398527 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:31.401262 kubelet[2539]: E0117 00:29:31.399950 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:31.401262 kubelet[2539]: E0117 00:29:31.400389 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:31.722944 kubelet[2539]: E0117 00:29:31.718479 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jan 17 00:29:31.733471 containerd[1476]: time="2026-01-17T00:29:31.733412396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-svj6l,Uid:d856b265-71a9-4f72-a237-8ea6740df5ab,Namespace:kube-system,Attempt:0,}" Jan 17 00:29:31.907146 containerd[1476]: time="2026-01-17T00:29:31.905932535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:29:31.907146 containerd[1476]: time="2026-01-17T00:29:31.906496654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:29:31.907146 containerd[1476]: time="2026-01-17T00:29:31.906541155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:31.907146 containerd[1476]: time="2026-01-17T00:29:31.906714641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:32.067304 systemd[1]: Started cri-containerd-5c1ece469cf6a27ff3e10468246aa17805e7427ed2d08717eee88fdbfe953451.scope - libcontainer container 5c1ece469cf6a27ff3e10468246aa17805e7427ed2d08717eee88fdbfe953451. 
Jan 17 00:29:32.203242 containerd[1476]: time="2026-01-17T00:29:32.201591793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-svj6l,Uid:d856b265-71a9-4f72-a237-8ea6740df5ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c1ece469cf6a27ff3e10468246aa17805e7427ed2d08717eee88fdbfe953451\"" Jan 17 00:29:32.206097 kubelet[2539]: E0117 00:29:32.203703 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:32.233697 containerd[1476]: time="2026-01-17T00:29:32.233636325Z" level=info msg="CreateContainer within sandbox \"5c1ece469cf6a27ff3e10468246aa17805e7427ed2d08717eee88fdbfe953451\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:29:32.306883 containerd[1476]: time="2026-01-17T00:29:32.306364211Z" level=info msg="CreateContainer within sandbox \"5c1ece469cf6a27ff3e10468246aa17805e7427ed2d08717eee88fdbfe953451\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"df18121b41889a2d232274865e646bff56325f267ab87b74b1588fb7e255b9f2\"" Jan 17 00:29:32.311537 containerd[1476]: time="2026-01-17T00:29:32.309559073Z" level=info msg="StartContainer for \"df18121b41889a2d232274865e646bff56325f267ab87b74b1588fb7e255b9f2\"" Jan 17 00:29:32.436283 kubelet[2539]: E0117 00:29:32.436150 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:32.436965 kubelet[2539]: E0117 00:29:32.436929 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:32.503208 systemd[1]: Started cri-containerd-df18121b41889a2d232274865e646bff56325f267ab87b74b1588fb7e255b9f2.scope - libcontainer container 
df18121b41889a2d232274865e646bff56325f267ab87b74b1588fb7e255b9f2. Jan 17 00:29:32.577578 systemd[1]: Created slice kubepods-burstable-pod41a16e13_780f_42fa_8c98_29744ff4c0c0.slice - libcontainer container kubepods-burstable-pod41a16e13_780f_42fa_8c98_29744ff4c0c0.slice. Jan 17 00:29:32.611383 kubelet[2539]: I0117 00:29:32.611213 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/41a16e13-780f-42fa-8c98-29744ff4c0c0-run\") pod \"kube-flannel-ds-9pxkt\" (UID: \"41a16e13-780f-42fa-8c98-29744ff4c0c0\") " pod="kube-flannel/kube-flannel-ds-9pxkt" Jan 17 00:29:32.611383 kubelet[2539]: I0117 00:29:32.611276 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/41a16e13-780f-42fa-8c98-29744ff4c0c0-cni-plugin\") pod \"kube-flannel-ds-9pxkt\" (UID: \"41a16e13-780f-42fa-8c98-29744ff4c0c0\") " pod="kube-flannel/kube-flannel-ds-9pxkt" Jan 17 00:29:32.611383 kubelet[2539]: I0117 00:29:32.611315 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/41a16e13-780f-42fa-8c98-29744ff4c0c0-flannel-cfg\") pod \"kube-flannel-ds-9pxkt\" (UID: \"41a16e13-780f-42fa-8c98-29744ff4c0c0\") " pod="kube-flannel/kube-flannel-ds-9pxkt" Jan 17 00:29:32.611383 kubelet[2539]: I0117 00:29:32.611343 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/41a16e13-780f-42fa-8c98-29744ff4c0c0-cni\") pod \"kube-flannel-ds-9pxkt\" (UID: \"41a16e13-780f-42fa-8c98-29744ff4c0c0\") " pod="kube-flannel/kube-flannel-ds-9pxkt" Jan 17 00:29:32.611383 kubelet[2539]: I0117 00:29:32.611367 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qhqn\" (UniqueName: 
\"kubernetes.io/projected/41a16e13-780f-42fa-8c98-29744ff4c0c0-kube-api-access-6qhqn\") pod \"kube-flannel-ds-9pxkt\" (UID: \"41a16e13-780f-42fa-8c98-29744ff4c0c0\") " pod="kube-flannel/kube-flannel-ds-9pxkt" Jan 17 00:29:32.611829 kubelet[2539]: I0117 00:29:32.611398 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41a16e13-780f-42fa-8c98-29744ff4c0c0-xtables-lock\") pod \"kube-flannel-ds-9pxkt\" (UID: \"41a16e13-780f-42fa-8c98-29744ff4c0c0\") " pod="kube-flannel/kube-flannel-ds-9pxkt" Jan 17 00:29:32.629959 containerd[1476]: time="2026-01-17T00:29:32.629795277Z" level=info msg="StartContainer for \"df18121b41889a2d232274865e646bff56325f267ab87b74b1588fb7e255b9f2\" returns successfully" Jan 17 00:29:32.948129 sudo[1598]: pam_unix(sudo:session): session closed for user root Jan 17 00:29:32.977611 sshd[1594]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:32.993214 systemd[1]: sshd@4-10.0.0.67:22-10.0.0.1:43452.service: Deactivated successfully. Jan 17 00:29:32.998947 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:29:32.999967 systemd[1]: session-5.scope: Consumed 19.528s CPU time, 165.3M memory peak, 0B memory swap peak. Jan 17 00:29:33.006537 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:29:33.012303 systemd-logind[1444]: Removed session 5. 
Jan 17 00:29:33.290343 kubelet[2539]: E0117 00:29:33.287509 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:33.291512 containerd[1476]: time="2026-01-17T00:29:33.289058558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-9pxkt,Uid:41a16e13-780f-42fa-8c98-29744ff4c0c0,Namespace:kube-flannel,Attempt:0,}" Jan 17 00:29:33.427949 containerd[1476]: time="2026-01-17T00:29:33.427334154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:29:33.427949 containerd[1476]: time="2026-01-17T00:29:33.427431072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:29:33.427949 containerd[1476]: time="2026-01-17T00:29:33.427448864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:33.427949 containerd[1476]: time="2026-01-17T00:29:33.427582719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:33.464130 kubelet[2539]: E0117 00:29:33.462818 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:33.464130 kubelet[2539]: E0117 00:29:33.463218 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:33.603665 systemd[1]: run-containerd-runc-k8s.io-a62d498f229edc1fb5822ab5837b6043d82127d1cd305fe96018b32abb3d8cc7-runc.7Tb3Qm.mount: Deactivated successfully. 
Jan 17 00:29:33.679460 systemd[1]: Started cri-containerd-a62d498f229edc1fb5822ab5837b6043d82127d1cd305fe96018b32abb3d8cc7.scope - libcontainer container a62d498f229edc1fb5822ab5837b6043d82127d1cd305fe96018b32abb3d8cc7. Jan 17 00:29:33.906920 containerd[1476]: time="2026-01-17T00:29:33.905676836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-9pxkt,Uid:41a16e13-780f-42fa-8c98-29744ff4c0c0,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"a62d498f229edc1fb5822ab5837b6043d82127d1cd305fe96018b32abb3d8cc7\"" Jan 17 00:29:33.908037 kubelet[2539]: E0117 00:29:33.907958 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:33.912286 containerd[1476]: time="2026-01-17T00:29:33.912211254Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Jan 17 00:29:34.480230 kubelet[2539]: E0117 00:29:34.480191 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:35.110512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3851974884.mount: Deactivated successfully. 
Jan 17 00:29:35.252086 containerd[1476]: time="2026-01-17T00:29:35.251920490Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:29:35.254662 containerd[1476]: time="2026-01-17T00:29:35.254505611Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008" Jan 17 00:29:35.256973 containerd[1476]: time="2026-01-17T00:29:35.256575095Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:29:35.263456 containerd[1476]: time="2026-01-17T00:29:35.262124737Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:29:35.264139 containerd[1476]: time="2026-01-17T00:29:35.264099626Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 1.351801422s" Jan 17 00:29:35.264200 containerd[1476]: time="2026-01-17T00:29:35.264139579Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Jan 17 00:29:35.275430 containerd[1476]: time="2026-01-17T00:29:35.275329519Z" level=info msg="CreateContainer within sandbox \"a62d498f229edc1fb5822ab5837b6043d82127d1cd305fe96018b32abb3d8cc7\" for container 
&ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 17 00:29:35.304501 containerd[1476]: time="2026-01-17T00:29:35.304339462Z" level=info msg="CreateContainer within sandbox \"a62d498f229edc1fb5822ab5837b6043d82127d1cd305fe96018b32abb3d8cc7\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"7daee5eff4606a733d80a995aae3f8a5fe8ff27830c85bf6ae8343a3b28714e2\"" Jan 17 00:29:35.305669 containerd[1476]: time="2026-01-17T00:29:35.305571218Z" level=info msg="StartContainer for \"7daee5eff4606a733d80a995aae3f8a5fe8ff27830c85bf6ae8343a3b28714e2\"" Jan 17 00:29:35.413437 systemd[1]: Started cri-containerd-7daee5eff4606a733d80a995aae3f8a5fe8ff27830c85bf6ae8343a3b28714e2.scope - libcontainer container 7daee5eff4606a733d80a995aae3f8a5fe8ff27830c85bf6ae8343a3b28714e2. Jan 17 00:29:35.471530 containerd[1476]: time="2026-01-17T00:29:35.471419226Z" level=info msg="StartContainer for \"7daee5eff4606a733d80a995aae3f8a5fe8ff27830c85bf6ae8343a3b28714e2\" returns successfully" Jan 17 00:29:35.475602 systemd[1]: cri-containerd-7daee5eff4606a733d80a995aae3f8a5fe8ff27830c85bf6ae8343a3b28714e2.scope: Deactivated successfully. 
Jan 17 00:29:35.487485 kubelet[2539]: E0117 00:29:35.487008 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:35.514369 kubelet[2539]: I0117 00:29:35.514278 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-svj6l" podStartSLOduration=4.514255179 podStartE2EDuration="4.514255179s" podCreationTimestamp="2026-01-17 00:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:29:33.603273261 +0000 UTC m=+6.265313065" watchObservedRunningTime="2026-01-17 00:29:35.514255179 +0000 UTC m=+8.176294962" Jan 17 00:29:35.575194 containerd[1476]: time="2026-01-17T00:29:35.574880442Z" level=info msg="shim disconnected" id=7daee5eff4606a733d80a995aae3f8a5fe8ff27830c85bf6ae8343a3b28714e2 namespace=k8s.io Jan 17 00:29:35.575194 containerd[1476]: time="2026-01-17T00:29:35.575040616Z" level=warning msg="cleaning up after shim disconnected" id=7daee5eff4606a733d80a995aae3f8a5fe8ff27830c85bf6ae8343a3b28714e2 namespace=k8s.io Jan 17 00:29:35.575194 containerd[1476]: time="2026-01-17T00:29:35.575055173Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:29:35.794224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7daee5eff4606a733d80a995aae3f8a5fe8ff27830c85bf6ae8343a3b28714e2-rootfs.mount: Deactivated successfully. 
Jan 17 00:29:36.502959 kubelet[2539]: E0117 00:29:36.502226 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:36.507553 containerd[1476]: time="2026-01-17T00:29:36.507016252Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Jan 17 00:29:41.597922 containerd[1476]: time="2026-01-17T00:29:41.597562304Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:29:41.604965 containerd[1476]: time="2026-01-17T00:29:41.603182942Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574" Jan 17 00:29:41.608290 containerd[1476]: time="2026-01-17T00:29:41.608078963Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:29:41.623414 containerd[1476]: time="2026-01-17T00:29:41.623058165Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:29:41.631040 containerd[1476]: time="2026-01-17T00:29:41.628343091Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 5.121279812s" Jan 17 00:29:41.631040 containerd[1476]: time="2026-01-17T00:29:41.629708667Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference 
\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Jan 17 00:29:41.642638 containerd[1476]: time="2026-01-17T00:29:41.642433019Z" level=info msg="CreateContainer within sandbox \"a62d498f229edc1fb5822ab5837b6043d82127d1cd305fe96018b32abb3d8cc7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 00:29:41.704117 containerd[1476]: time="2026-01-17T00:29:41.703999116Z" level=info msg="CreateContainer within sandbox \"a62d498f229edc1fb5822ab5837b6043d82127d1cd305fe96018b32abb3d8cc7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"58af8d908afaf6702d96c66411b31e9234b9a4db27b10e96d25e882961f3d250\"" Jan 17 00:29:41.709116 containerd[1476]: time="2026-01-17T00:29:41.708711529Z" level=info msg="StartContainer for \"58af8d908afaf6702d96c66411b31e9234b9a4db27b10e96d25e882961f3d250\"" Jan 17 00:29:41.811710 systemd[1]: Started cri-containerd-58af8d908afaf6702d96c66411b31e9234b9a4db27b10e96d25e882961f3d250.scope - libcontainer container 58af8d908afaf6702d96c66411b31e9234b9a4db27b10e96d25e882961f3d250. Jan 17 00:29:41.949650 containerd[1476]: time="2026-01-17T00:29:41.944589441Z" level=info msg="StartContainer for \"58af8d908afaf6702d96c66411b31e9234b9a4db27b10e96d25e882961f3d250\" returns successfully" Jan 17 00:29:41.948552 systemd[1]: cri-containerd-58af8d908afaf6702d96c66411b31e9234b9a4db27b10e96d25e882961f3d250.scope: Deactivated successfully. Jan 17 00:29:42.011367 kubelet[2539]: I0117 00:29:42.009605 2539 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 17 00:29:42.077602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58af8d908afaf6702d96c66411b31e9234b9a4db27b10e96d25e882961f3d250-rootfs.mount: Deactivated successfully. Jan 17 00:29:42.232107 systemd[1]: Created slice kubepods-burstable-pod1525b353_e51b_4385_8c29_f721d66502b0.slice - libcontainer container kubepods-burstable-pod1525b353_e51b_4385_8c29_f721d66502b0.slice. 
Jan 17 00:29:42.297566 kubelet[2539]: I0117 00:29:42.295414 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1525b353-e51b-4385-8c29-f721d66502b0-config-volume\") pod \"coredns-66bc5c9577-fhjd7\" (UID: \"1525b353-e51b-4385-8c29-f721d66502b0\") " pod="kube-system/coredns-66bc5c9577-fhjd7" Jan 17 00:29:42.297566 kubelet[2539]: I0117 00:29:42.295478 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlqgc\" (UniqueName: \"kubernetes.io/projected/1525b353-e51b-4385-8c29-f721d66502b0-kube-api-access-jlqgc\") pod \"coredns-66bc5c9577-fhjd7\" (UID: \"1525b353-e51b-4385-8c29-f721d66502b0\") " pod="kube-system/coredns-66bc5c9577-fhjd7" Jan 17 00:29:42.338591 systemd[1]: Created slice kubepods-burstable-pod83d8c844_cba0_497d_94a6_b19e998d5f13.slice - libcontainer container kubepods-burstable-pod83d8c844_cba0_497d_94a6_b19e998d5f13.slice. 
Jan 17 00:29:42.373388 containerd[1476]: time="2026-01-17T00:29:42.372404814Z" level=info msg="shim disconnected" id=58af8d908afaf6702d96c66411b31e9234b9a4db27b10e96d25e882961f3d250 namespace=k8s.io Jan 17 00:29:42.373388 containerd[1476]: time="2026-01-17T00:29:42.372521120Z" level=warning msg="cleaning up after shim disconnected" id=58af8d908afaf6702d96c66411b31e9234b9a4db27b10e96d25e882961f3d250 namespace=k8s.io Jan 17 00:29:42.373388 containerd[1476]: time="2026-01-17T00:29:42.372537902Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:29:42.398593 kubelet[2539]: I0117 00:29:42.397485 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc9sz\" (UniqueName: \"kubernetes.io/projected/83d8c844-cba0-497d-94a6-b19e998d5f13-kube-api-access-dc9sz\") pod \"coredns-66bc5c9577-nt5c6\" (UID: \"83d8c844-cba0-497d-94a6-b19e998d5f13\") " pod="kube-system/coredns-66bc5c9577-nt5c6" Jan 17 00:29:42.398593 kubelet[2539]: I0117 00:29:42.397562 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83d8c844-cba0-497d-94a6-b19e998d5f13-config-volume\") pod \"coredns-66bc5c9577-nt5c6\" (UID: \"83d8c844-cba0-497d-94a6-b19e998d5f13\") " pod="kube-system/coredns-66bc5c9577-nt5c6" Jan 17 00:29:42.429838 containerd[1476]: time="2026-01-17T00:29:42.429364516Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:29:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:29:42.551291 kubelet[2539]: E0117 00:29:42.548375 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:42.551446 containerd[1476]: time="2026-01-17T00:29:42.549521819Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fhjd7,Uid:1525b353-e51b-4385-8c29-f721d66502b0,Namespace:kube-system,Attempt:0,}" Jan 17 00:29:42.572615 kubelet[2539]: E0117 00:29:42.572162 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:42.591436 containerd[1476]: time="2026-01-17T00:29:42.591002037Z" level=info msg="CreateContainer within sandbox \"a62d498f229edc1fb5822ab5837b6043d82127d1cd305fe96018b32abb3d8cc7\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 17 00:29:42.670124 kubelet[2539]: E0117 00:29:42.668502 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:42.670415 containerd[1476]: time="2026-01-17T00:29:42.669579791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nt5c6,Uid:83d8c844-cba0-497d-94a6-b19e998d5f13,Namespace:kube-system,Attempt:0,}" Jan 17 00:29:42.682882 containerd[1476]: time="2026-01-17T00:29:42.682833161Z" level=info msg="CreateContainer within sandbox \"a62d498f229edc1fb5822ab5837b6043d82127d1cd305fe96018b32abb3d8cc7\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"299aa050689e857544e5d5e77a98c6f590868706003926201935910a2a7ee812\"" Jan 17 00:29:42.690584 containerd[1476]: time="2026-01-17T00:29:42.685333716Z" level=info msg="StartContainer for \"299aa050689e857544e5d5e77a98c6f590868706003926201935910a2a7ee812\"" Jan 17 00:29:42.768531 systemd[1]: run-netns-cni\x2dec2a367f\x2d7e69\x2d5bbe\x2dd930\x2df2543049ce32.mount: Deactivated successfully. Jan 17 00:29:42.782679 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b0715ecc6b288131490e128c7bdf44278ea2be8b2013029e37193e0b54609f2a-shm.mount: Deactivated successfully. 
Jan 17 00:29:42.792424 containerd[1476]: time="2026-01-17T00:29:42.792017780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fhjd7,Uid:1525b353-e51b-4385-8c29-f721d66502b0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b0715ecc6b288131490e128c7bdf44278ea2be8b2013029e37193e0b54609f2a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 17 00:29:42.799243 kubelet[2539]: E0117 00:29:42.792695 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0715ecc6b288131490e128c7bdf44278ea2be8b2013029e37193e0b54609f2a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 17 00:29:42.799243 kubelet[2539]: E0117 00:29:42.793017 2539 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0715ecc6b288131490e128c7bdf44278ea2be8b2013029e37193e0b54609f2a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-fhjd7" Jan 17 00:29:42.799243 kubelet[2539]: E0117 00:29:42.793044 2539 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0715ecc6b288131490e128c7bdf44278ea2be8b2013029e37193e0b54609f2a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-fhjd7" Jan 17 00:29:42.799243 kubelet[2539]: E0117 00:29:42.793106 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-fhjd7_kube-system(1525b353-e51b-4385-8c29-f721d66502b0)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-fhjd7_kube-system(1525b353-e51b-4385-8c29-f721d66502b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0715ecc6b288131490e128c7bdf44278ea2be8b2013029e37193e0b54609f2a\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-fhjd7" podUID="1525b353-e51b-4385-8c29-f721d66502b0" Jan 17 00:29:42.840289 systemd[1]: Started cri-containerd-299aa050689e857544e5d5e77a98c6f590868706003926201935910a2a7ee812.scope - libcontainer container 299aa050689e857544e5d5e77a98c6f590868706003926201935910a2a7ee812. Jan 17 00:29:42.871969 containerd[1476]: time="2026-01-17T00:29:42.869901336Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nt5c6,Uid:83d8c844-cba0-497d-94a6-b19e998d5f13,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"94bb6f204c97a1e6d9b7be17d5ce0f6af8c4016cb5ffabe9447bdb13bc1b650b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 17 00:29:42.872208 kubelet[2539]: E0117 00:29:42.870512 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94bb6f204c97a1e6d9b7be17d5ce0f6af8c4016cb5ffabe9447bdb13bc1b650b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 17 00:29:42.872208 kubelet[2539]: E0117 00:29:42.870588 2539 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94bb6f204c97a1e6d9b7be17d5ce0f6af8c4016cb5ffabe9447bdb13bc1b650b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" 
pod="kube-system/coredns-66bc5c9577-nt5c6" Jan 17 00:29:42.872208 kubelet[2539]: E0117 00:29:42.870616 2539 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94bb6f204c97a1e6d9b7be17d5ce0f6af8c4016cb5ffabe9447bdb13bc1b650b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-nt5c6" Jan 17 00:29:42.872208 kubelet[2539]: E0117 00:29:42.870668 2539 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-nt5c6_kube-system(83d8c844-cba0-497d-94a6-b19e998d5f13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-nt5c6_kube-system(83d8c844-cba0-497d-94a6-b19e998d5f13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94bb6f204c97a1e6d9b7be17d5ce0f6af8c4016cb5ffabe9447bdb13bc1b650b\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-nt5c6" podUID="83d8c844-cba0-497d-94a6-b19e998d5f13" Jan 17 00:29:42.955289 containerd[1476]: time="2026-01-17T00:29:42.952384947Z" level=info msg="StartContainer for \"299aa050689e857544e5d5e77a98c6f590868706003926201935910a2a7ee812\" returns successfully" Jan 17 00:29:43.603575 kubelet[2539]: E0117 00:29:43.603060 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:43.693250 systemd[1]: run-netns-cni\x2db6cef7b9\x2deb3e\x2d6328\x2de618\x2dc61d0ef0741f.mount: Deactivated successfully. Jan 17 00:29:43.693569 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94bb6f204c97a1e6d9b7be17d5ce0f6af8c4016cb5ffabe9447bdb13bc1b650b-shm.mount: Deactivated successfully. 
Jan 17 00:29:44.177182 systemd-networkd[1383]: flannel.1: Link UP Jan 17 00:29:44.177195 systemd-networkd[1383]: flannel.1: Gained carrier Jan 17 00:29:44.608971 kubelet[2539]: E0117 00:29:44.608643 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:45.613584 systemd-networkd[1383]: flannel.1: Gained IPv6LL Jan 17 00:29:54.056050 kubelet[2539]: E0117 00:29:54.054810 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:54.057023 containerd[1476]: time="2026-01-17T00:29:54.055446216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fhjd7,Uid:1525b353-e51b-4385-8c29-f721d66502b0,Namespace:kube-system,Attempt:0,}" Jan 17 00:29:54.170900 systemd-networkd[1383]: cni0: Link UP Jan 17 00:29:54.170919 systemd-networkd[1383]: cni0: Gained carrier Jan 17 00:29:54.178556 systemd-networkd[1383]: cni0: Lost carrier Jan 17 00:29:54.213422 systemd-networkd[1383]: veth182f09ef: Link UP Jan 17 00:29:54.219034 kernel: cni0: port 1(veth182f09ef) entered blocking state Jan 17 00:29:54.219142 kernel: cni0: port 1(veth182f09ef) entered disabled state Jan 17 00:29:54.219187 kernel: veth182f09ef: entered allmulticast mode Jan 17 00:29:54.229938 kernel: veth182f09ef: entered promiscuous mode Jan 17 00:29:54.230036 kernel: cni0: port 1(veth182f09ef) entered blocking state Jan 17 00:29:54.230070 kernel: cni0: port 1(veth182f09ef) entered forwarding state Jan 17 00:29:54.234070 kernel: cni0: port 1(veth182f09ef) entered disabled state Jan 17 00:29:54.264997 kernel: cni0: port 1(veth182f09ef) entered blocking state Jan 17 00:29:54.265140 kernel: cni0: port 1(veth182f09ef) entered forwarding state Jan 17 00:29:54.265284 systemd-networkd[1383]: veth182f09ef: Gained carrier Jan 17 00:29:54.266269 
systemd-networkd[1383]: cni0: Gained carrier Jan 17 00:29:54.274664 containerd[1476]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000104950), "name":"cbr0", "type":"bridge"} Jan 17 00:29:54.274664 containerd[1476]: delegateAdd: netconf sent to delegate plugin: Jan 17 00:29:54.355812 containerd[1476]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-17T00:29:54.355249976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:29:54.355812 containerd[1476]: time="2026-01-17T00:29:54.355429996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:29:54.355812 containerd[1476]: time="2026-01-17T00:29:54.355492343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:54.367033 containerd[1476]: time="2026-01-17T00:29:54.361412371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:54.436410 systemd[1]: Started cri-containerd-da7805640795f133c723e266962c804bc2bb6974b7346457ae75d951685e6440.scope - libcontainer container da7805640795f133c723e266962c804bc2bb6974b7346457ae75d951685e6440. Jan 17 00:29:54.474370 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:29:54.552964 containerd[1476]: time="2026-01-17T00:29:54.552903297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fhjd7,Uid:1525b353-e51b-4385-8c29-f721d66502b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"da7805640795f133c723e266962c804bc2bb6974b7346457ae75d951685e6440\"" Jan 17 00:29:54.554663 kubelet[2539]: E0117 00:29:54.554490 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:54.569326 containerd[1476]: time="2026-01-17T00:29:54.569230455Z" level=info msg="CreateContainer within sandbox \"da7805640795f133c723e266962c804bc2bb6974b7346457ae75d951685e6440\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:29:54.618602 containerd[1476]: time="2026-01-17T00:29:54.616706396Z" level=info msg="CreateContainer within sandbox \"da7805640795f133c723e266962c804bc2bb6974b7346457ae75d951685e6440\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4962f68dfeab4294822505169da3a222ba6f83b1a2589fb228ccaca7e7ddba47\"" Jan 17 00:29:54.619694 containerd[1476]: time="2026-01-17T00:29:54.619517954Z" level=info msg="StartContainer for \"4962f68dfeab4294822505169da3a222ba6f83b1a2589fb228ccaca7e7ddba47\"" Jan 17 00:29:54.705049 systemd[1]: Started cri-containerd-4962f68dfeab4294822505169da3a222ba6f83b1a2589fb228ccaca7e7ddba47.scope - libcontainer container 4962f68dfeab4294822505169da3a222ba6f83b1a2589fb228ccaca7e7ddba47. 
Jan 17 00:29:54.796853 containerd[1476]: time="2026-01-17T00:29:54.796337847Z" level=info msg="StartContainer for \"4962f68dfeab4294822505169da3a222ba6f83b1a2589fb228ccaca7e7ddba47\" returns successfully" Jan 17 00:29:55.060661 kubelet[2539]: E0117 00:29:55.059448 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:55.061642 containerd[1476]: time="2026-01-17T00:29:55.061572910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nt5c6,Uid:83d8c844-cba0-497d-94a6-b19e998d5f13,Namespace:kube-system,Attempt:0,}" Jan 17 00:29:55.125584 systemd-networkd[1383]: veth0dc4fe97: Link UP Jan 17 00:29:55.139156 kernel: cni0: port 2(veth0dc4fe97) entered blocking state Jan 17 00:29:55.139248 kernel: cni0: port 2(veth0dc4fe97) entered disabled state Jan 17 00:29:55.142571 kernel: veth0dc4fe97: entered allmulticast mode Jan 17 00:29:55.146243 kernel: veth0dc4fe97: entered promiscuous mode Jan 17 00:29:55.151086 kernel: cni0: port 2(veth0dc4fe97) entered blocking state Jan 17 00:29:55.151160 kernel: cni0: port 2(veth0dc4fe97) entered forwarding state Jan 17 00:29:55.168267 systemd-networkd[1383]: veth0dc4fe97: Gained carrier Jan 17 00:29:55.172536 containerd[1476]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000129a0), "name":"cbr0", "type":"bridge"} Jan 17 00:29:55.172536 containerd[1476]: delegateAdd: netconf sent to delegate plugin: Jan 17 
00:29:55.266013 containerd[1476]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-17T00:29:55.265090566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:29:55.266013 containerd[1476]: time="2026-01-17T00:29:55.265167150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:29:55.266013 containerd[1476]: time="2026-01-17T00:29:55.265199451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:55.268170 containerd[1476]: time="2026-01-17T00:29:55.266496133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:55.322616 systemd[1]: Started cri-containerd-ae18d5eccf7b1c53ef66a36acb0605fbf1b96c2c9415346d9a786e151849307a.scope - libcontainer container ae18d5eccf7b1c53ef66a36acb0605fbf1b96c2c9415346d9a786e151849307a. 
Jan 17 00:29:55.375960 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:29:55.455269 containerd[1476]: time="2026-01-17T00:29:55.455146834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nt5c6,Uid:83d8c844-cba0-497d-94a6-b19e998d5f13,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae18d5eccf7b1c53ef66a36acb0605fbf1b96c2c9415346d9a786e151849307a\"" Jan 17 00:29:55.461456 kubelet[2539]: E0117 00:29:55.460622 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:55.470608 containerd[1476]: time="2026-01-17T00:29:55.468554804Z" level=info msg="CreateContainer within sandbox \"ae18d5eccf7b1c53ef66a36acb0605fbf1b96c2c9415346d9a786e151849307a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:29:55.504693 containerd[1476]: time="2026-01-17T00:29:55.504330700Z" level=info msg="CreateContainer within sandbox \"ae18d5eccf7b1c53ef66a36acb0605fbf1b96c2c9415346d9a786e151849307a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"347f8af0d9ef98ee19f3f4259922d4956e475fe500563574e346c187e4b89994\"" Jan 17 00:29:55.506259 containerd[1476]: time="2026-01-17T00:29:55.506151022Z" level=info msg="StartContainer for \"347f8af0d9ef98ee19f3f4259922d4956e475fe500563574e346c187e4b89994\"" Jan 17 00:29:55.575531 systemd[1]: Started cri-containerd-347f8af0d9ef98ee19f3f4259922d4956e475fe500563574e346c187e4b89994.scope - libcontainer container 347f8af0d9ef98ee19f3f4259922d4956e475fe500563574e346c187e4b89994. 
Jan 17 00:29:55.644979 containerd[1476]: time="2026-01-17T00:29:55.644641509Z" level=info msg="StartContainer for \"347f8af0d9ef98ee19f3f4259922d4956e475fe500563574e346c187e4b89994\" returns successfully" Jan 17 00:29:55.678433 kubelet[2539]: E0117 00:29:55.678324 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:55.685814 kubelet[2539]: E0117 00:29:55.685549 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:55.722790 kubelet[2539]: I0117 00:29:55.722106 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-9pxkt" podStartSLOduration=15.999097875 podStartE2EDuration="23.72208325s" podCreationTimestamp="2026-01-17 00:29:32 +0000 UTC" firstStartedPulling="2026-01-17 00:29:33.911415006 +0000 UTC m=+6.573454789" lastFinishedPulling="2026-01-17 00:29:41.634400391 +0000 UTC m=+14.296440164" observedRunningTime="2026-01-17 00:29:43.635653746 +0000 UTC m=+16.297693519" watchObservedRunningTime="2026-01-17 00:29:55.72208325 +0000 UTC m=+28.384123033" Jan 17 00:29:55.722790 kubelet[2539]: I0117 00:29:55.722347 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fhjd7" podStartSLOduration=24.722337059 podStartE2EDuration="24.722337059s" podCreationTimestamp="2026-01-17 00:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:29:55.716539102 +0000 UTC m=+28.378578885" watchObservedRunningTime="2026-01-17 00:29:55.722337059 +0000 UTC m=+28.384376832" Jan 17 00:29:55.800192 kubelet[2539]: I0117 00:29:55.799964 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-66bc5c9577-nt5c6" podStartSLOduration=24.79994354 podStartE2EDuration="24.79994354s" podCreationTimestamp="2026-01-17 00:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:29:55.799631908 +0000 UTC m=+28.461671702" watchObservedRunningTime="2026-01-17 00:29:55.79994354 +0000 UTC m=+28.461983333" Jan 17 00:29:55.852310 systemd-networkd[1383]: veth182f09ef: Gained IPv6LL Jan 17 00:29:56.108980 systemd-networkd[1383]: cni0: Gained IPv6LL Jan 17 00:29:56.694310 kubelet[2539]: E0117 00:29:56.694118 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:56.695683 kubelet[2539]: E0117 00:29:56.695423 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:57.069598 systemd-networkd[1383]: veth0dc4fe97: Gained IPv6LL Jan 17 00:29:57.698636 kubelet[2539]: E0117 00:29:57.698411 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:57.700828 kubelet[2539]: E0117 00:29:57.699168 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:30:30.022615 systemd[1]: Started sshd@5-10.0.0.67:22-10.0.0.1:32932.service - OpenSSH per-connection server daemon (10.0.0.1:32932). 
Jan 17 00:30:30.157312 sshd[3604]: Accepted publickey for core from 10.0.0.1 port 32932 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:30:30.164245 sshd[3604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:30:30.195189 systemd-logind[1444]: New session 6 of user core. Jan 17 00:30:30.209203 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:30:30.509371 sshd[3604]: pam_unix(sshd:session): session closed for user core Jan 17 00:30:30.517404 systemd[1]: sshd@5-10.0.0.67:22-10.0.0.1:32932.service: Deactivated successfully. Jan 17 00:30:30.522206 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:30:30.524682 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:30:30.537826 systemd-logind[1444]: Removed session 6. Jan 17 00:30:35.578931 systemd[1]: Started sshd@6-10.0.0.67:22-10.0.0.1:51716.service - OpenSSH per-connection server daemon (10.0.0.1:51716). Jan 17 00:30:35.666096 sshd[3658]: Accepted publickey for core from 10.0.0.1 port 51716 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:30:35.669641 sshd[3658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:30:35.687624 systemd-logind[1444]: New session 7 of user core. Jan 17 00:30:35.703699 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:30:36.023277 sshd[3658]: pam_unix(sshd:session): session closed for user core Jan 17 00:30:36.035149 systemd[1]: sshd@6-10.0.0.67:22-10.0.0.1:51716.service: Deactivated successfully. Jan 17 00:30:36.042560 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:30:36.048414 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:30:36.059364 systemd-logind[1444]: Removed session 7. Jan 17 00:30:41.197681 systemd[1]: Started sshd@7-10.0.0.67:22-10.0.0.1:51722.service - OpenSSH per-connection server daemon (10.0.0.1:51722). 
Jan 17 00:30:41.837946 sshd[3693]: Accepted publickey for core from 10.0.0.1 port 51722 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:30:41.870410 sshd[3693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:30:41.900232 systemd-logind[1444]: New session 8 of user core. Jan 17 00:30:41.916862 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:30:42.080653 kubelet[2539]: E0117 00:30:42.075488 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:30:42.120152 kubelet[2539]: E0117 00:30:42.082504 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:30:42.861506 sshd[3693]: pam_unix(sshd:session): session closed for user core Jan 17 00:30:42.878679 systemd[1]: sshd@7-10.0.0.67:22-10.0.0.1:51722.service: Deactivated successfully. Jan 17 00:30:42.888547 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:30:42.896599 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:30:42.901387 systemd-logind[1444]: Removed session 8. Jan 17 00:30:47.932177 systemd[1]: Started sshd@8-10.0.0.67:22-10.0.0.1:37276.service - OpenSSH per-connection server daemon (10.0.0.1:37276). Jan 17 00:30:48.145281 sshd[3729]: Accepted publickey for core from 10.0.0.1 port 37276 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:30:48.152447 sshd[3729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:30:48.177409 systemd-logind[1444]: New session 9 of user core. Jan 17 00:30:48.197479 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 17 00:30:48.554864 sshd[3729]: pam_unix(sshd:session): session closed for user core Jan 17 00:30:48.582165 systemd[1]: sshd@8-10.0.0.67:22-10.0.0.1:37276.service: Deactivated successfully. Jan 17 00:30:48.586291 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:30:48.589313 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:30:48.618361 systemd[1]: Started sshd@9-10.0.0.67:22-10.0.0.1:37292.service - OpenSSH per-connection server daemon (10.0.0.1:37292). Jan 17 00:30:48.627260 systemd-logind[1444]: Removed session 9. Jan 17 00:30:48.689057 sshd[3744]: Accepted publickey for core from 10.0.0.1 port 37292 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:30:48.691601 sshd[3744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:30:48.705268 systemd-logind[1444]: New session 10 of user core. Jan 17 00:30:48.714396 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:30:49.037655 kubelet[2539]: E0117 00:30:49.037538 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:30:49.278245 sshd[3744]: pam_unix(sshd:session): session closed for user core Jan 17 00:30:49.319113 systemd[1]: sshd@9-10.0.0.67:22-10.0.0.1:37292.service: Deactivated successfully. Jan 17 00:30:49.332701 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:30:49.342090 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:30:49.474934 systemd[1]: Started sshd@10-10.0.0.67:22-10.0.0.1:37306.service - OpenSSH per-connection server daemon (10.0.0.1:37306). Jan 17 00:30:49.537083 systemd-logind[1444]: Removed session 10. 
Jan 17 00:30:49.638222 sshd[3756]: Accepted publickey for core from 10.0.0.1 port 37306 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:30:49.639503 sshd[3756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:30:49.666365 systemd-logind[1444]: New session 11 of user core. Jan 17 00:30:49.685340 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:30:50.390108 sshd[3756]: pam_unix(sshd:session): session closed for user core Jan 17 00:30:50.416365 systemd[1]: sshd@10-10.0.0.67:22-10.0.0.1:37306.service: Deactivated successfully. Jan 17 00:30:50.423179 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:30:50.425303 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:30:50.435026 systemd-logind[1444]: Removed session 11. Jan 17 00:30:55.455674 systemd[1]: Started sshd@11-10.0.0.67:22-10.0.0.1:46760.service - OpenSSH per-connection server daemon (10.0.0.1:46760). Jan 17 00:30:55.554344 sshd[3798]: Accepted publickey for core from 10.0.0.1 port 46760 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:30:55.565088 sshd[3798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:30:55.608122 systemd-logind[1444]: New session 12 of user core. Jan 17 00:30:55.616934 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:30:56.082928 sshd[3798]: pam_unix(sshd:session): session closed for user core Jan 17 00:30:56.097997 systemd[1]: sshd@11-10.0.0.67:22-10.0.0.1:46760.service: Deactivated successfully. Jan 17 00:30:56.110517 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:30:56.114949 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:30:56.119473 systemd-logind[1444]: Removed session 12. 
Jan 17 00:30:59.039859 kubelet[2539]: E0117 00:30:59.035380 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:30:59.039859 kubelet[2539]: E0117 00:30:59.038145 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:31:01.131312 systemd[1]: Started sshd@12-10.0.0.67:22-10.0.0.1:46762.service - OpenSSH per-connection server daemon (10.0.0.1:46762). Jan 17 00:31:01.284259 sshd[3832]: Accepted publickey for core from 10.0.0.1 port 46762 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:31:01.288503 sshd[3832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:01.320845 systemd-logind[1444]: New session 13 of user core. Jan 17 00:31:01.335910 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:31:01.676214 sshd[3832]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:01.700640 systemd[1]: sshd@12-10.0.0.67:22-10.0.0.1:46762.service: Deactivated successfully. Jan 17 00:31:01.704109 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:31:01.714534 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:31:01.727633 systemd-logind[1444]: Removed session 13. Jan 17 00:31:06.764129 systemd[1]: Started sshd@13-10.0.0.67:22-10.0.0.1:38886.service - OpenSSH per-connection server daemon (10.0.0.1:38886). Jan 17 00:31:06.864164 sshd[3869]: Accepted publickey for core from 10.0.0.1 port 38886 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:31:06.870017 sshd[3869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:06.897230 systemd-logind[1444]: New session 14 of user core. 
Jan 17 00:31:06.912626 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:31:07.285904 sshd[3869]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:07.300382 systemd[1]: sshd@13-10.0.0.67:22-10.0.0.1:38886.service: Deactivated successfully. Jan 17 00:31:07.305650 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:31:07.312641 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:31:07.317524 systemd-logind[1444]: Removed session 14. Jan 17 00:31:08.040670 kubelet[2539]: E0117 00:31:08.039526 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:31:12.325547 systemd[1]: Started sshd@14-10.0.0.67:22-10.0.0.1:38900.service - OpenSSH per-connection server daemon (10.0.0.1:38900). Jan 17 00:31:12.429603 sshd[3903]: Accepted publickey for core from 10.0.0.1 port 38900 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:31:12.433695 sshd[3903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:12.448980 systemd-logind[1444]: New session 15 of user core. Jan 17 00:31:12.466424 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:31:12.872192 sshd[3903]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:12.883696 systemd[1]: sshd@14-10.0.0.67:22-10.0.0.1:38900.service: Deactivated successfully. Jan 17 00:31:12.884236 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:31:12.893074 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:31:12.905342 systemd-logind[1444]: Removed session 15. Jan 17 00:31:17.910550 systemd[1]: Started sshd@15-10.0.0.67:22-10.0.0.1:35142.service - OpenSSH per-connection server daemon (10.0.0.1:35142). 
Jan 17 00:31:17.971291 sshd[3938]: Accepted publickey for core from 10.0.0.1 port 35142 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:31:17.975217 sshd[3938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:17.989026 systemd-logind[1444]: New session 16 of user core. Jan 17 00:31:18.000194 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:31:18.253257 sshd[3938]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:18.282885 systemd[1]: sshd@15-10.0.0.67:22-10.0.0.1:35142.service: Deactivated successfully. Jan 17 00:31:18.292137 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:31:18.304287 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:31:18.316468 systemd[1]: Started sshd@16-10.0.0.67:22-10.0.0.1:35148.service - OpenSSH per-connection server daemon (10.0.0.1:35148). Jan 17 00:31:18.318511 systemd-logind[1444]: Removed session 16. Jan 17 00:31:18.388880 sshd[3966]: Accepted publickey for core from 10.0.0.1 port 35148 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:31:18.391893 sshd[3966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:18.404097 systemd-logind[1444]: New session 17 of user core. Jan 17 00:31:18.414126 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:31:19.085492 sshd[3966]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:19.099252 systemd[1]: sshd@16-10.0.0.67:22-10.0.0.1:35148.service: Deactivated successfully. Jan 17 00:31:19.103474 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:31:19.109152 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:31:19.118660 systemd[1]: Started sshd@17-10.0.0.67:22-10.0.0.1:35152.service - OpenSSH per-connection server daemon (10.0.0.1:35152). 
Jan 17 00:31:19.120813 systemd-logind[1444]: Removed session 17. Jan 17 00:31:19.175572 sshd[3978]: Accepted publickey for core from 10.0.0.1 port 35152 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:31:19.178613 sshd[3978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:19.192379 systemd-logind[1444]: New session 18 of user core. Jan 17 00:31:19.203533 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 00:31:20.573018 sshd[3978]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:20.602644 systemd[1]: sshd@17-10.0.0.67:22-10.0.0.1:35152.service: Deactivated successfully. Jan 17 00:31:20.608630 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:31:20.609219 systemd[1]: session-18.scope: Consumed 1.092s CPU time. Jan 17 00:31:20.619956 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:31:20.633448 systemd[1]: Started sshd@18-10.0.0.67:22-10.0.0.1:35166.service - OpenSSH per-connection server daemon (10.0.0.1:35166). Jan 17 00:31:20.645303 systemd-logind[1444]: Removed session 18. Jan 17 00:31:20.730334 sshd[4000]: Accepted publickey for core from 10.0.0.1 port 35166 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:31:20.733262 sshd[4000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:20.756227 systemd-logind[1444]: New session 19 of user core. Jan 17 00:31:20.784353 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:31:21.576167 sshd[4000]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:21.607080 systemd[1]: sshd@18-10.0.0.67:22-10.0.0.1:35166.service: Deactivated successfully. Jan 17 00:31:21.620635 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:31:21.622149 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit. 
Jan 17 00:31:21.653059 systemd[1]: Started sshd@19-10.0.0.67:22-10.0.0.1:35180.service - OpenSSH per-connection server daemon (10.0.0.1:35180).
Jan 17 00:31:21.658623 systemd-logind[1444]: Removed session 19.
Jan 17 00:31:21.785896 sshd[4020]: Accepted publickey for core from 10.0.0.1 port 35180 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0
Jan 17 00:31:21.788276 sshd[4020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:31:21.812908 systemd-logind[1444]: New session 20 of user core.
Jan 17 00:31:21.828368 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 00:31:22.043843 kubelet[2539]: E0117 00:31:22.043558 2539 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:31:22.178556 sshd[4020]: pam_unix(sshd:session): session closed for user core
Jan 17 00:31:22.192441 systemd[1]: sshd@19-10.0.0.67:22-10.0.0.1:35180.service: Deactivated successfully.
Jan 17 00:31:22.201551 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 00:31:22.205384 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit.
Jan 17 00:31:22.209856 systemd-logind[1444]: Removed session 20.
Jan 17 00:31:27.242268 systemd[1]: Started sshd@20-10.0.0.67:22-10.0.0.1:49400.service - OpenSSH per-connection server daemon (10.0.0.1:49400).
Jan 17 00:31:27.421255 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 49400 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0
Jan 17 00:31:27.428437 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:31:27.455010 systemd-logind[1444]: New session 21 of user core.
Jan 17 00:31:27.487490 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 00:31:27.808281 sshd[4054]: pam_unix(sshd:session): session closed for user core
Jan 17 00:31:27.814965 systemd[1]: sshd@20-10.0.0.67:22-10.0.0.1:49400.service: Deactivated successfully.
Jan 17 00:31:27.818459 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 00:31:27.823288 systemd-logind[1444]: Session 21 logged out. Waiting for processes to exit.
Jan 17 00:31:27.826504 systemd-logind[1444]: Removed session 21.
Jan 17 00:31:32.842688 systemd[1]: Started sshd@21-10.0.0.67:22-10.0.0.1:47508.service - OpenSSH per-connection server daemon (10.0.0.1:47508).
Jan 17 00:31:32.949245 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 47508 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0
Jan 17 00:31:32.953979 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:31:32.985573 systemd-logind[1444]: New session 22 of user core.
Jan 17 00:31:32.998274 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 17 00:31:33.293110 sshd[4090]: pam_unix(sshd:session): session closed for user core
Jan 17 00:31:33.300227 systemd[1]: sshd@21-10.0.0.67:22-10.0.0.1:47508.service: Deactivated successfully.
Jan 17 00:31:33.305830 systemd[1]: session-22.scope: Deactivated successfully.
Jan 17 00:31:33.309914 systemd-logind[1444]: Session 22 logged out. Waiting for processes to exit.
Jan 17 00:31:33.313888 systemd-logind[1444]: Removed session 22.
Jan 17 00:31:38.314904 systemd[1]: Started sshd@22-10.0.0.67:22-10.0.0.1:47522.service - OpenSSH per-connection server daemon (10.0.0.1:47522).
Jan 17 00:31:38.415434 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 47522 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0
Jan 17 00:31:38.418149 sshd[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:31:38.426089 systemd-logind[1444]: New session 23 of user core.
Jan 17 00:31:38.443485 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 17 00:31:38.681403 sshd[4126]: pam_unix(sshd:session): session closed for user core
Jan 17 00:31:38.690852 systemd[1]: sshd@22-10.0.0.67:22-10.0.0.1:47522.service: Deactivated successfully.
Jan 17 00:31:38.698191 systemd[1]: session-23.scope: Deactivated successfully.
Jan 17 00:31:38.705232 systemd-logind[1444]: Session 23 logged out. Waiting for processes to exit.
Jan 17 00:31:38.710353 systemd-logind[1444]: Removed session 23.
Jan 17 00:31:43.736102 systemd[1]: Started sshd@23-10.0.0.67:22-10.0.0.1:40178.service - OpenSSH per-connection server daemon (10.0.0.1:40178).
Jan 17 00:31:43.909517 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 40178 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0
Jan 17 00:31:43.914152 sshd[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:31:43.938604 systemd-logind[1444]: New session 24 of user core.
Jan 17 00:31:43.956456 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 17 00:31:44.287896 sshd[4177]: pam_unix(sshd:session): session closed for user core
Jan 17 00:31:44.310297 systemd[1]: sshd@23-10.0.0.67:22-10.0.0.1:40178.service: Deactivated successfully.
Jan 17 00:31:44.315234 systemd[1]: session-24.scope: Deactivated successfully.
Jan 17 00:31:44.326303 systemd-logind[1444]: Session 24 logged out. Waiting for processes to exit.
Jan 17 00:31:44.336946 systemd-logind[1444]: Removed session 24.
Jan 17 00:31:49.377172 systemd[1]: Started sshd@24-10.0.0.67:22-10.0.0.1:40190.service - OpenSSH per-connection server daemon (10.0.0.1:40190).
Jan 17 00:31:49.496912 sshd[4211]: Accepted publickey for core from 10.0.0.1 port 40190 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0
Jan 17 00:31:49.502127 sshd[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:31:49.529428 systemd-logind[1444]: New session 25 of user core.
Jan 17 00:31:49.539013 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 17 00:31:49.981137 sshd[4211]: pam_unix(sshd:session): session closed for user core
Jan 17 00:31:49.996909 systemd[1]: sshd@24-10.0.0.67:22-10.0.0.1:40190.service: Deactivated successfully.
Jan 17 00:31:50.028366 systemd[1]: session-25.scope: Deactivated successfully.
Jan 17 00:31:50.036991 systemd-logind[1444]: Session 25 logged out. Waiting for processes to exit.
Jan 17 00:31:50.068106 systemd-logind[1444]: Removed session 25.