Jan 17 00:14:23.870879 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:14:23.870918 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:14:23.870939 kernel: BIOS-provided physical RAM map:
Jan 17 00:14:23.870947 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 17 00:14:23.870954 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 17 00:14:23.870963 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 17 00:14:23.870976 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 17 00:14:23.870984 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 17 00:14:23.870991 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 17 00:14:23.871001 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 17 00:14:23.871016 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 17 00:14:23.871024 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 17 00:14:23.871032 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 17 00:14:23.871042 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 17 00:14:23.871055 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 17 00:14:23.871063 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 17 00:14:23.871077 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 17 00:14:23.871089 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 17 00:14:23.871100 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 17 00:14:23.871108 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 17 00:14:23.871117 kernel: NX (Execute Disable) protection: active
Jan 17 00:14:23.871129 kernel: APIC: Static calls initialized
Jan 17 00:14:23.871139 kernel: efi: EFI v2.7 by EDK II
Jan 17 00:14:23.871148 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Jan 17 00:14:23.871156 kernel: SMBIOS 2.8 present.
Jan 17 00:14:23.871169 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 17 00:14:23.871229 kernel: Hypervisor detected: KVM
Jan 17 00:14:23.871245 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 00:14:23.871257 kernel: kvm-clock: using sched offset of 18965184108 cycles
Jan 17 00:14:23.871266 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 00:14:23.871275 kernel: tsc: Detected 2445.424 MHz processor
Jan 17 00:14:23.871284 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:14:23.871296 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:14:23.871306 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 17 00:14:23.871315 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 17 00:14:23.871324 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:14:23.871342 kernel: Using GB pages for direct mapping
Jan 17 00:14:23.871350 kernel: Secure boot disabled
Jan 17 00:14:23.871359 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:14:23.871369 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 17 00:14:23.871387 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 17 00:14:23.871397 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:14:23.871407 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:14:23.871424 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 17 00:14:23.871434 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:14:23.871443 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:14:23.871456 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:14:23.871467 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:14:23.871476 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 17 00:14:23.871486 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 17 00:14:23.871579 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 17 00:14:23.871591 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 17 00:14:23.871603 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 17 00:14:23.871613 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 17 00:14:23.871622 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 17 00:14:23.871630 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 17 00:14:23.871643 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 17 00:14:23.871654 kernel: No NUMA configuration found
Jan 17 00:14:23.871663 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 17 00:14:23.871679 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 17 00:14:23.871691 kernel: Zone ranges:
Jan 17 00:14:23.871701 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:14:23.871710 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 17 00:14:23.871720 kernel: Normal empty
Jan 17 00:14:23.871733 kernel: Movable zone start for each node
Jan 17 00:14:23.871743 kernel: Early memory node ranges
Jan 17 00:14:23.871752 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 17 00:14:23.871762 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 17 00:14:23.871775 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 17 00:14:23.871789 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 17 00:14:23.871800 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 17 00:14:23.871812 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 17 00:14:23.871821 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 17 00:14:23.871830 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:14:23.871841 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 17 00:14:23.871853 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 17 00:14:23.871862 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:14:23.871871 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 17 00:14:23.871889 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 17 00:14:23.871898 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 17 00:14:23.871908 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 00:14:23.871918 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 00:14:23.871931 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 00:14:23.871940 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 00:14:23.871949 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 00:14:23.871961 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:14:23.871973 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 00:14:23.871989 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 00:14:23.871998 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:14:23.872009 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 00:14:23.872021 kernel: TSC deadline timer available
Jan 17 00:14:23.872030 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 17 00:14:23.872039 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 00:14:23.872052 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 17 00:14:23.872062 kernel: kvm-guest: setup PV sched yield
Jan 17 00:14:23.872071 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 17 00:14:23.872088 kernel: Booting paravirtualized kernel on KVM
Jan 17 00:14:23.872099 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:14:23.872108 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 17 00:14:23.872118 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Jan 17 00:14:23.872131 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Jan 17 00:14:23.872141 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 17 00:14:23.872150 kernel: kvm-guest: PV spinlocks enabled
Jan 17 00:14:23.872160 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 00:14:23.872174 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:14:23.873015 kernel: random: crng init done
Jan 17 00:14:23.873029 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 00:14:23.873039 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:14:23.873048 kernel: Fallback order for Node 0: 0
Jan 17 00:14:23.873057 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 17 00:14:23.873067 kernel: Policy zone: DMA32
Jan 17 00:14:23.873078 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:14:23.873090 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 166124K reserved, 0K cma-reserved)
Jan 17 00:14:23.873109 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 17 00:14:23.873118 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:14:23.873127 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:14:23.873136 kernel: Dynamic Preempt: voluntary
Jan 17 00:14:23.873149 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:14:23.873174 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:14:23.873788 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 17 00:14:23.873800 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:14:23.873813 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:14:23.873825 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:14:23.873835 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:14:23.873845 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 17 00:14:23.873863 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 17 00:14:23.873875 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:14:23.873885 kernel: Console: colour dummy device 80x25
Jan 17 00:14:23.873894 kernel: printk: console [ttyS0] enabled
Jan 17 00:14:23.873906 kernel: ACPI: Core revision 20230628
Jan 17 00:14:23.873923 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 17 00:14:23.873933 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:14:23.873942 kernel: x2apic enabled
Jan 17 00:14:23.873956 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 00:14:23.873967 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 17 00:14:23.873977 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 17 00:14:23.873988 kernel: kvm-guest: setup PV IPIs
Jan 17 00:14:23.874000 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 00:14:23.874012 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 17 00:14:23.874026 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Jan 17 00:14:23.874038 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 17 00:14:23.874051 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 17 00:14:23.874061 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 17 00:14:23.874071 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:14:23.874082 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 00:14:23.874095 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 17 00:14:23.874105 kernel: Speculative Store Bypass: Vulnerable
Jan 17 00:14:23.874114 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 17 00:14:23.874132 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 17 00:14:23.874145 kernel: active return thunk: srso_alias_return_thunk
Jan 17 00:14:23.874154 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 17 00:14:23.874165 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 17 00:14:23.874225 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:14:23.874238 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:14:23.874250 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:14:23.874262 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:14:23.874277 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 00:14:23.874287 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 17 00:14:23.874299 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:14:23.874311 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:14:23.874321 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:14:23.874331 kernel: landlock: Up and running.
Jan 17 00:14:23.874342 kernel: SELinux: Initializing.
Jan 17 00:14:23.874356 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:14:23.874365 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:14:23.874380 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 17 00:14:23.874393 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 00:14:23.874405 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 00:14:23.874415 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 00:14:23.874425 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 17 00:14:23.874438 kernel: signal: max sigframe size: 1776
Jan 17 00:14:23.874450 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:14:23.874460 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:14:23.874470 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 00:14:23.874487 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:14:23.874499 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:14:23.874606 kernel: .... node #0, CPUs: #1 #2 #3
Jan 17 00:14:23.874616 kernel: smp: Brought up 1 node, 4 CPUs
Jan 17 00:14:23.874626 kernel: smpboot: Max logical packages: 1
Jan 17 00:14:23.874636 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Jan 17 00:14:23.874649 kernel: devtmpfs: initialized
Jan 17 00:14:23.874659 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:14:23.874669 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 17 00:14:23.874688 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 17 00:14:23.874700 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 17 00:14:23.874709 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 17 00:14:23.874719 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 17 00:14:23.874732 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:14:23.874745 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 17 00:14:23.874755 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:14:23.874764 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:14:23.874775 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:14:23.874794 kernel: audit: type=2000 audit(1768608858.406:1): state=initialized audit_enabled=0 res=1
Jan 17 00:14:23.874804 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:14:23.874813 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:14:23.874825 kernel: cpuidle: using governor menu
Jan 17 00:14:23.874837 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:14:23.874847 kernel: dca service started, version 1.12.1
Jan 17 00:14:23.874856 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 17 00:14:23.874868 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 17 00:14:23.874881 kernel: PCI: Using configuration type 1 for base access
Jan 17 00:14:23.874896 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 00:14:23.874906 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:14:23.874918 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:14:23.874930 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:14:23.874940 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:14:23.874950 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:14:23.874962 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:14:23.874974 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:14:23.874984 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 00:14:23.875002 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:14:23.875015 kernel: ACPI: Interpreter enabled
Jan 17 00:14:23.875025 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 17 00:14:23.875034 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:14:23.875045 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:14:23.875058 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 00:14:23.875068 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 17 00:14:23.875077 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:14:23.876859 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:14:23.877115 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 17 00:14:23.877363 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 17 00:14:23.877381 kernel: PCI host bridge to bus 0000:00
Jan 17 00:14:23.878297 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 00:14:23.878480 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 00:14:23.879019 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 00:14:23.879257 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 17 00:14:23.879425 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 17 00:14:23.879673 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 17 00:14:23.879842 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:14:23.880406 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 17 00:14:23.880735 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 17 00:14:23.880925 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 17 00:14:23.881106 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 17 00:14:23.881344 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 17 00:14:23.881642 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 17 00:14:23.881831 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 00:14:23.882110 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 17 00:14:23.882357 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 17 00:14:23.882666 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 17 00:14:23.882906 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 17 00:14:23.883241 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 17 00:14:23.883435 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 17 00:14:23.883717 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 17 00:14:23.883908 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 17 00:14:23.884250 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 00:14:23.884447 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 17 00:14:23.885732 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 17 00:14:23.885928 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 17 00:14:23.886111 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 17 00:14:23.886458 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 17 00:14:23.886765 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 17 00:14:23.887098 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 17 00:14:23.887361 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 17 00:14:23.887653 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 17 00:14:23.887916 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 17 00:14:23.888105 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 17 00:14:23.888123 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 00:14:23.888136 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 00:14:23.888147 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 00:14:23.888165 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 00:14:23.888233 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 17 00:14:23.888246 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 17 00:14:23.888259 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 17 00:14:23.888271 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 17 00:14:23.888281 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 17 00:14:23.888290 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 17 00:14:23.888300 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 17 00:14:23.888312 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 17 00:14:23.888329 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 17 00:14:23.888339 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 17 00:14:23.888349 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 17 00:14:23.888362 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 17 00:14:23.888373 kernel: iommu: Default domain type: Translated
Jan 17 00:14:23.888383 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:14:23.888393 kernel: efivars: Registered efivars operations
Jan 17 00:14:23.888406 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:14:23.888417 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 00:14:23.888431 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 17 00:14:23.888443 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 17 00:14:23.888455 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 17 00:14:23.888465 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 17 00:14:23.888782 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 17 00:14:23.888966 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 17 00:14:23.889142 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 00:14:23.889161 kernel: vgaarb: loaded
Jan 17 00:14:23.889172 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 17 00:14:23.889256 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 17 00:14:23.889267 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 00:14:23.889279 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:14:23.889291 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:14:23.889303 kernel: pnp: PnP ACPI init
Jan 17 00:14:23.890251 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 17 00:14:23.890273 kernel: pnp: PnP ACPI: found 6 devices
Jan 17 00:14:23.890284 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:14:23.890306 kernel: NET: Registered PF_INET protocol family
Jan 17 00:14:23.890316 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 00:14:23.890326 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 00:14:23.890336 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:14:23.890348 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:14:23.890360 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 00:14:23.890370 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 00:14:23.890379 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:14:23.890391 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:14:23.890408 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:14:23.890419 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:14:23.890779 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 17 00:14:23.890967 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 17 00:14:23.891166 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 00:14:23.895033 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 00:14:23.895275 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 00:14:23.895446 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 17 00:14:23.895722 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 17 00:14:23.895897 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 17 00:14:23.895914 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:14:23.895928 kernel: Initialise system trusted keyrings
Jan 17 00:14:23.895938 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 00:14:23.895948 kernel: Key type asymmetric registered
Jan 17 00:14:23.895962 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:14:23.895972 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 00:14:23.895989 kernel: io scheduler mq-deadline registered
Jan 17 00:14:23.896002 kernel: io scheduler kyber registered
Jan 17 00:14:23.896013 kernel: io scheduler bfq registered
Jan 17 00:14:23.896022 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 00:14:23.896035 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 17 00:14:23.896047 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 17 00:14:23.896057 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 17 00:14:23.896067 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:14:23.896080 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 00:14:23.896090 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 00:14:23.896106 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 00:14:23.896119 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 00:14:23.896600 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 17 00:14:23.896619 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 00:14:23.897172 kernel: rtc_cmos 00:04: registered as rtc0
Jan 17 00:14:23.897411 kernel: rtc_cmos 00:04: setting system clock to 2026-01-17T00:14:22 UTC (1768608862)
Jan 17 00:14:23.897667 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 17 00:14:23.897691 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 17 00:14:23.897705 kernel: efifb: probing for efifb
Jan 17 00:14:23.897716 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 17 00:14:23.897726 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 17 00:14:23.897738 kernel: efifb: scrolling: redraw
Jan 17 00:14:23.897750 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 17 00:14:23.897760 kernel: Console: switching to colour frame buffer device 100x37
Jan 17 00:14:23.897770 kernel: fb0: EFI VGA frame buffer device
Jan 17 00:14:23.897781 kernel: pstore: Using crash dump compression: deflate
Jan 17 00:14:23.897798 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 17 00:14:23.897808 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:14:23.897820 kernel: Segment Routing with IPv6
Jan 17 00:14:23.897833 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:14:23.897843 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:14:23.897852 kernel: Key type dns_resolver registered
Jan 17 00:14:23.897865 kernel: IPI shorthand broadcast: enabled
Jan 17 00:14:23.897903 kernel: sched_clock: Marking stable (4714025961, 635083943)->(6029265932, -680156028)
Jan 17 00:14:23.897919 kernel: registered taskstats version 1
Jan 17 00:14:23.897929 kernel: Loading compiled-in X.509 certificates
Jan 17 00:14:23.897947 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 00:14:23.897958 kernel: Key type .fscrypt registered
Jan 17 00:14:23.897968 kernel: Key type fscrypt-provisioning registered
Jan 17 00:14:23.897980 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:14:23.897993 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:14:23.898003 kernel: ima: No architecture policies found
Jan 17 00:14:23.898014 kernel: clk: Disabling unused clocks
Jan 17 00:14:23.898027 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 00:14:23.898041 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 00:14:23.898053 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 00:14:23.898066 kernel: Run /init as init process
Jan 17 00:14:23.898076 kernel: with arguments:
Jan 17 00:14:23.898087 kernel: /init
Jan 17 00:14:23.898101 kernel: with environment:
Jan 17 00:14:23.898111 kernel: HOME=/
Jan 17 00:14:23.898121 kernel: TERM=linux
Jan 17 00:14:23.898138 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:14:23.898157 systemd[1]: Detected virtualization kvm.
Jan 17 00:14:23.898171 systemd[1]: Detected architecture x86-64.
Jan 17 00:14:23.898243 systemd[1]: Running in initrd.
Jan 17 00:14:23.898254 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:14:23.898266 systemd[1]: Hostname set to <localhost>.
Jan 17 00:14:23.898280 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:14:23.898291 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:14:23.898308 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:14:23.898322 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:14:23.898334 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:14:23.898345 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:14:23.898360 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:14:23.898379 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:14:23.898394 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:14:23.898406 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:14:23.898417 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:14:23.898429 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:14:23.898442 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:14:23.898455 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:14:23.898474 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:14:23.898485 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:14:23.898496 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:14:23.898652 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:14:23.898666 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:14:23.898678 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:14:23.898692 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:14:23.898704 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:14:23.898721 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:14:23.898735 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:14:23.898748 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:14:23.898759 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:14:23.898771 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:14:23.898785 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:14:23.898798 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:14:23.898809 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:14:23.898902 systemd-journald[193]: Collecting audit messages is disabled.
Jan 17 00:14:23.898939 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:14:23.898953 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:14:23.898964 systemd-journald[193]: Journal started
Jan 17 00:14:23.898995 systemd-journald[193]: Runtime Journal (/run/log/journal/2dada9fa3a644c169e8366e46af23dc9) is 6.0M, max 48.3M, 42.2M free.
Jan 17 00:14:23.914091 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:14:23.929650 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:14:23.944460 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:14:24.019754 systemd-modules-load[194]: Inserted module 'overlay'
Jan 17 00:14:24.028452 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:14:24.039851 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:14:24.051909 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:14:24.093704 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:14:24.105138 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:14:24.159717 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:14:24.163033 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:14:24.200238 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:14:24.220280 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:14:24.243665 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:14:24.260454 dracut-cmdline[225]: dracut-dracut-053
Jan 17 00:14:24.271129 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:14:24.299589 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:14:24.307667 kernel: Bridge firewalling registered
Jan 17 00:14:24.307474 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jan 17 00:14:24.310405 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:14:24.357453 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:14:24.401717 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:14:24.432776 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:14:24.514330 kernel: SCSI subsystem initialized
Jan 17 00:14:24.526963 systemd-resolved[306]: Positive Trust Anchors:
Jan 17 00:14:24.527041 systemd-resolved[306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:14:24.527082 systemd-resolved[306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:14:24.537791 systemd-resolved[306]: Defaulting to hostname 'linux'.
Jan 17 00:14:24.540663 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:14:24.630252 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:14:24.611145 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:14:24.675861 kernel: iscsi: registered transport (tcp)
Jan 17 00:14:24.726763 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:14:24.727100 kernel: QLogic iSCSI HBA Driver
Jan 17 00:14:24.930453 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:14:24.949252 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:14:25.070123 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:14:25.070272 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:14:25.070293 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:14:25.276681 kernel: raid6: avx2x4 gen() 17034 MB/s
Jan 17 00:14:25.291633 kernel: raid6: avx2x2 gen() 13545 MB/s
Jan 17 00:14:25.315836 kernel: raid6: avx2x1 gen() 12388 MB/s
Jan 17 00:14:25.316282 kernel: raid6: using algorithm avx2x4 gen() 17034 MB/s
Jan 17 00:14:25.337394 kernel: raid6: .... xor() 4229 MB/s, rmw enabled
Jan 17 00:14:25.337830 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 00:14:25.389237 kernel: xor: automatically using best checksumming function avx
Jan 17 00:14:26.000870 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:14:26.027907 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:14:26.052843 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:14:26.088914 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Jan 17 00:14:26.097925 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:14:26.132860 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:14:26.155901 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Jan 17 00:14:26.254126 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:14:26.294385 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:14:26.545077 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:14:26.596754 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:14:26.682139 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:14:26.697026 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:14:26.711867 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:14:26.712043 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:14:26.739712 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:14:26.784799 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:14:26.805381 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:14:26.784952 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:14:26.802743 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:14:26.814041 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:14:26.824843 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:14:26.880090 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:14:26.942329 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 17 00:14:26.947936 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:14:27.021694 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 17 00:14:27.022283 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:14:27.022304 kernel: GPT:9289727 != 19775487
Jan 17 00:14:27.022321 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:14:27.022338 kernel: GPT:9289727 != 19775487
Jan 17 00:14:27.022354 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:14:27.022370 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:14:26.993916 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:14:27.054388 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:14:27.054759 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:14:27.130869 kernel: libata version 3.00 loaded.
Jan 17 00:14:27.134072 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:14:27.185430 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:14:27.196233 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:14:27.208672 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:14:27.246936 kernel: ahci 0000:00:1f.2: version 3.0
Jan 17 00:14:27.253689 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 17 00:14:27.266906 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:14:27.302362 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (467)
Jan 17 00:14:27.302401 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 17 00:14:27.302773 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (471)
Jan 17 00:14:27.302791 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 17 00:14:27.315679 kernel: scsi host0: ahci
Jan 17 00:14:27.322867 kernel: scsi host1: ahci
Jan 17 00:14:27.326802 kernel: scsi host2: ahci
Jan 17 00:14:27.327045 kernel: scsi host3: ahci
Jan 17 00:14:27.336826 kernel: scsi host4: ahci
Jan 17 00:14:27.337280 kernel: scsi host5: ahci
Jan 17 00:14:27.341677 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jan 17 00:14:27.357410 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 17 00:14:27.378859 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jan 17 00:14:27.378891 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jan 17 00:14:27.378915 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jan 17 00:14:27.378931 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jan 17 00:14:27.378946 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jan 17 00:14:27.412895 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:14:27.449958 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 17 00:14:27.462787 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 00:14:27.503099 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 17 00:14:27.532721 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 17 00:14:27.569758 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:14:27.600640 disk-uuid[582]: Primary Header is updated.
Jan 17 00:14:27.600640 disk-uuid[582]: Secondary Entries is updated.
Jan 17 00:14:27.600640 disk-uuid[582]: Secondary Header is updated.
Jan 17 00:14:27.627966 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:14:27.637637 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:14:27.683242 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:14:27.711589 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 17 00:14:27.711668 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 17 00:14:27.731367 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 17 00:14:27.741735 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 17 00:14:27.752857 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 17 00:14:27.769000 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 17 00:14:27.769068 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 17 00:14:27.781959 kernel: ata3.00: applying bridge limits
Jan 17 00:14:27.795340 kernel: ata3.00: configured for UDMA/100
Jan 17 00:14:27.795393 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 17 00:14:27.908925 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 17 00:14:27.910791 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 17 00:14:27.931668 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 17 00:14:28.677623 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:14:28.681082 disk-uuid[583]: The operation has completed successfully.
Jan 17 00:14:28.785894 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:14:28.786127 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:14:28.839105 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:14:28.863038 sh[603]: Success
Jan 17 00:14:28.934480 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 17 00:14:29.089742 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:14:29.121458 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:14:29.148080 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:14:29.212474 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:14:29.212609 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:14:29.212638 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:14:29.217075 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:14:29.221799 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:14:29.282772 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:14:29.287271 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:14:29.326865 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:14:29.331278 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:14:29.406709 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:14:29.406781 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:14:29.406797 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:14:29.425760 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:14:29.454082 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:14:29.477597 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:14:29.507060 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:14:29.538079 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:14:30.019138 ignition[720]: Ignition 2.19.0
Jan 17 00:14:30.019154 ignition[720]: Stage: fetch-offline
Jan 17 00:14:30.019287 ignition[720]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:14:30.019306 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:14:30.019648 ignition[720]: parsed url from cmdline: ""
Jan 17 00:14:30.019655 ignition[720]: no config URL provided
Jan 17 00:14:30.019664 ignition[720]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:14:30.019680 ignition[720]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:14:30.019868 ignition[720]: op(1): [started] loading QEMU firmware config module
Jan 17 00:14:30.019876 ignition[720]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 17 00:14:30.072874 ignition[720]: op(1): [finished] loading QEMU firmware config module
Jan 17 00:14:30.140497 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:14:30.194905 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:14:30.241165 ignition[720]: parsing config with SHA512: 07d15f4d0eedb24a9066e54cb57f6dc4b351339b4f8bf1c510b993cbddce23de5ee7e33df0a0e05ad18ba82887bdf69eb22e54d0c46ef8eed08fc4ad7d2c167b
Jan 17 00:14:30.287722 unknown[720]: fetched base config from "system"
Jan 17 00:14:30.287742 unknown[720]: fetched user config from "qemu"
Jan 17 00:14:30.297134 ignition[720]: fetch-offline: fetch-offline passed
Jan 17 00:14:30.300004 systemd-networkd[791]: lo: Link UP
Jan 17 00:14:30.297344 ignition[720]: Ignition finished successfully
Jan 17 00:14:30.300010 systemd-networkd[791]: lo: Gained carrier
Jan 17 00:14:30.302835 systemd-networkd[791]: Enumeration completed
Jan 17 00:14:30.305175 systemd-networkd[791]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:14:30.305181 systemd-networkd[791]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:14:30.307191 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:14:30.308650 systemd-networkd[791]: eth0: Link UP
Jan 17 00:14:30.308656 systemd-networkd[791]: eth0: Gained carrier
Jan 17 00:14:30.308666 systemd-networkd[791]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:14:30.326076 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:14:30.342665 systemd[1]: Reached target network.target - Network.
Jan 17 00:14:30.351925 systemd-networkd[791]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 17 00:14:30.354623 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 17 00:14:30.420592 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:14:30.507392 ignition[795]: Ignition 2.19.0
Jan 17 00:14:30.507457 ignition[795]: Stage: kargs
Jan 17 00:14:30.507853 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:14:30.507871 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:14:30.508844 ignition[795]: kargs: kargs passed
Jan 17 00:14:30.508893 ignition[795]: Ignition finished successfully
Jan 17 00:14:30.532639 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:14:30.559140 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:14:30.636935 ignition[803]: Ignition 2.19.0
Jan 17 00:14:30.636996 ignition[803]: Stage: disks
Jan 17 00:14:30.645257 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:14:30.638178 ignition[803]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:14:30.655162 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:14:30.638380 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:14:30.663675 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:14:30.639816 ignition[803]: disks: disks passed
Jan 17 00:14:30.677952 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:14:30.639882 ignition[803]: Ignition finished successfully
Jan 17 00:14:30.678164 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:14:30.682604 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:14:30.714623 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:14:30.852338 systemd-fsck[813]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 00:14:30.871071 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:14:30.899770 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:14:31.274757 kernel: EXT4-fs (vda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:14:31.277292 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:14:31.290120 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:14:31.318836 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:14:31.335693 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:14:31.360873 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (821)
Jan 17 00:14:31.336390 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 00:14:31.394881 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:14:31.394912 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:14:31.394926 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:14:31.336466 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:14:31.412052 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:14:31.336583 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:14:31.418822 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:14:31.428790 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:14:31.448805 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:14:31.524904 systemd-networkd[791]: eth0: Gained IPv6LL
Jan 17 00:14:31.559383 initrd-setup-root[845]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:14:31.586616 initrd-setup-root[852]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:14:31.602817 initrd-setup-root[859]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:14:31.623386 initrd-setup-root[866]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:14:31.943679 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:14:31.985794 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:14:31.997682 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:14:32.019796 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:14:32.033029 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:14:32.077359 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:14:32.187926 ignition[934]: INFO : Ignition 2.19.0
Jan 17 00:14:32.187926 ignition[934]: INFO : Stage: mount
Jan 17 00:14:32.206379 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:14:32.206379 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:14:32.206379 ignition[934]: INFO : mount: mount passed
Jan 17 00:14:32.206379 ignition[934]: INFO : Ignition finished successfully
Jan 17 00:14:32.203753 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:14:32.233110 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:14:32.280124 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:14:32.327019 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (947)
Jan 17 00:14:32.330336 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:14:32.340842 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:14:32.340889 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:14:32.374589 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:14:32.383419 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:14:32.485324 ignition[964]: INFO : Ignition 2.19.0
Jan 17 00:14:32.485324 ignition[964]: INFO : Stage: files
Jan 17 00:14:32.497098 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:14:32.497098 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:14:32.497098 ignition[964]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 00:14:32.497098 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 00:14:32.497098 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:14:32.538812 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:14:32.538812 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 00:14:32.538812 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:14:32.538812 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 17 00:14:32.538812 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 17 00:14:32.508396 unknown[964]: wrote ssh authorized keys file for user: core
Jan 17 00:14:32.645151 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 00:14:33.316429 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 17 00:14:33.326801 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 00:14:33.326801 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:14:33.326801 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:14:33.326801 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:14:33.326801 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:14:33.326801 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:14:33.326801 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:14:33.326801 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:14:33.326801 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:14:33.326801 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:14:33.326801 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 17 00:14:33.326801 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:14:33.326801 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:14:33.326801 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 17 00:14:33.740975 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 00:14:34.886700 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:14:34.886700 ignition[964]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 00:14:34.919850 ignition[964]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:14:34.943198 ignition[964]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:14:34.943198 ignition[964]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 00:14:34.943198 ignition[964]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 17 00:14:34.943198 ignition[964]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 00:14:34.943198 ignition[964]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 00:14:34.943198 ignition[964]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 17 00:14:34.943198 ignition[964]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 17 00:14:35.131138 ignition[964]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 00:14:35.155177 ignition[964]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 00:14:35.168133 ignition[964]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 17 00:14:35.168133 ignition[964]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:14:35.186070 ignition[964]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:14:35.201261 ignition[964]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:14:35.211465 ignition[964]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:14:35.211465 ignition[964]: INFO : files: files passed Jan 17 00:14:35.211465 ignition[964]: INFO : Ignition finished successfully Jan 17 00:14:35.231773 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:14:35.270096 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:14:35.289279 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:14:35.300782 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jan 17 00:14:35.316966 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 00:14:35.350062 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 17 00:14:35.355739 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:14:35.355739 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:14:35.391824 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:14:35.376312 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:14:35.408388 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 00:14:35.470088 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 00:14:35.644768 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 00:14:35.660203 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 00:14:35.684841 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 00:14:35.696399 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 00:14:35.711621 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 00:14:35.747412 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 00:14:35.805115 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:14:35.826806 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 00:14:35.886910 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:14:35.895112 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:14:35.917159 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 00:14:35.935383 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 00:14:35.941261 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:14:35.968201 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 00:14:35.986030 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 00:14:35.996584 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 00:14:35.996837 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:14:36.012848 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 00:14:36.045275 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 00:14:36.077794 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:14:36.103593 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 00:14:36.107822 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 00:14:36.111749 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 00:14:36.130392 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 00:14:36.132628 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:14:36.175493 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:14:36.186627 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:14:36.209855 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 00:14:36.212623 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:14:36.232302 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 00:14:36.232691 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:14:36.252769 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 00:14:36.252987 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:14:36.274111 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 00:14:36.303334 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 00:14:36.304648 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:14:36.380749 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 00:14:36.396960 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 00:14:36.416042 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 00:14:36.416322 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:14:36.437912 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 00:14:36.454313 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:14:36.478915 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 00:14:36.479379 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:14:36.498325 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 00:14:36.498789 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 00:14:36.586737 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 00:14:36.590649 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 00:14:36.606812 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 00:14:36.607068 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:14:36.619379 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 00:14:36.619721 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:14:36.639786 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 00:14:36.639981 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 00:14:36.706986 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 00:14:39.243681 ignition[1018]: INFO : Ignition 2.19.0
Jan 17 00:14:39.266022 ignition[1018]: INFO : Stage: umount
Jan 17 00:14:39.266022 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:14:39.294167 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 00:14:39.296988 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 00:14:39.309464 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:14:39.309464 ignition[1018]: INFO : umount: umount passed
Jan 17 00:14:39.309464 ignition[1018]: INFO : Ignition finished successfully
Jan 17 00:14:39.310670 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 00:14:39.325494 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 00:14:39.346131 systemd[1]: Stopped target network.target - Network.
Jan 17 00:14:39.393358 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 00:14:39.394753 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 00:14:39.413788 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 00:14:39.414186 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 00:14:39.435902 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 00:14:39.436104 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 00:14:39.444024 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 00:14:39.444336 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 00:14:39.494162 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 00:14:39.494398 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 00:14:39.529180 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 00:14:39.554402 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 00:14:39.609656 systemd-networkd[791]: eth0: DHCPv6 lease lost
Jan 17 00:14:39.628778 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 00:14:39.629303 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 00:14:39.653930 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 00:14:39.654293 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 00:14:39.703097 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 00:14:39.703334 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:14:39.731910 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 00:14:39.732143 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 00:14:39.732312 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:14:39.744655 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 00:14:39.744782 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:14:39.786933 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 00:14:39.787139 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:14:39.797035 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 00:14:39.797183 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:14:39.807593 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:14:39.835478 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 00:14:39.835892 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:14:39.847295 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 00:14:39.847408 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:14:39.853596 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 00:14:39.853659 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:14:39.877496 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 00:14:39.877736 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:14:39.880572 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 00:14:39.880630 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:14:39.883588 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:14:39.883755 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:14:39.976497 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 00:14:39.989847 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 00:14:39.994750 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:14:40.005806 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 17 00:14:40.005927 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:14:40.025818 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 00:14:40.026021 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:14:40.044307 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:14:40.050018 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:14:40.086209 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 00:14:40.086652 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 00:14:40.113747 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 00:14:40.115791 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 00:14:40.119798 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 00:14:40.151893 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 00:14:40.187065 systemd[1]: Switching root.
Jan 17 00:14:40.233471 systemd-journald[193]: Journal stopped
Jan 17 00:14:43.047656 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Jan 17 00:14:43.047758 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 00:14:43.047777 kernel: SELinux: policy capability open_perms=1
Jan 17 00:14:43.047792 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 00:14:43.047813 kernel: SELinux: policy capability always_check_network=0
Jan 17 00:14:43.047828 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 00:14:43.047843 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 00:14:43.047866 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 00:14:43.047881 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 00:14:43.047896 kernel: audit: type=1403 audit(1768608880.611:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 00:14:43.047912 systemd[1]: Successfully loaded SELinux policy in 122.328ms.
Jan 17 00:14:43.047943 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.101ms.
Jan 17 00:14:43.047964 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:14:43.047981 systemd[1]: Detected virtualization kvm.
Jan 17 00:14:43.047997 systemd[1]: Detected architecture x86-64.
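
"Switching root" hands control to the real root filesystem: the initrd journal is closed (journald[193] stops), and the SELinux policy load plus the buffered kernel lines above are flushed once journald restarts in the new root (it reappears as journald[1146] below). Illustrative commands for reviewing this handoff on a running machine, not taken from the log itself:

    journalctl --list-boots                  # confirm which boot these entries belong to
    journalctl -b -o short-precise | less    # microsecond timestamps, as in this capture
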
Jan 17 00:14:43.048013 systemd[1]: Detected first boot.
Jan 17 00:14:43.048030 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:14:43.048046 zram_generator::config[1062]: No configuration found.
Jan 17 00:14:43.048064 systemd[1]: Populated /etc with preset unit settings.
Jan 17 00:14:43.048079 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 17 00:14:43.048106 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 17 00:14:43.048122 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 17 00:14:43.048139 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 00:14:43.048156 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 00:14:43.048172 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 00:14:43.048188 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 00:14:43.048205 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 00:14:43.048221 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 00:14:43.048318 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 00:14:43.048337 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 00:14:43.048353 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:14:43.048370 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:14:43.048386 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 00:14:43.048402 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 00:14:43.048418 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 00:14:43.048434 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:14:43.048450 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 17 00:14:43.048470 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:14:43.048486 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 17 00:14:43.048563 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 17 00:14:43.048582 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:14:43.048599 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 00:14:43.048615 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:14:43.048631 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:14:43.048647 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:14:43.048667 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:14:43.048683 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 00:14:43.048699 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 00:14:43.048715 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:14:43.048770 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
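
"Detected first boot" plus "Initializing machine ID from VM UUID" means /etc/machine-id started out empty and was seeded from the hypervisor, and "Populated /etc with preset unit settings" is the preset pass that enables or disables units according to the shipped preset files. Hedged ways to observe the result after boot:

    cat /etc/machine-id                               # transient until systemd-machine-id-commit runs (below)
    systemctl list-unit-files --state=enabled | head  # units the preset pass left enabled
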
Jan 17 00:14:43.048787 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:14:43.048803 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 00:14:43.048818 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 00:14:43.048835 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 00:14:43.048856 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 00:14:43.048872 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:14:43.048888 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 00:14:43.048905 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 00:14:43.048921 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 00:14:43.048937 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 00:14:43.048954 systemd[1]: Reached target machines.target - Containers.
Jan 17 00:14:43.048970 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 00:14:43.048990 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:14:43.049006 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:14:43.049022 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 00:14:43.049038 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:14:43.049054 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:14:43.049070 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:14:43.049086 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 00:14:43.049102 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:14:43.049151 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 00:14:43.049171 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 17 00:14:43.049188 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 17 00:14:43.049204 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 17 00:14:43.049220 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 17 00:14:43.049290 kernel: fuse: init (API version 7.39)
Jan 17 00:14:43.049309 kernel: loop: module loaded
Jan 17 00:14:43.049325 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:14:43.049342 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:14:43.049358 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 00:14:43.049378 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 00:14:43.049422 systemd-journald[1146]: Collecting audit messages is disabled.
Jan 17 00:14:43.049450 systemd-journald[1146]: Journal started
Jan 17 00:14:43.049478 systemd-journald[1146]: Runtime Journal (/run/log/journal/2dada9fa3a644c169e8366e46af23dc9) is 6.0M, max 48.3M, 42.2M free.
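
At this point journald is running with only the volatile runtime journal under /run (6.0M used, capped at 48.3M); the flush to persistent /var/log/journal happens a little later via systemd-journal-flush.service. Illustrative ways to inspect and tune these limits (the values shown are examples, not this system's settings):

    journalctl --disk-usage
    # /etc/systemd/journald.conf
    #   [Journal]
    #   RuntimeMaxUse=48M
    #   SystemMaxUse=195M
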
Jan 17 00:14:41.747132 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 00:14:41.782321 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 17 00:14:41.784096 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 17 00:14:41.784845 systemd[1]: systemd-journald.service: Consumed 2.271s CPU time.
Jan 17 00:14:43.072905 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:14:43.087137 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 17 00:14:43.087200 systemd[1]: Stopped verity-setup.service.
Jan 17 00:14:43.105599 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:14:43.113635 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:14:43.126130 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 00:14:43.133414 kernel: ACPI: bus type drm_connector registered
Jan 17 00:14:43.140447 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 00:14:43.152030 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 00:14:43.158111 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 00:14:43.169463 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 00:14:43.175126 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 00:14:43.184082 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 00:14:43.190015 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:14:43.197631 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 00:14:43.197923 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 00:14:43.207372 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:14:43.207694 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:14:43.220933 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:14:43.221219 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:14:43.227069 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:14:43.227415 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:14:43.243092 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 00:14:43.243415 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 00:14:43.253593 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:14:43.254980 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:14:43.266635 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:14:43.272470 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 00:14:43.283424 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 00:14:43.328949 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:14:43.341693 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 00:14:43.383932 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 00:14:43.395929 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
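
The modprobe@<instance>.service units above are one template expanded per module: the instance name becomes the modprobe argument, which is why the kernel's "fuse: init" and "loop: module loaded" lines appear alongside the corresponding Finished messages. A minimal sketch of such a template unit, paraphrased from systemd's stock file rather than copied from this system:

    # modprobe@.service (illustrative)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    ExecStart=-/usr/sbin/modprobe -abq %i
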
Jan 17 00:14:43.403790 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 00:14:43.403901 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:14:43.410373 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 00:14:43.419112 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 00:14:43.428442 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 00:14:43.434914 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:14:43.438217 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 00:14:43.445763 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 00:14:43.452377 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:14:43.454637 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 00:14:43.461382 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:14:43.464411 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:14:43.469041 systemd-journald[1146]: Time spent on flushing to /var/log/journal/2dada9fa3a644c169e8366e46af23dc9 is 25.710ms for 986 entries.
Jan 17 00:14:43.469041 systemd-journald[1146]: System Journal (/var/log/journal/2dada9fa3a644c169e8366e46af23dc9) is 8.0M, max 195.6M, 187.6M free.
Jan 17 00:14:43.476780 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 00:14:43.492735 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:14:43.512831 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 00:14:43.526408 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 00:14:43.532410 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 00:14:43.538192 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 00:14:43.544204 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 00:14:43.544226 systemd-journald[1146]: Received client request to flush runtime journal.
Jan 17 00:14:43.549652 kernel: loop0: detected capacity change from 0 to 140768
Jan 17 00:14:43.552209 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 00:14:43.583769 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 00:14:43.617946 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 00:14:43.635096 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 00:14:43.638811 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:14:43.657776 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Jan 17 00:14:43.657802 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Jan 17 00:14:43.658849 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 17 00:14:43.668175 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:14:43.692131 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 00:14:43.937591 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 00:14:43.938843 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 00:14:43.955766 kernel: loop1: detected capacity change from 0 to 229808
Jan 17 00:14:44.073791 kernel: loop2: detected capacity change from 0 to 142488
Jan 17 00:14:44.143041 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 00:14:44.203141 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:14:44.314814 kernel: loop3: detected capacity change from 0 to 140768
Jan 17 00:14:44.337616 kernel: loop4: detected capacity change from 0 to 229808
Jan 17 00:14:44.385974 kernel: loop5: detected capacity change from 0 to 142488
Jan 17 00:14:44.454709 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Jan 17 00:14:44.454735 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Jan 17 00:14:44.477581 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:14:44.479002 (sd-merge)[1204]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 17 00:14:44.480750 (sd-merge)[1204]: Merged extensions into '/usr'.
Jan 17 00:14:44.516096 systemd[1]: Reloading requested from client PID 1177 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 00:14:44.516344 systemd[1]: Reloading...
Jan 17 00:14:44.660902 zram_generator::config[1228]: No configuration found.
Jan 17 00:14:44.965900 kernel: hrtimer: interrupt took 8888688 ns
Jan 17 00:14:45.490041 ldconfig[1172]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 00:14:45.598367 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:14:45.817800 systemd[1]: Reloading finished in 1300 ms.
Jan 17 00:14:45.891721 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 00:14:45.898135 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 00:14:45.928606 systemd[1]: Starting ensure-sysext.service...
Jan 17 00:14:45.945939 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:14:45.954018 systemd[1]: Reloading requested from client PID 1269 ('systemctl') (unit ensure-sysext.service)...
Jan 17 00:14:45.954074 systemd[1]: Reloading...
Jan 17 00:14:46.206442 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 00:14:46.208152 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 00:14:46.212812 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 00:14:46.213872 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
Jan 17 00:14:46.214081 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
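
The (sd-merge) lines record systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr (the loop0..loop5 capacity changes above are their backing loop devices), after which PID 1 reloads so the newly visible unit files take effect. Hedged commands for inspecting merged extensions on such a system:

    systemd-sysext status                    # which hierarchies are merged, from which images
    ls /etc/extensions /var/lib/extensions   # usual drop-in locations for extension images
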
Jan 17 00:14:46.231312 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:14:46.231931 systemd-tmpfiles[1270]: Skipping /boot
Jan 17 00:14:46.305959 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:14:46.308994 systemd-tmpfiles[1270]: Skipping /boot
Jan 17 00:14:46.363785 zram_generator::config[1299]: No configuration found.
Jan 17 00:14:46.601123 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:14:46.679394 systemd[1]: Reloading finished in 724 ms.
Jan 17 00:14:46.707739 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 00:14:46.733835 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:14:46.785319 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 00:14:46.803098 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 00:14:46.822989 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 00:14:46.837961 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:14:46.874863 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:14:46.901009 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 00:14:46.907349 augenrules[1356]: No rules
Jan 17 00:14:46.915137 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 00:14:46.929037 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 17 00:14:46.949847 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:14:46.950236 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:14:46.967957 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:14:46.977985 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:14:46.985338 systemd-udevd[1353]: Using default interface naming scheme 'v255'.
Jan 17 00:14:46.986827 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:14:46.993813 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:14:46.997441 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 17 00:14:47.007804 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 17 00:14:47.013360 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:14:47.027722 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 17 00:14:47.040148 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:14:47.040486 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:14:47.047829 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
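
The "Duplicate line for path ..." warnings above are benign: two tmpfiles.d fragments claim the same path and systemd-tmpfiles keeps the first occurrence. Each line uses the type/path/mode/user/group/age syntax; an illustrative fragment of the kind being parsed (not the actual Flatcar files):

    # /usr/lib/tmpfiles.d/example.conf
    d /var/log/journal  2755 root systemd-journal -
    d /var/lib/systemd  0755 root root            -
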
Jan 17 00:14:47.048351 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:14:47.058406 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:14:47.060054 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:14:47.069098 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 17 00:14:47.079909 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:14:47.088467 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 17 00:14:47.133412 systemd[1]: Finished ensure-sysext.service.
Jan 17 00:14:47.151765 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:14:47.152102 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:14:47.159999 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:14:47.172970 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:14:47.185080 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:14:47.196973 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:14:47.208488 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:14:47.218813 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:14:47.236491 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 17 00:14:47.243366 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 00:14:47.243464 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:14:47.244332 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 17 00:14:47.252163 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:14:47.252673 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:14:47.260355 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:14:47.261010 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:14:47.268104 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:14:47.268814 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:14:47.277077 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:14:47.277586 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:14:47.291594 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1376)
Jan 17 00:14:47.305714 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 17 00:14:47.312466 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 17 00:14:47.317107 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:14:47.317380 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:14:47.341772 kernel: ACPI: button: Power Button [PWRF]
Jan 17 00:14:47.633374 systemd-resolved[1349]: Positive Trust Anchors:
Jan 17 00:14:47.636444 systemd-resolved[1349]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:14:47.636488 systemd-resolved[1349]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:14:47.779387 systemd-resolved[1349]: Defaulting to hostname 'linux'.
Jan 17 00:14:47.781152 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 00:14:49.333471 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:14:49.341095 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:14:49.408448 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 17 00:14:49.490242 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 17 00:14:49.499630 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 17 00:14:49.499959 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 17 00:14:49.500434 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 17 00:14:49.500920 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 17 00:14:49.526672 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 17 00:14:49.641717 systemd-networkd[1401]: lo: Link UP
Jan 17 00:14:49.642742 systemd-networkd[1401]: lo: Gained carrier
Jan 17 00:14:49.651740 systemd-networkd[1401]: Enumeration completed
Jan 17 00:14:49.652244 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:14:49.666495 systemd[1]: Reached target network.target - Network.
Jan 17 00:14:49.684193 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:14:49.684210 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:14:49.696576 systemd-networkd[1401]: eth0: Link UP
Jan 17 00:14:49.696670 systemd-networkd[1401]: eth0: Gained carrier
Jan 17 00:14:49.696765 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:14:49.698458 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 17 00:14:49.718229 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 17 00:14:49.734192 systemd-networkd[1401]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 17 00:14:49.741471 systemd-timesyncd[1405]: Network configuration changed, trying to establish connection.
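
systemd-resolved seeded DNSSEC validation with the built-in root trust anchor (the ". IN DS 20326 8 2 ..." record above) and with negative anchors for private zones such as 10.in-addr.arpa, for which no DNSSEC proof should be demanded; local overrides can be dropped into /etc/dnssec-trust-anchors.d/ as *.positive and *.negative files. Illustrative checks once the stub resolver is up:

    resolvectl status | head       # per-link DNS servers and DNSSEC setting
    resolvectl query example.com   # hypothetical lookup through 127.0.0.53
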
Jan 17 00:14:50.628909 systemd-timesyncd[1405]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 17 00:14:50.629096 systemd-timesyncd[1405]: Initial clock synchronization to Sat 2026-01-17 00:14:50.628764 UTC.
Jan 17 00:14:50.631508 systemd[1]: Reached target time-set.target - System Time Set.
Jan 17 00:14:50.632130 systemd-resolved[1349]: Clock change detected. Flushing caches.
Jan 17 00:14:50.768562 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:14:50.791757 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:14:50.792252 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:14:50.805167 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:14:51.590073 kernel: mousedev: PS/2 mouse device common for all mice
Jan 17 00:14:51.895862 kernel: kvm_amd: TSC scaling supported
Jan 17 00:14:51.896235 kernel: kvm_amd: Nested Virtualization enabled
Jan 17 00:14:51.896265 kernel: kvm_amd: Nested Paging enabled
Jan 17 00:14:51.899820 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 17 00:14:51.904819 kernel: kvm_amd: PMU virtualization is disabled
Jan 17 00:14:51.923827 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:14:52.399206 kernel: EDAC MC: Ver: 3.0.0
Jan 17 00:14:52.451031 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 17 00:14:52.476421 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 17 00:14:52.506618 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 00:14:52.571461 systemd-networkd[1401]: eth0: Gained IPv6LL
Jan 17 00:14:52.575612 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 17 00:14:52.586083 systemd[1]: Reached target network-online.target - Network is Online.
Jan 17 00:14:52.626257 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 17 00:14:52.638890 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:14:52.652445 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:14:52.659140 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 17 00:14:52.666804 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 17 00:14:52.675093 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 17 00:14:52.682499 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 17 00:14:52.691638 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 17 00:14:52.700287 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 17 00:14:52.701545 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:14:52.706825 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:14:52.714598 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 17 00:14:52.722540 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 17 00:14:52.753853 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
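
timesyncd reached the DHCP-provided NTP server at 10.0.0.1 and stepped the clock, which is why resolved reports "Clock change detected" and why the journal timestamps jump slightly forward at this point. The server choice can be pinned in timesyncd.conf; an illustrative fragment plus a status check (values are examples):

    # /etc/systemd/timesyncd.conf
    [Time]
    NTP=10.0.0.1
    FallbackNTP=pool.ntp.org

    timedatectl timesync-status   # active server, stratum, poll interval after boot
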
Jan 17 00:14:52.766830 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 17 00:14:52.778283 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 17 00:14:52.787546 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:14:52.794115 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:14:52.799177 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 17 00:14:52.800561 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 17 00:14:52.801150 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 00:14:52.803312 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 17 00:14:52.813083 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 17 00:14:52.820914 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 17 00:14:52.829589 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 17 00:14:52.874226 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 17 00:14:52.893387 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 17 00:14:52.901052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:14:52.954500 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 17 00:14:52.955797 jq[1445]: false
Jan 17 00:14:52.966884 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 17 00:14:52.970339 extend-filesystems[1446]: Found loop3
Jan 17 00:14:52.970339 extend-filesystems[1446]: Found loop4
Jan 17 00:14:52.970339 extend-filesystems[1446]: Found loop5
Jan 17 00:14:52.970339 extend-filesystems[1446]: Found sr0
Jan 17 00:14:52.970339 extend-filesystems[1446]: Found vda
Jan 17 00:14:52.970339 extend-filesystems[1446]: Found vda1
Jan 17 00:14:52.970339 extend-filesystems[1446]: Found vda2
Jan 17 00:14:52.970339 extend-filesystems[1446]: Found vda3
Jan 17 00:14:52.970339 extend-filesystems[1446]: Found usr
Jan 17 00:14:52.970339 extend-filesystems[1446]: Found vda4
Jan 17 00:14:52.970339 extend-filesystems[1446]: Found vda6
Jan 17 00:14:52.970339 extend-filesystems[1446]: Found vda7
Jan 17 00:14:52.970339 extend-filesystems[1446]: Found vda9
Jan 17 00:14:52.970339 extend-filesystems[1446]: Checking size of /dev/vda9
Jan 17 00:14:53.003336 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 17 00:14:53.055298 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 17 00:14:53.063607 dbus-daemon[1444]: [system] SELinux support is enabled
Jan 17 00:14:53.069279 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 17 00:14:53.101487 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1380)
Jan 17 00:14:53.101527 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 17 00:14:53.101547 extend-filesystems[1446]: Resized partition /dev/vda9
Jan 17 00:14:53.114098 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 17 00:14:53.119648 extend-filesystems[1462]: resize2fs 1.47.1 (20-May-2024)
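
extend-filesystems enumerates the block devices it found (vda1..vda9 above) and grows the root filesystem to fill its partition; the kernel's "EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks" is that online resize being issued while / stays mounted. An equivalent manual sequence on such a layout, with device names assumed from this log:

    lsblk -o NAME,LABEL,SIZE,FSTYPE /dev/vda   # confirm vda9 carries the ext4 root
    resize2fs /dev/vda9                        # online-grow ext4 to fill the partition
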
Jan 17 00:14:53.131823 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:14:53.132650 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:14:53.138389 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:14:53.166445 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:14:53.177379 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:14:53.188069 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:14:53.200323 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:14:53.200876 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:14:53.206696 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:14:53.207310 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:14:53.210258 update_engine[1471]: I20260117 00:14:53.209928 1471 main.cc:92] Flatcar Update Engine starting Jan 17 00:14:53.213370 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:14:53.216070 update_engine[1471]: I20260117 00:14:53.215942 1471 update_check_scheduler.cc:74] Next update check in 2m36s Jan 17 00:14:53.225652 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:14:53.226195 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:14:53.235459 jq[1474]: true Jan 17 00:14:53.239075 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 00:14:53.296217 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 00:14:53.296217 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 00:14:53.296217 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 17 00:14:53.296118 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:14:53.325047 jq[1480]: true Jan 17 00:14:53.325310 extend-filesystems[1446]: Resized filesystem in /dev/vda9 Jan 17 00:14:53.296444 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:14:53.340944 (ntainerd)[1481]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:14:53.359787 systemd-logind[1466]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 00:14:53.359824 systemd-logind[1466]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:14:53.360463 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 00:14:53.361271 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 00:14:53.361872 systemd-logind[1466]: New seat seat0. Jan 17 00:14:53.370455 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:14:53.385907 dbus-daemon[1444]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 00:14:53.390510 tar[1479]: linux-amd64/LICENSE Jan 17 00:14:53.390510 tar[1479]: linux-amd64/helm Jan 17 00:14:53.406875 systemd[1]: Started update-engine.service - Update Engine. 
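extend-filesystems just grew the root filesystem on /dev/vda9 online, from 553472 to 1864699 4k blocks, with resize2fs running while / stayed mounted. The equivalent manual sequence is roughly the following; a sketch, assuming cloud-utils' growpart is available for the partition step:

    # Hypothetical re-run: grow partition 9 of /dev/vda into the free space.
    growpart /dev/vda 9

    # ext4 supports online growth, so the mounted root resizes in place.
    resize2fs /dev/vda9

    # Confirm the new size matches the "1864699 (4k) blocks" line in the log.
    dumpe2fs -h /dev/vda9 | grep -i 'block count'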
Jan 17 00:14:53.437584 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:14:53.438686 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:14:53.439091 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:14:53.493292 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:14:53.493633 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:14:53.711039 sshd_keygen[1475]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:14:53.738531 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:14:53.863678 bash[1520]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:14:53.866155 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:14:53.876780 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 00:14:53.908586 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:14:53.936499 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:14:54.279648 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:14:54.283385 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:14:54.302772 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:14:54.591922 locksmithd[1508]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:14:54.685637 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:14:54.800126 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:14:54.825023 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:14:54.844290 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:14:57.078700 containerd[1481]: time="2026-01-17T00:14:57.078269816Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:14:57.530546 containerd[1481]: time="2026-01-17T00:14:57.530225303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:14:57.553201 containerd[1481]: time="2026-01-17T00:14:57.552884436Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:14:57.553382 containerd[1481]: time="2026-01-17T00:14:57.553357709Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:14:57.553598 containerd[1481]: time="2026-01-17T00:14:57.553574765Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:14:57.554161 containerd[1481]: time="2026-01-17T00:14:57.554132556Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
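update-engine and locksmithd are Flatcar's update and reboot managers: the "Next update check in 2m36s" line above comes from update_engine, and locksmithd's strategy="reboot" line is its reboot policy. Both ship simple status clients; a sketch, hedged in that exact flags can vary between releases:

    # Ask update_engine for its state (UPDATE_STATUS_IDLE in this log).
    update_engine_client -status

    # Show locksmith's reboot strategy and any held reboot locks.
    locksmithctl status

    # sshd-keygen generated RSA/ECDSA/ED25519 host keys in one pass; the
    # manual equivalent is:
    ssh-keygen -A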
type=io.containerd.warning.v1 Jan 17 00:14:57.554253 containerd[1481]: time="2026-01-17T00:14:57.554230479Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:14:57.554430 containerd[1481]: time="2026-01-17T00:14:57.554400988Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:14:57.554528 containerd[1481]: time="2026-01-17T00:14:57.554505523Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:14:57.559112 containerd[1481]: time="2026-01-17T00:14:57.559079053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:14:57.559200 containerd[1481]: time="2026-01-17T00:14:57.559178429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:14:57.559304 containerd[1481]: time="2026-01-17T00:14:57.559281582Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:14:57.559378 containerd[1481]: time="2026-01-17T00:14:57.559357874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:14:57.559708 containerd[1481]: time="2026-01-17T00:14:57.559601368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:14:57.560663 containerd[1481]: time="2026-01-17T00:14:57.560519062Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:14:57.561188 containerd[1481]: time="2026-01-17T00:14:57.560920202Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:14:57.561188 containerd[1481]: time="2026-01-17T00:14:57.561086131Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:14:57.561610 containerd[1481]: time="2026-01-17T00:14:57.561375552Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:14:57.561610 containerd[1481]: time="2026-01-17T00:14:57.561582719Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:14:57.576275 containerd[1481]: time="2026-01-17T00:14:57.576219809Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:14:57.576670 containerd[1481]: time="2026-01-17T00:14:57.576607423Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:14:57.576877 containerd[1481]: time="2026-01-17T00:14:57.576835078Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:14:57.577052 containerd[1481]: time="2026-01-17T00:14:57.576949761Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 17 00:14:57.577151 containerd[1481]: time="2026-01-17T00:14:57.577127694Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:14:57.577472 containerd[1481]: time="2026-01-17T00:14:57.577449435Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:14:57.578367 containerd[1481]: time="2026-01-17T00:14:57.578342753Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:14:57.578647 containerd[1481]: time="2026-01-17T00:14:57.578625461Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:14:57.578796 containerd[1481]: time="2026-01-17T00:14:57.578716621Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:14:57.578871 containerd[1481]: time="2026-01-17T00:14:57.578853938Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:14:57.579038 containerd[1481]: time="2026-01-17T00:14:57.578933597Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:14:57.579218 containerd[1481]: time="2026-01-17T00:14:57.579194463Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:14:57.579336 containerd[1481]: time="2026-01-17T00:14:57.579317863Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:14:57.579418 containerd[1481]: time="2026-01-17T00:14:57.579396400Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:14:57.579524 containerd[1481]: time="2026-01-17T00:14:57.579501276Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:14:57.579607 containerd[1481]: time="2026-01-17T00:14:57.579587036Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:14:57.579799 containerd[1481]: time="2026-01-17T00:14:57.579714765Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:14:57.579893 containerd[1481]: time="2026-01-17T00:14:57.579870716Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:14:57.580158 containerd[1481]: time="2026-01-17T00:14:57.580130191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:14:57.580280 containerd[1481]: time="2026-01-17T00:14:57.580258821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:14:57.580378 containerd[1481]: time="2026-01-17T00:14:57.580356553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:14:57.580471 containerd[1481]: time="2026-01-17T00:14:57.580449968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:14:57.580616 containerd[1481]: time="2026-01-17T00:14:57.580594287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jan 17 00:14:57.580692 containerd[1481]: time="2026-01-17T00:14:57.580676010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:14:57.580841 containerd[1481]: time="2026-01-17T00:14:57.580820961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:14:57.581077 containerd[1481]: time="2026-01-17T00:14:57.580931648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:14:57.581180 containerd[1481]: time="2026-01-17T00:14:57.581158792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:14:57.581268 containerd[1481]: time="2026-01-17T00:14:57.581247056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:14:57.581361 containerd[1481]: time="2026-01-17T00:14:57.581339959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:14:57.581574 containerd[1481]: time="2026-01-17T00:14:57.581479871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:14:57.581682 containerd[1481]: time="2026-01-17T00:14:57.581660058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:14:57.581859 containerd[1481]: time="2026-01-17T00:14:57.581837659Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:14:57.582128 containerd[1481]: time="2026-01-17T00:14:57.582100570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:14:57.582313 containerd[1481]: time="2026-01-17T00:14:57.582289613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:14:57.582396 containerd[1481]: time="2026-01-17T00:14:57.582375363Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:14:57.582840 containerd[1481]: time="2026-01-17T00:14:57.582812309Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:14:57.582944 containerd[1481]: time="2026-01-17T00:14:57.582918187Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:14:57.583119 containerd[1481]: time="2026-01-17T00:14:57.583096830Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:14:57.583234 containerd[1481]: time="2026-01-17T00:14:57.583209490Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:14:57.583377 containerd[1481]: time="2026-01-17T00:14:57.583354952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:14:57.583545 containerd[1481]: time="2026-01-17T00:14:57.583522165Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:14:57.583625 containerd[1481]: time="2026-01-17T00:14:57.583606162Z" level=info msg="NRI interface is disabled by configuration." 
Jan 17 00:14:57.583718 containerd[1481]: time="2026-01-17T00:14:57.583694777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 00:14:57.585349 containerd[1481]: time="2026-01-17T00:14:57.585235955Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:14:57.585782 containerd[1481]: time="2026-01-17T00:14:57.585707595Z" level=info msg="Connect containerd service" Jan 17 00:14:57.586057 containerd[1481]: time="2026-01-17T00:14:57.586032052Z" level=info msg="using legacy CRI server" Jan 17 00:14:57.586132 containerd[1481]: time="2026-01-17T00:14:57.586114816Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:14:57.586653 containerd[1481]: time="2026-01-17T00:14:57.586623736Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:14:57.596866 containerd[1481]: time="2026-01-17T00:14:57.593687970Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
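The giant "Start cri plugin with config" dump above is containerd echoing its merged CRI configuration; the interesting detail is Options:map[SystemdCgroup:true] on the runc runtime, i.e. runc drives cgroups through systemd, matching the cgroup driver the kubelet will use later. A minimal config.toml fragment that yields that option on containerd 1.7; a sketch, not the host's actual file:

    # /etc/containerd/config.toml (fragment)
    cat <<'EOF' >> /etc/containerd/config.toml
    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
    EOF
    systemctl restart containerd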
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:14:57.596866 containerd[1481]: time="2026-01-17T00:14:57.594944965Z" level=info msg="Start subscribing containerd event" Jan 17 00:14:57.596866 containerd[1481]: time="2026-01-17T00:14:57.595158945Z" level=info msg="Start recovering state" Jan 17 00:14:57.596866 containerd[1481]: time="2026-01-17T00:14:57.595556787Z" level=info msg="Start event monitor" Jan 17 00:14:57.596866 containerd[1481]: time="2026-01-17T00:14:57.595649571Z" level=info msg="Start snapshots syncer" Jan 17 00:14:57.596866 containerd[1481]: time="2026-01-17T00:14:57.595664218Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:14:57.596866 containerd[1481]: time="2026-01-17T00:14:57.595674828Z" level=info msg="Start streaming server" Jan 17 00:14:57.598092 containerd[1481]: time="2026-01-17T00:14:57.598036450Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:14:57.598189 containerd[1481]: time="2026-01-17T00:14:57.598146676Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:14:57.598461 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:14:57.605667 containerd[1481]: time="2026-01-17T00:14:57.605597011Z" level=info msg="containerd successfully booted in 0.532234s" Jan 17 00:14:57.653313 tar[1479]: linux-amd64/README.md Jan 17 00:14:57.687445 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:15:01.239038 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:15:01.280520 systemd[1]: Started sshd@0-10.0.0.16:22-10.0.0.1:48170.service - OpenSSH per-connection server daemon (10.0.0.1:48170). Jan 17 00:15:01.572606 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 48170 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:15:01.582261 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:01.610516 systemd-logind[1466]: New session 1 of user core. Jan 17 00:15:01.614040 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:15:01.877950 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:15:01.979121 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:15:02.166445 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:15:02.427729 (systemd)[1559]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:15:02.522675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:02.524104 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:15:02.563873 (kubelet)[1566]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:15:02.801816 systemd[1559]: Queued start job for default target default.target. Jan 17 00:15:02.815071 systemd[1559]: Created slice app.slice - User Application Slice. Jan 17 00:15:02.815104 systemd[1559]: Reached target paths.target - Paths. Jan 17 00:15:02.815122 systemd[1559]: Reached target timers.target - Timers. Jan 17 00:15:02.930702 systemd[1559]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:15:03.206642 systemd[1559]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Jan 17 00:15:03.212706 systemd[1559]: Reached target sockets.target - Sockets. Jan 17 00:15:03.212840 systemd[1559]: Reached target basic.target - Basic System. Jan 17 00:15:03.212932 systemd[1559]: Reached target default.target - Main User Target. Jan 17 00:15:03.213082 systemd[1559]: Startup finished in 741ms. Jan 17 00:15:03.213413 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:15:03.225477 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:15:03.227455 systemd[1]: Startup finished in 4.980s (kernel) + 17.544s (initrd) + 21.850s (userspace) = 44.376s. Jan 17 00:15:03.533323 systemd[1]: Started sshd@1-10.0.0.16:22-10.0.0.1:34116.service - OpenSSH per-connection server daemon (10.0.0.1:34116). Jan 17 00:15:03.953677 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 34116 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:15:03.981106 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:04.174453 systemd-logind[1466]: New session 2 of user core. Jan 17 00:15:04.200342 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:15:04.298616 sshd[1581]: pam_unix(sshd:session): session closed for user core Jan 17 00:15:04.575871 systemd[1]: sshd@1-10.0.0.16:22-10.0.0.1:34116.service: Deactivated successfully. Jan 17 00:15:04.589544 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:15:04.613573 systemd-logind[1466]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:15:04.689662 systemd[1]: Started sshd@2-10.0.0.16:22-10.0.0.1:34122.service - OpenSSH per-connection server daemon (10.0.0.1:34122). Jan 17 00:15:04.697351 systemd-logind[1466]: Removed session 2. Jan 17 00:15:05.252209 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 34122 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:15:05.274329 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:05.298518 systemd-logind[1466]: New session 3 of user core. Jan 17 00:15:05.317462 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:15:05.424282 sshd[1588]: pam_unix(sshd:session): session closed for user core Jan 17 00:15:05.447629 systemd[1]: sshd@2-10.0.0.16:22-10.0.0.1:34122.service: Deactivated successfully. Jan 17 00:15:05.464094 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:15:05.467906 systemd-logind[1466]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:15:05.500197 systemd[1]: Started sshd@3-10.0.0.16:22-10.0.0.1:34136.service - OpenSSH per-connection server daemon (10.0.0.1:34136). Jan 17 00:15:05.517396 systemd-logind[1466]: Removed session 3. Jan 17 00:15:05.739867 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 34136 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:15:05.757369 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:05.779914 systemd-logind[1466]: New session 4 of user core. Jan 17 00:15:05.789560 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:15:06.153694 sshd[1595]: pam_unix(sshd:session): session closed for user core Jan 17 00:15:06.241292 systemd[1]: sshd@3-10.0.0.16:22-10.0.0.1:34136.service: Deactivated successfully. Jan 17 00:15:06.394045 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:15:06.469708 systemd-logind[1466]: Session 4 logged out. Waiting for processes to exit. 
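The "Startup finished in 4.980s (kernel) + 17.544s (initrd) + 21.850s (userspace)" line is the same summary systemd-analyze prints, so the boot can be profiled after the fact; a sketch using standard systemd tooling:

    # Re-print the kernel/initrd/userspace breakdown logged above.
    systemd-analyze

    # Rank units by activation time to see where the ~22s of userspace went.
    systemd-analyze blame | head
    systemd-analyze critical-chain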
Jan 17 00:15:06.493172 systemd[1]: Started sshd@4-10.0.0.16:22-10.0.0.1:34148.service - OpenSSH per-connection server daemon (10.0.0.1:34148). Jan 17 00:15:06.498950 systemd-logind[1466]: Removed session 4. Jan 17 00:15:06.727471 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 34148 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:15:06.741440 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:06.902376 systemd-logind[1466]: New session 5 of user core. Jan 17 00:15:06.916876 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:15:07.116403 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:15:07.117718 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:15:09.478055 kubelet[1566]: E0117 00:15:09.474136 1566 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:15:09.487310 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:15:09.487852 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:15:09.490399 systemd[1]: kubelet.service: Consumed 9.762s CPU time. Jan 17 00:15:14.866771 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:15:14.867478 (dockerd)[1626]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:15:19.678752 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:15:19.705551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:21.767678 dockerd[1626]: time="2026-01-17T00:15:21.767141606Z" level=info msg="Starting up" Jan 17 00:15:22.907349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:22.952530 (kubelet)[1656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:15:23.703731 dockerd[1626]: time="2026-01-17T00:15:23.697612019Z" level=info msg="Loading containers: start." Jan 17 00:15:23.993774 kubelet[1656]: E0117 00:15:23.993394 1656 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:15:24.006764 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:15:24.007136 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:15:24.008224 systemd[1]: kubelet.service: Consumed 2.763s CPU time. Jan 17 00:15:24.820611 kernel: Initializing XFRM netlink socket Jan 17 00:15:25.212441 systemd-networkd[1401]: docker0: Link UP Jan 17 00:15:25.266431 dockerd[1626]: time="2026-01-17T00:15:25.266261538Z" level=info msg="Loading containers: done." 
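The kubelet failure above is the canonical pre-bootstrap state: kubelet.service is enabled (its drop-in references KUBELET_KUBEADM_ARGS), but /var/lib/kubelet/config.yaml only appears once kubeadm init or kubeadm join writes it, so the unit crash-loops until then. For reference, a minimal hand-written KubeletConfiguration has this shape (illustrative values, not what kubeadm would generate):

    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF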
Jan 17 00:15:25.423911 dockerd[1626]: time="2026-01-17T00:15:25.423509680Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:15:25.425488 dockerd[1626]: time="2026-01-17T00:15:25.425186260Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:15:25.425646 dockerd[1626]: time="2026-01-17T00:15:25.425550843Z" level=info msg="Daemon has completed initialization" Jan 17 00:15:25.692021 dockerd[1626]: time="2026-01-17T00:15:25.688867746Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:15:25.692191 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:15:30.128738 containerd[1481]: time="2026-01-17T00:15:30.128047628Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 17 00:15:31.777363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount652849905.mount: Deactivated successfully. Jan 17 00:15:34.125841 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:15:34.194888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:35.100024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:35.145120 (kubelet)[1851]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:15:35.791456 kubelet[1851]: E0117 00:15:35.789323 1851 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:15:35.799507 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:15:35.801041 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:15:35.802266 systemd[1]: kubelet.service: Consumed 1.296s CPU time. Jan 17 00:15:38.866857 update_engine[1471]: I20260117 00:15:38.852743 1471 update_attempter.cc:509] Updating boot flags... 
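With "API listen on /run/docker.sock", dockerd is serving the Engine API over its Unix socket (storage driver overlay2, version 26.1.0 per the log). Two quick probes; a sketch assuming the stock docker CLI and curl:

    # Hit the Engine API directly over the Unix socket.
    curl --unix-socket /run/docker.sock http://localhost/version

    # Same information through the CLI.
    docker info --format '{{.ServerVersion}} {{.Driver}}'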
Jan 17 00:15:39.384152 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1876) Jan 17 00:15:39.655342 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1875) Jan 17 00:15:40.160935 containerd[1481]: time="2026-01-17T00:15:40.160299699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:40.171054 containerd[1481]: time="2026-01-17T00:15:40.170859457Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Jan 17 00:15:40.179692 containerd[1481]: time="2026-01-17T00:15:40.178716099Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:40.197015 containerd[1481]: time="2026-01-17T00:15:40.196841535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:40.202680 containerd[1481]: time="2026-01-17T00:15:40.201939536Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 10.073705843s" Jan 17 00:15:40.202680 containerd[1481]: time="2026-01-17T00:15:40.202264400Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 17 00:15:40.218563 containerd[1481]: time="2026-01-17T00:15:40.218471391Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 17 00:15:44.914212 containerd[1481]: time="2026-01-17T00:15:44.913518941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:44.916229 containerd[1481]: time="2026-01-17T00:15:44.915799050Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Jan 17 00:15:44.917404 containerd[1481]: time="2026-01-17T00:15:44.917241322Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:44.923072 containerd[1481]: time="2026-01-17T00:15:44.923004278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:44.923876 containerd[1481]: time="2026-01-17T00:15:44.923804323Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 4.705282668s" Jan 17 00:15:44.926298 
containerd[1481]: time="2026-01-17T00:15:44.923908276Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 17 00:15:44.962910 containerd[1481]: time="2026-01-17T00:15:44.957547687Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 17 00:15:45.913902 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 00:15:46.190449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:47.858668 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:47.872484 (kubelet)[1895]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:15:48.144463 kubelet[1895]: E0117 00:15:48.144394 1895 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:15:48.162378 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:15:48.162920 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:15:48.164101 systemd[1]: kubelet.service: Consumed 1.257s CPU time. Jan 17 00:15:52.214144 containerd[1481]: time="2026-01-17T00:15:52.213066585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:52.224322 containerd[1481]: time="2026-01-17T00:15:52.223296142Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102" Jan 17 00:15:52.226176 containerd[1481]: time="2026-01-17T00:15:52.226095570Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:52.234624 containerd[1481]: time="2026-01-17T00:15:52.234407001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:52.236725 containerd[1481]: time="2026-01-17T00:15:52.236411507Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 7.278647557s" Jan 17 00:15:52.240368 containerd[1481]: time="2026-01-17T00:15:52.237870878Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 17 00:15:52.251904 containerd[1481]: time="2026-01-17T00:15:52.251275560Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 17 00:15:58.163783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount953307306.mount: Deactivated successfully. Jan 17 00:15:58.381254 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Jan 17 00:15:58.403292 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:58.973306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:59.085777 (kubelet)[1924]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:15:59.428903 kubelet[1924]: E0117 00:15:59.428288 1924 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:15:59.436124 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:15:59.436715 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:16:00.664077 containerd[1481]: time="2026-01-17T00:16:00.663368245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:00.666278 containerd[1481]: time="2026-01-17T00:16:00.665740991Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Jan 17 00:16:00.669920 containerd[1481]: time="2026-01-17T00:16:00.669845680Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:00.673129 containerd[1481]: time="2026-01-17T00:16:00.672930975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:00.675045 containerd[1481]: time="2026-01-17T00:16:00.674923169Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 8.423599618s" Jan 17 00:16:00.675196 containerd[1481]: time="2026-01-17T00:16:00.675131588Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 17 00:16:00.678284 containerd[1481]: time="2026-01-17T00:16:00.678221704Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 17 00:16:01.601746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount645363873.mount: Deactivated successfully. 
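The image pulls in this stretch run through containerd's CRI plugin, and the var-lib-containerd-tmpmounts-*.mount units that keep deactivating are the scratch mounts containerd creates while unpacking layers. The same pulls can be reproduced by hand; a sketch using the standard CRI/containerd clients:

    # Pull via the CRI endpoint the kubelet will use.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-proxy:v1.33.7

    # Or talk to containerd directly, in the Kubernetes namespace.
    ctr -n k8s.io images pull registry.k8s.io/pause:3.10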
Jan 17 00:16:07.162304 containerd[1481]: time="2026-01-17T00:16:07.157452873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:07.162304 containerd[1481]: time="2026-01-17T00:16:07.158747121Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jan 17 00:16:07.204618 containerd[1481]: time="2026-01-17T00:16:07.203643299Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:07.263311 containerd[1481]: time="2026-01-17T00:16:07.262516209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:07.273266 containerd[1481]: time="2026-01-17T00:16:07.273131656Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 6.594825966s" Jan 17 00:16:07.273266 containerd[1481]: time="2026-01-17T00:16:07.273246720Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 17 00:16:07.297828 containerd[1481]: time="2026-01-17T00:16:07.293482903Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 00:16:08.875204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount594649434.mount: Deactivated successfully. 
Jan 17 00:16:08.887677 containerd[1481]: time="2026-01-17T00:16:08.887496893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:08.891882 containerd[1481]: time="2026-01-17T00:16:08.891664209Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 17 00:16:08.895737 containerd[1481]: time="2026-01-17T00:16:08.895477515Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:08.903431 containerd[1481]: time="2026-01-17T00:16:08.903359748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:08.907122 containerd[1481]: time="2026-01-17T00:16:08.905740249Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.609999656s" Jan 17 00:16:08.907122 containerd[1481]: time="2026-01-17T00:16:08.905880822Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 17 00:16:08.910313 containerd[1481]: time="2026-01-17T00:16:08.910187379Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 17 00:16:09.689459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 17 00:16:09.718467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:16:10.512319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2065180063.mount: Deactivated successfully. Jan 17 00:16:10.760723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:16:10.780746 (kubelet)[2000]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:16:11.005866 kubelet[2000]: E0117 00:16:11.005794 2000 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:16:11.016111 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:16:11.016652 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:16:21.129236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 17 00:16:21.147485 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:16:22.315587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:16:22.323401 (kubelet)[2065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:16:22.783835 kubelet[2065]: E0117 00:16:22.783445 2065 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:16:22.793802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:16:22.794339 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:16:22.795411 systemd[1]: kubelet.service: Consumed 1.088s CPU time. Jan 17 00:16:26.375286 containerd[1481]: time="2026-01-17T00:16:26.373285294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:26.398680 containerd[1481]: time="2026-01-17T00:16:26.395084257Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Jan 17 00:16:26.420728 containerd[1481]: time="2026-01-17T00:16:26.409542125Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:26.497208 containerd[1481]: time="2026-01-17T00:16:26.486187394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:26.565543 containerd[1481]: time="2026-01-17T00:16:26.562866650Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 17.652329118s" Jan 17 00:16:26.565543 containerd[1481]: time="2026-01-17T00:16:26.564162699Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 17 00:16:32.886389 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 17 00:16:32.915796 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:16:33.670163 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:16:33.715789 (kubelet)[2109]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:16:34.182551 kubelet[2109]: E0117 00:16:34.182414 2109 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:16:34.193496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:16:34.193896 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:16:38.389112 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
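Each "Scheduled restart job, restart counter is at N" line is systemd re-running the crash-looping kubelet under its Restart= policy; the kubeadm-style unit ships Restart=always with RestartSec=10, which matches the roughly 10-second cadence in this log. A drop-in overriding that policy would look like this (sketch; it only slows the loop, it does not fix the missing config):

    mkdir -p /etc/systemd/system/kubelet.service.d
    cat <<'EOF' > /etc/systemd/system/kubelet.service.d/99-restart.conf
    [Service]
    Restart=always
    RestartSec=30
    EOF
    systemctl daemon-reload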
Jan 17 00:16:38.412479 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:16:38.513150 systemd[1]: Reloading requested from client PID 2125 ('systemctl') (unit session-5.scope)... Jan 17 00:16:38.513169 systemd[1]: Reloading... Jan 17 00:16:38.903759 zram_generator::config[2167]: No configuration found. Jan 17 00:16:39.265281 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:16:39.482547 systemd[1]: Reloading finished in 967 ms. Jan 17 00:16:39.657410 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 00:16:39.659531 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 00:16:39.663537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:16:39.700817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:16:40.163231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:16:40.206495 (kubelet)[2213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:16:40.757152 kubelet[2213]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:16:40.757152 kubelet[2213]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:16:40.757152 kubelet[2213]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
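The daemon-reload above ("Reloading finished in 967 ms") also surfaced a unit-hygiene warning: docker.socket still points at the legacy /var/run/docker.sock path, which systemd transparently rewrites to /run/docker.sock. A drop-in that silences the warning by resetting the listener; a sketch, hedged in that patching the shipped unit upstream would be the cleaner fix:

    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' > /etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload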
Jan 17 00:16:40.757152 kubelet[2213]: I0117 00:16:40.755523 2213 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:16:41.986845 kubelet[2213]: I0117 00:16:41.982785 2213 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 17 00:16:41.986845 kubelet[2213]: I0117 00:16:41.982862 2213 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:16:41.986845 kubelet[2213]: I0117 00:16:41.983771 2213 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:16:42.257806 kubelet[2213]: E0117 00:16:42.256581 2213 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:16:42.265892 kubelet[2213]: I0117 00:16:42.260492 2213 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:16:42.324497 kubelet[2213]: E0117 00:16:42.324084 2213 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:16:42.324497 kubelet[2213]: I0117 00:16:42.324181 2213 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:16:42.379775 kubelet[2213]: I0117 00:16:42.372423 2213 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:16:42.379775 kubelet[2213]: I0117 00:16:42.373420 2213 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:16:42.379775 kubelet[2213]: I0117 00:16:42.373463 2213 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:16:42.379775 kubelet[2213]: I0117 00:16:42.374305 2213 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:16:42.380306 kubelet[2213]: I0117 00:16:42.374343 2213 container_manager_linux.go:303] "Creating device plugin manager" Jan 17 00:16:42.392813 kubelet[2213]: I0117 00:16:42.389373 2213 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:16:42.410410 kubelet[2213]: I0117 00:16:42.407325 2213 kubelet.go:480] "Attempting to sync node with API server" Jan 17 00:16:42.410410 kubelet[2213]: I0117 00:16:42.409366 2213 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:16:42.410410 kubelet[2213]: I0117 00:16:42.409519 2213 kubelet.go:386] "Adding apiserver pod source" Jan 17 00:16:42.410410 kubelet[2213]: I0117 00:16:42.409591 2213 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:16:42.463406 kubelet[2213]: E0117 00:16:42.463153 2213 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:16:42.467572 kubelet[2213]: E0117 00:16:42.466807 2213 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:16:42.472594 
kubelet[2213]: I0117 00:16:42.471452 2213 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:16:42.472594 kubelet[2213]: I0117 00:16:42.472500 2213 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:16:42.492803 kubelet[2213]: W0117 00:16:42.483945 2213 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:16:42.508292 kubelet[2213]: I0117 00:16:42.505477 2213 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:16:42.521436 kubelet[2213]: I0117 00:16:42.517877 2213 server.go:1289] "Started kubelet" Jan 17 00:16:42.539380 kubelet[2213]: I0117 00:16:42.537067 2213 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:16:42.544018 kubelet[2213]: I0117 00:16:42.543018 2213 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:16:42.544234 kubelet[2213]: I0117 00:16:42.544198 2213 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:16:42.550629 kubelet[2213]: I0117 00:16:42.549824 2213 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:16:42.553434 kubelet[2213]: I0117 00:16:42.552893 2213 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:16:42.559084 kubelet[2213]: I0117 00:16:42.558574 2213 server.go:317] "Adding debug handlers to kubelet server" Jan 17 00:16:42.564878 kubelet[2213]: E0117 00:16:42.557590 2213 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188b5c846ce627a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:16:42.505512873 +0000 UTC m=+2.259585047,LastTimestamp:2026-01-17 00:16:42.505512873 +0000 UTC m=+2.259585047,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:16:42.564878 kubelet[2213]: E0117 00:16:42.562644 2213 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:16:42.564878 kubelet[2213]: I0117 00:16:42.562855 2213 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:16:42.564878 kubelet[2213]: I0117 00:16:42.563256 2213 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:16:42.564878 kubelet[2213]: I0117 00:16:42.563507 2213 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:16:42.564878 kubelet[2213]: E0117 00:16:42.564240 2213 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.CSIDriver" Jan 17 00:16:42.590811 kubelet[2213]: E0117 00:16:42.588810 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="200ms" Jan 17 00:16:42.590811 kubelet[2213]: I0117 00:16:42.589477 2213 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:16:42.590811 kubelet[2213]: I0117 00:16:42.589590 2213 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:16:42.623915 kubelet[2213]: E0117 00:16:42.617713 2213 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:16:42.684492 kubelet[2213]: E0117 00:16:42.683081 2213 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:16:42.735456 kubelet[2213]: W0117 00:16:42.735356 2213 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "/run/containerd/containerd.sock", ServerName: "localhost", Attributes: {"<%!p(networktype.keyType=grpc.internal.transport.networktype)>": "unix" }, }. Err: connection error: desc = "transport: failed to write client preface: write unix @->/run/containerd/containerd.sock: use of closed network connection" Jan 17 00:16:42.799359 kubelet[2213]: E0117 00:16:42.798091 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="400ms" Jan 17 00:16:42.815580 kubelet[2213]: E0117 00:16:42.813327 2213 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:16:42.893495 kubelet[2213]: I0117 00:16:42.888590 2213 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:16:42.941517 kubelet[2213]: E0117 00:16:42.938804 2213 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:16:42.968863 kubelet[2213]: I0117 00:16:42.968551 2213 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 17 00:16:42.978198 kubelet[2213]: I0117 00:16:42.977227 2213 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 17 00:16:42.978198 kubelet[2213]: I0117 00:16:42.977401 2213 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 17 00:16:42.978198 kubelet[2213]: I0117 00:16:42.977513 2213 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 17 00:16:42.978198 kubelet[2213]: I0117 00:16:42.977550 2213 kubelet.go:2436] "Starting kubelet main sync loop" Jan 17 00:16:42.978198 kubelet[2213]: E0117 00:16:42.977609 2213 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:16:42.989420 kubelet[2213]: I0117 00:16:42.989372 2213 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:16:42.989420 kubelet[2213]: I0117 00:16:42.989399 2213 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:16:42.989420 kubelet[2213]: I0117 00:16:42.989423 2213 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:16:42.999497 kubelet[2213]: E0117 00:16:42.999381 2213 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:16:43.007942 kubelet[2213]: I0117 00:16:43.006424 2213 policy_none.go:49] "None policy: Start" Jan 17 00:16:43.007942 kubelet[2213]: I0117 00:16:43.006633 2213 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:16:43.007942 kubelet[2213]: I0117 00:16:43.006872 2213 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:16:43.086329 kubelet[2213]: E0117 00:16:43.066753 2213 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:16:43.086329 kubelet[2213]: E0117 00:16:43.080542 2213 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 00:16:43.129549 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:16:43.167915 kubelet[2213]: E0117 00:16:43.167519 2213 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:16:43.184060 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:16:43.203880 kubelet[2213]: E0117 00:16:43.202303 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="800ms" Jan 17 00:16:43.206465 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:16:43.239152 kubelet[2213]: E0117 00:16:43.237771 2213 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:16:43.251610 kubelet[2213]: I0117 00:16:43.251423 2213 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:16:43.251800 kubelet[2213]: I0117 00:16:43.251572 2213 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:16:43.255631 kubelet[2213]: I0117 00:16:43.255388 2213 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:16:43.268809 kubelet[2213]: E0117 00:16:43.265870 2213 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:16:43.268809 kubelet[2213]: E0117 00:16:43.266213 2213 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 00:16:43.368934 kubelet[2213]: I0117 00:16:43.368123 2213 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:16:43.462206 kubelet[2213]: E0117 00:16:43.383141 2213 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Jan 17 00:16:43.410029 systemd[1]: Created slice kubepods-burstable-pod29988a9444d40c251f2061369746f5ec.slice - libcontainer container kubepods-burstable-pod29988a9444d40c251f2061369746f5ec.slice. Jan 17 00:16:43.485884 kubelet[2213]: I0117 00:16:43.480738 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:16:43.485884 kubelet[2213]: I0117 00:16:43.480868 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:16:43.492407 kubelet[2213]: I0117 00:16:43.489487 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:16:43.492407 kubelet[2213]: I0117 00:16:43.490330 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 17 00:16:43.492407 kubelet[2213]: I0117 00:16:43.490486 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29988a9444d40c251f2061369746f5ec-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"29988a9444d40c251f2061369746f5ec\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:16:43.492407 kubelet[2213]: I0117 00:16:43.490638 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29988a9444d40c251f2061369746f5ec-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"29988a9444d40c251f2061369746f5ec\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:16:43.495223 kubelet[2213]: I0117 00:16:43.494228 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29988a9444d40c251f2061369746f5ec-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"29988a9444d40c251f2061369746f5ec\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:16:43.495223 kubelet[2213]: I0117 00:16:43.495030 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:16:43.497296 kubelet[2213]: I0117 00:16:43.496436 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:16:43.510734 kubelet[2213]: E0117 00:16:43.510133 2213 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:16:43.531585 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 17 00:16:43.565522 kubelet[2213]: E0117 00:16:43.559809 2213 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:16:43.590129 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Jan 17 00:16:43.602848 kubelet[2213]: I0117 00:16:43.601406 2213 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:16:43.602848 kubelet[2213]: E0117 00:16:43.602155 2213 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Jan 17 00:16:43.624151 kubelet[2213]: E0117 00:16:43.621173 2213 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:16:43.634045 kubelet[2213]: E0117 00:16:43.627637 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:16:43.657868 containerd[1481]: time="2026-01-17T00:16:43.655776228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 17 00:16:43.789615 kubelet[2213]: E0117 00:16:43.789444 2213 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:16:43.819428 kubelet[2213]: E0117 00:16:43.816573 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:16:43.826662 containerd[1481]: time="2026-01-17T00:16:43.824620143Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:29988a9444d40c251f2061369746f5ec,Namespace:kube-system,Attempt:0,}" Jan 17 00:16:43.870868 kubelet[2213]: E0117 00:16:43.870623 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:16:43.872642 containerd[1481]: time="2026-01-17T00:16:43.872386794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 17 00:16:43.980219 kubelet[2213]: E0117 00:16:43.979188 2213 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:16:44.019109 kubelet[2213]: I0117 00:16:44.017784 2213 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:16:44.019109 kubelet[2213]: E0117 00:16:44.017797 2213 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:16:44.020237 kubelet[2213]: E0117 00:16:44.019927 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="1.6s" Jan 17 00:16:44.021254 kubelet[2213]: E0117 00:16:44.020929 2213 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Jan 17 00:16:44.064896 kubelet[2213]: E0117 00:16:44.062360 2213 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:16:44.414434 kubelet[2213]: E0117 00:16:44.402718 2213 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:16:44.727760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1231663932.mount: Deactivated successfully. 
Jan 17 00:16:44.789237 containerd[1481]: time="2026-01-17T00:16:44.787252854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:16:44.800219 containerd[1481]: time="2026-01-17T00:16:44.800154596Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 00:16:44.809814 containerd[1481]: time="2026-01-17T00:16:44.806180239Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:16:44.822063 containerd[1481]: time="2026-01-17T00:16:44.815379216Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:16:44.824840 kubelet[2213]: I0117 00:16:44.824509 2213 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:16:44.830122 containerd[1481]: time="2026-01-17T00:16:44.829441207Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:16:44.834190 kubelet[2213]: E0117 00:16:44.833367 2213 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Jan 17 00:16:44.840826 containerd[1481]: time="2026-01-17T00:16:44.839503689Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:16:44.857369 containerd[1481]: time="2026-01-17T00:16:44.857258805Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:16:44.861055 containerd[1481]: time="2026-01-17T00:16:44.860908422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:16:44.867349 containerd[1481]: time="2026-01-17T00:16:44.861928120Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.20570651s" Jan 17 00:16:44.870038 containerd[1481]: time="2026-01-17T00:16:44.869595208Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 997.057571ms" Jan 17 00:16:44.896060 containerd[1481]: time="2026-01-17T00:16:44.895896871Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.068508077s" Jan 17 00:16:45.611716 containerd[1481]: time="2026-01-17T00:16:45.610766127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:45.611716 containerd[1481]: time="2026-01-17T00:16:45.611064486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:45.611716 containerd[1481]: time="2026-01-17T00:16:45.611080505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:45.611716 containerd[1481]: time="2026-01-17T00:16:45.611174461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:45.631067 kubelet[2213]: E0117 00:16:45.627822 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="3.2s" Jan 17 00:16:45.639838 containerd[1481]: time="2026-01-17T00:16:45.638453089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:45.639838 containerd[1481]: time="2026-01-17T00:16:45.638529250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:45.639838 containerd[1481]: time="2026-01-17T00:16:45.638548467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:45.639838 containerd[1481]: time="2026-01-17T00:16:45.638728432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:45.654563 containerd[1481]: time="2026-01-17T00:16:45.651387114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:45.654563 containerd[1481]: time="2026-01-17T00:16:45.651470700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:45.654563 containerd[1481]: time="2026-01-17T00:16:45.651495176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:45.654563 containerd[1481]: time="2026-01-17T00:16:45.651619247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:45.892353 systemd[1]: Started cri-containerd-b6c2706e2eb1cc5f9cec3a6864cdf5ff5f5a44586e1ab67fa45aede9fe120742.scope - libcontainer container b6c2706e2eb1cc5f9cec3a6864cdf5ff5f5a44586e1ab67fa45aede9fe120742. Jan 17 00:16:45.896845 systemd[1]: Started cri-containerd-f462439fb73b2b0715538e3f0885f8e590d7c4930dab8c960e234e52c6867be7.scope - libcontainer container f462439fb73b2b0715538e3f0885f8e590d7c4930dab8c960e234e52c6867be7. 
Jan 17 00:16:45.992790 systemd[1]: Started cri-containerd-b24c47f8ae62a04c7a190eac290dac61c3ba9893f0efcc130aa024ac7bc8c4bb.scope - libcontainer container b24c47f8ae62a04c7a190eac290dac61c3ba9893f0efcc130aa024ac7bc8c4bb. Jan 17 00:16:46.133374 containerd[1481]: time="2026-01-17T00:16:46.128500197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:29988a9444d40c251f2061369746f5ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6c2706e2eb1cc5f9cec3a6864cdf5ff5f5a44586e1ab67fa45aede9fe120742\"" Jan 17 00:16:46.138603 kubelet[2213]: E0117 00:16:46.137290 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:16:46.161361 containerd[1481]: time="2026-01-17T00:16:46.161195552Z" level=info msg="CreateContainer within sandbox \"b6c2706e2eb1cc5f9cec3a6864cdf5ff5f5a44586e1ab67fa45aede9fe120742\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:16:46.202604 containerd[1481]: time="2026-01-17T00:16:46.202453105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f462439fb73b2b0715538e3f0885f8e590d7c4930dab8c960e234e52c6867be7\"" Jan 17 00:16:46.208466 kubelet[2213]: E0117 00:16:46.208436 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:16:46.221716 containerd[1481]: time="2026-01-17T00:16:46.220940249Z" level=info msg="CreateContainer within sandbox \"b6c2706e2eb1cc5f9cec3a6864cdf5ff5f5a44586e1ab67fa45aede9fe120742\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cf0ffa549d38e0bb6cf885b3eb7feafc04577ea4ed73eb7d73d8032ceec76ac3\"" Jan 17 00:16:46.225512 containerd[1481]: time="2026-01-17T00:16:46.225468641Z" level=info msg="StartContainer for \"cf0ffa549d38e0bb6cf885b3eb7feafc04577ea4ed73eb7d73d8032ceec76ac3\"" Jan 17 00:16:46.233153 containerd[1481]: time="2026-01-17T00:16:46.231492685Z" level=info msg="CreateContainer within sandbox \"f462439fb73b2b0715538e3f0885f8e590d7c4930dab8c960e234e52c6867be7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:16:46.239424 containerd[1481]: time="2026-01-17T00:16:46.239110826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b24c47f8ae62a04c7a190eac290dac61c3ba9893f0efcc130aa024ac7bc8c4bb\"" Jan 17 00:16:46.246524 kubelet[2213]: E0117 00:16:46.245535 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:16:46.287188 containerd[1481]: time="2026-01-17T00:16:46.286272577Z" level=info msg="CreateContainer within sandbox \"b24c47f8ae62a04c7a190eac290dac61c3ba9893f0efcc130aa024ac7bc8c4bb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:16:46.460637 kubelet[2213]: I0117 00:16:46.459363 2213 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:16:46.460637 kubelet[2213]: E0117 00:16:46.460448 2213 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": 
dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Jan 17 00:16:46.527363 systemd[1]: Started cri-containerd-cf0ffa549d38e0bb6cf885b3eb7feafc04577ea4ed73eb7d73d8032ceec76ac3.scope - libcontainer container cf0ffa549d38e0bb6cf885b3eb7feafc04577ea4ed73eb7d73d8032ceec76ac3. Jan 17 00:16:46.544853 containerd[1481]: time="2026-01-17T00:16:46.540294615Z" level=info msg="CreateContainer within sandbox \"f462439fb73b2b0715538e3f0885f8e590d7c4930dab8c960e234e52c6867be7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8471c87b5dbe0bd096ab351bee320351687129da3cb163d5d531139753733af1\"" Jan 17 00:16:46.545770 containerd[1481]: time="2026-01-17T00:16:46.545732913Z" level=info msg="StartContainer for \"8471c87b5dbe0bd096ab351bee320351687129da3cb163d5d531139753733af1\"" Jan 17 00:16:46.548888 containerd[1481]: time="2026-01-17T00:16:46.547491084Z" level=info msg="CreateContainer within sandbox \"b24c47f8ae62a04c7a190eac290dac61c3ba9893f0efcc130aa024ac7bc8c4bb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"26e6f070ed67a0a95dd62ef9f378877a8efc4b70fd98a07aa373546151e3dca4\"" Jan 17 00:16:46.556404 containerd[1481]: time="2026-01-17T00:16:46.556369233Z" level=info msg="StartContainer for \"26e6f070ed67a0a95dd62ef9f378877a8efc4b70fd98a07aa373546151e3dca4\"" Jan 17 00:16:46.731282 systemd[1]: run-containerd-runc-k8s.io-26e6f070ed67a0a95dd62ef9f378877a8efc4b70fd98a07aa373546151e3dca4-runc.SgIFgc.mount: Deactivated successfully. Jan 17 00:16:46.753326 containerd[1481]: time="2026-01-17T00:16:46.753098184Z" level=info msg="StartContainer for \"cf0ffa549d38e0bb6cf885b3eb7feafc04577ea4ed73eb7d73d8032ceec76ac3\" returns successfully" Jan 17 00:16:46.757556 systemd[1]: Started cri-containerd-8471c87b5dbe0bd096ab351bee320351687129da3cb163d5d531139753733af1.scope - libcontainer container 8471c87b5dbe0bd096ab351bee320351687129da3cb163d5d531139753733af1. Jan 17 00:16:46.772248 systemd[1]: Started cri-containerd-26e6f070ed67a0a95dd62ef9f378877a8efc4b70fd98a07aa373546151e3dca4.scope - libcontainer container 26e6f070ed67a0a95dd62ef9f378877a8efc4b70fd98a07aa373546151e3dca4. 
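The "Attempting to register node" / "Unable to register node" pairs repeating through this stretch are the kubelet creating its Node object via POST https://10.0.0.16:6443/api/v1/nodes and hitting the same refused connection. A bare client-go sketch of that call; the kubeconfig path is an assumption, and a real kubelet attaches labels, addresses, and capacity rather than an empty Node:

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// The POST to /api/v1/nodes that is being refused in the entries above.
	node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "localhost"}}
	if _, err := cs.CoreV1().Nodes().Create(context.TODO(), node, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("node registered")
}
```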
Jan 17 00:16:47.273096 kubelet[2213]: E0117 00:16:46.956615 2213 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:16:47.317216 kubelet[2213]: E0117 00:16:47.311828 2213 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:16:47.327574 kubelet[2213]: E0117 00:16:47.319284 2213 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:16:47.386323 kubelet[2213]: E0117 00:16:47.383597 2213 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:16:47.706598 containerd[1481]: time="2026-01-17T00:16:47.705915612Z" level=info msg="StartContainer for \"26e6f070ed67a0a95dd62ef9f378877a8efc4b70fd98a07aa373546151e3dca4\" returns successfully" Jan 17 00:16:47.769837 containerd[1481]: time="2026-01-17T00:16:47.768790643Z" level=info msg="StartContainer for \"8471c87b5dbe0bd096ab351bee320351687129da3cb163d5d531139753733af1\" returns successfully" Jan 17 00:16:47.786224 kubelet[2213]: E0117 00:16:47.785322 2213 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:16:47.786224 kubelet[2213]: E0117 00:16:47.785793 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:16:47.812035 kubelet[2213]: E0117 00:16:47.811923 2213 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:16:47.813510 kubelet[2213]: E0117 00:16:47.813485 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:16:49.680035 kubelet[2213]: I0117 00:16:49.675622 2213 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:16:49.687060 kubelet[2213]: E0117 00:16:49.683592 2213 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:16:49.687060 kubelet[2213]: E0117 00:16:49.685277 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:16:49.687060 kubelet[2213]: E0117 00:16:49.685436 2213 kubelet.go:3305] 
"No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:16:49.687060 kubelet[2213]: E0117 00:16:49.685822 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:16:49.688086 kubelet[2213]: E0117 00:16:49.688055 2213 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:16:49.688408 kubelet[2213]: E0117 00:16:49.688383 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:16:50.680559 kubelet[2213]: E0117 00:16:50.680103 2213 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:16:50.692218 kubelet[2213]: E0117 00:16:50.689545 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:16:50.693827 kubelet[2213]: E0117 00:16:50.693284 2213 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:16:50.693827 kubelet[2213]: E0117 00:16:50.693630 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:16:52.401132 kubelet[2213]: E0117 00:16:52.397511 2213 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:16:52.405218 kubelet[2213]: E0117 00:16:52.404567 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:16:53.271121 kubelet[2213]: E0117 00:16:53.267896 2213 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 00:16:54.467917 kubelet[2213]: E0117 00:16:54.467298 2213 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:16:54.467917 kubelet[2213]: E0117 00:16:54.467686 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:16:55.944492 kubelet[2213]: E0117 00:16:55.933915 2213 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:16:55.944492 kubelet[2213]: E0117 00:16:55.935782 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:16:59.117487 kubelet[2213]: E0117 00:16:59.110478 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline 
exceeded" interval="6.4s" Jan 17 00:16:59.251465 kubelet[2213]: E0117 00:16:59.251292 2213 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:16:59.695098 kubelet[2213]: E0117 00:16:59.691169 2213 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 17 00:17:01.774860 kubelet[2213]: I0117 00:17:01.769406 2213 apiserver.go:52] "Watching apiserver" Jan 17 00:17:01.914721 kubelet[2213]: I0117 00:17:01.913244 2213 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:17:02.225487 kubelet[2213]: E0117 00:17:02.217947 2213 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188b5c846ce627a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:16:42.505512873 +0000 UTC m=+2.259585047,LastTimestamp:2026-01-17 00:16:42.505512873 +0000 UTC m=+2.259585047,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:17:02.586823 kubelet[2213]: E0117 00:17:02.536688 2213 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188b5c8473936282 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:16:42.617528962 +0000 UTC m=+2.371601095,LastTimestamp:2026-01-17 00:16:42.617528962 +0000 UTC m=+2.371601095,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:17:02.720562 kubelet[2213]: E0117 00:17:02.720378 2213 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:17:02.728821 kubelet[2213]: E0117 00:17:02.728784 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:02.815849 kubelet[2213]: E0117 00:17:02.810366 2213 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188b5c848915ce79 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 
00:16:42.978397817 +0000 UTC m=+2.732469951,LastTimestamp:2026-01-17 00:16:42.978397817 +0000 UTC m=+2.732469951,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:17:03.297017 kubelet[2213]: E0117 00:17:03.295666 2213 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 00:17:03.369661 kubelet[2213]: E0117 00:17:03.369580 2213 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 17 00:17:04.700257 kubelet[2213]: E0117 00:17:04.695405 2213 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 17 00:17:04.736021 kubelet[2213]: E0117 00:17:04.733191 2213 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:17:04.736021 kubelet[2213]: E0117 00:17:04.733546 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:05.428891 kubelet[2213]: E0117 00:17:05.428423 2213 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 17 00:17:05.734381 kubelet[2213]: E0117 00:17:05.733231 2213 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 17 00:17:06.119594 kubelet[2213]: I0117 00:17:06.117872 2213 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:17:06.173733 kubelet[2213]: I0117 00:17:06.172800 2213 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 17 00:17:06.173733 kubelet[2213]: E0117 00:17:06.172876 2213 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 17 00:17:06.273538 kubelet[2213]: I0117 00:17:06.272923 2213 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:17:06.354570 kubelet[2213]: I0117 00:17:06.354085 2213 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:17:06.371250 kubelet[2213]: E0117 00:17:06.371059 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:06.474069 kubelet[2213]: I0117 00:17:06.470853 2213 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:17:06.478054 kubelet[2213]: E0117 00:17:06.477937 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:06.520315 kubelet[2213]: E0117 00:17:06.517458 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:14.073633 kubelet[2213]: I0117 00:17:14.071584 2213 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=8.071520638 podStartE2EDuration="8.071520638s" podCreationTimestamp="2026-01-17 00:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:17:14.064806499 +0000 UTC m=+33.818878652" watchObservedRunningTime="2026-01-17 00:17:14.071520638 +0000 UTC m=+33.825592792" Jan 17 00:17:14.129161 kubelet[2213]: I0117 00:17:14.128692 2213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=8.128577443 podStartE2EDuration="8.128577443s" podCreationTimestamp="2026-01-17 00:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:17:14.127592895 +0000 UTC m=+33.881665058" watchObservedRunningTime="2026-01-17 00:17:14.128577443 +0000 UTC m=+33.882649576" Jan 17 00:17:14.380618 systemd[1]: Reloading requested from client PID 2507 ('systemctl') (unit session-5.scope)... Jan 17 00:17:14.380639 systemd[1]: Reloading... Jan 17 00:17:14.634534 kubelet[2213]: E0117 00:17:14.625502 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:14.725420 kubelet[2213]: I0117 00:17:14.725299 2213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=8.725276103 podStartE2EDuration="8.725276103s" podCreationTimestamp="2026-01-17 00:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:17:14.203218061 +0000 UTC m=+33.957290204" watchObservedRunningTime="2026-01-17 00:17:14.725276103 +0000 UTC m=+34.479348257" Jan 17 00:17:15.063540 zram_generator::config[2556]: No configuration found. Jan 17 00:17:15.923325 kubelet[2213]: E0117 00:17:15.922195 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:16.362327 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:17:16.682409 systemd[1]: Reloading finished in 2300 ms. Jan 17 00:17:16.867644 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:17:16.909332 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:17:16.909713 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:17:16.909830 systemd[1]: kubelet.service: Consumed 8.807s CPU time, 135.2M memory peak, 0B memory swap peak. Jan 17 00:17:16.950901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:17:18.539113 (kubelet)[2592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:17:18.541246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
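In the pod_startup_latency_tracker entries above, podStartSLOduration is the gap between podCreationTimestamp and the watch-observed running time; nothing is deducted for image pulls here, since firstStartedPulling and lastFinishedPulling are both the zero time. A quick Go check reproducing the 8.071520638s figure from the kube-apiserver-localhost entry, using the timestamps exactly as printed:

```go
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// Go's default time.Time print format, which these log fields use.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-17 00:17:06 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	running, err := time.Parse(layout, "2026-01-17 00:17:14.071520638 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(running.Sub(created)) // 8.071520638s, matching podStartSLOduration
}
```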
Jan 17 00:17:19.321080 kubelet[2592]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:17:19.321080 kubelet[2592]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:17:19.321080 kubelet[2592]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:17:19.321080 kubelet[2592]: I0117 00:17:19.316654 2592 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:17:19.380538 kubelet[2592]: I0117 00:17:19.378379 2592 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 17 00:17:19.380538 kubelet[2592]: I0117 00:17:19.378422 2592 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:17:19.386030 kubelet[2592]: I0117 00:17:19.384213 2592 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:17:19.388447 kubelet[2592]: I0117 00:17:19.388357 2592 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 17 00:17:19.413337 kubelet[2592]: I0117 00:17:19.413170 2592 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:17:19.445699 kubelet[2592]: E0117 00:17:19.444385 2592 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:17:19.445699 kubelet[2592]: I0117 00:17:19.444544 2592 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:17:19.535327 kubelet[2592]: I0117 00:17:19.530577 2592 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:17:19.540198 kubelet[2592]: I0117 00:17:19.538638 2592 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:17:19.540198 kubelet[2592]: I0117 00:17:19.538917 2592 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:17:19.540198 kubelet[2592]: I0117 00:17:19.539439 2592 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:17:19.540198 kubelet[2592]: I0117 00:17:19.539451 2592 container_manager_linux.go:303] "Creating device plugin manager" Jan 17 00:17:19.540198 kubelet[2592]: I0117 00:17:19.539687 2592 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:17:19.554419 kubelet[2592]: I0117 00:17:19.542044 2592 kubelet.go:480] "Attempting to sync node with API server" Jan 17 00:17:19.554419 kubelet[2592]: I0117 00:17:19.542088 2592 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:17:19.554419 kubelet[2592]: I0117 00:17:19.542173 2592 kubelet.go:386] "Adding apiserver pod source" Jan 17 00:17:19.554419 kubelet[2592]: I0117 00:17:19.542233 2592 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:17:19.603937 kubelet[2592]: I0117 00:17:19.601403 2592 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:17:19.615739 kubelet[2592]: I0117 00:17:19.606115 2592 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:17:19.723127 kubelet[2592]: I0117 00:17:19.723045 2592 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:17:19.723300 kubelet[2592]: I0117 00:17:19.723169 2592 server.go:1289] "Started kubelet" Jan 17 00:17:19.724160 kubelet[2592]: I0117 00:17:19.724071 2592 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:17:19.730209 kubelet[2592]: I0117 
00:17:19.730067 2592 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:17:19.730950 kubelet[2592]: I0117 00:17:19.730549 2592 server.go:317] "Adding debug handlers to kubelet server" Jan 17 00:17:19.732668 kubelet[2592]: I0117 00:17:19.730713 2592 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:17:19.761075 kubelet[2592]: E0117 00:17:19.760863 2592 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:17:19.767326 kubelet[2592]: I0117 00:17:19.767123 2592 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:17:19.792699 kubelet[2592]: I0117 00:17:19.765095 2592 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:17:19.817010 kubelet[2592]: I0117 00:17:19.793531 2592 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:17:19.817010 kubelet[2592]: I0117 00:17:19.793548 2592 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:17:19.817010 kubelet[2592]: I0117 00:17:19.813313 2592 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:17:19.825524 kubelet[2592]: I0117 00:17:19.825401 2592 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:17:19.830860 kubelet[2592]: I0117 00:17:19.830088 2592 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:17:19.830860 kubelet[2592]: I0117 00:17:19.830139 2592 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:17:20.222460 kubelet[2592]: I0117 00:17:20.220277 2592 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 17 00:17:20.226308 kubelet[2592]: I0117 00:17:20.226182 2592 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 17 00:17:20.226308 kubelet[2592]: I0117 00:17:20.226241 2592 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 17 00:17:20.226545 kubelet[2592]: I0117 00:17:20.226353 2592 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
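The ratelimit.go:55 entry above configures the podresources endpoint as a token bucket: qps=100 with burstTokens=10. A small sketch of that policy using golang.org/x/time/rate; whether the kubelet uses this exact package is not visible in the log, so read it as an illustration of the stated limits:

```go
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// qps=100 refill rate, burst of 10 tokens, as in the log entry above.
	limiter := rate.NewLimiter(rate.Limit(100), 10)

	allowed, throttled := 0, 0
	for i := 0; i < 50; i++ { // 50 back-to-back calls
		if limiter.Allow() {
			allowed++
		} else {
			throttled++
		}
	}
	// Roughly the first 10 pass immediately; the rest must wait for refill.
	fmt.Printf("allowed=%d throttled=%d\n", allowed, throttled)
}
```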
Jan 17 00:17:20.226545 kubelet[2592]: I0117 00:17:20.226364 2592 kubelet.go:2436] "Starting kubelet main sync loop" Jan 17 00:17:20.226545 kubelet[2592]: E0117 00:17:20.226417 2592 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:17:20.432914 kubelet[2592]: E0117 00:17:20.431508 2592 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 00:17:20.552084 kubelet[2592]: I0117 00:17:20.550883 2592 apiserver.go:52] "Watching apiserver" Jan 17 00:17:20.589034 kubelet[2592]: I0117 00:17:20.588082 2592 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:17:20.589034 kubelet[2592]: I0117 00:17:20.588127 2592 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:17:20.589034 kubelet[2592]: I0117 00:17:20.588154 2592 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:17:20.589034 kubelet[2592]: I0117 00:17:20.588531 2592 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:17:20.589034 kubelet[2592]: I0117 00:17:20.588572 2592 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:17:20.589034 kubelet[2592]: I0117 00:17:20.588599 2592 policy_none.go:49] "None policy: Start" Jan 17 00:17:20.589034 kubelet[2592]: I0117 00:17:20.588615 2592 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:17:20.589034 kubelet[2592]: I0117 00:17:20.588630 2592 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:17:20.589034 kubelet[2592]: I0117 00:17:20.588757 2592 state_mem.go:75] "Updated machine memory state" Jan 17 00:17:20.636057 kubelet[2592]: E0117 00:17:20.634297 2592 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 00:17:20.641416 kubelet[2592]: E0117 00:17:20.641318 2592 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:17:20.642229 kubelet[2592]: I0117 00:17:20.641608 2592 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:17:20.642229 kubelet[2592]: I0117 00:17:20.641626 2592 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:17:20.663512 kubelet[2592]: I0117 00:17:20.650433 2592 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:17:20.688298 kubelet[2592]: E0117 00:17:20.681167 2592 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:17:20.701022 kubelet[2592]: I0117 00:17:20.692736 2592 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:17:20.701022 kubelet[2592]: I0117 00:17:20.700271 2592 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:17:20.701202 containerd[1481]: time="2026-01-17T00:17:20.699731312Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
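A few lines up (00:17:19.730067), the podresources endpoint was given a token-bucket rate limit of qps=100 with burstTokens=10. The same policy can be expressed with golang.org/x/time/rate; this is an assumed stand-in for illustration, not the kubelet's own limiter code:

    package main

    import (
        "context"
        "fmt"
        "time"

        "golang.org/x/time/rate"
    )

    func main() {
        // qps=100, burstTokens=10, matching the values logged above.
        lim := rate.NewLimiter(rate.Limit(100), 10)

        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        served := 0
        for i := 0; i < 200; i++ {
            // Wait blocks until a token is available or the deadline passes.
            if err := lim.Wait(ctx); err != nil {
                break
            }
            served++
        }
        fmt.Println("requests admitted within 1s:", served) // roughly 10 burst + 100 refilled
    }

A full bucket admits 10 calls immediately; sustained load is then throttled to 100 per second.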
Jan 17 00:17:20.989045 kubelet[2592]: I0117 00:17:20.984209 2592 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:17:21.200937 kubelet[2592]: I0117 00:17:21.198234 2592 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 17 00:17:21.200937 kubelet[2592]: I0117 00:17:21.200061 2592 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 17 00:17:21.217409 kubelet[2592]: I0117 00:17:21.215488 2592 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:17:21.228074 kubelet[2592]: I0117 00:17:21.227653 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29988a9444d40c251f2061369746f5ec-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"29988a9444d40c251f2061369746f5ec\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:17:21.228074 kubelet[2592]: I0117 00:17:21.227704 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29988a9444d40c251f2061369746f5ec-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"29988a9444d40c251f2061369746f5ec\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:17:21.228074 kubelet[2592]: I0117 00:17:21.227735 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:17:21.228074 kubelet[2592]: I0117 00:17:21.227760 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:17:21.228138 systemd[1]: Created slice kubepods-besteffort-pod7ab3623e_1947_4cdd_849f_79ab19ea38d9.slice - libcontainer container kubepods-besteffort-pod7ab3623e_1947_4cdd_849f_79ab19ea38d9.slice. 
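The systemd unit name above is mechanical: running with CgroupDriver "systemd" (see the NodeConfig dump earlier), kubelet places each pod in a slice named for its QoS class and UID, escaping the UID's dashes to underscores because "-" is the hierarchy separator in slice names. A sketch of the mapping (hypothetical helper, same output as the log line):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName reconstructs the systemd slice kubelet creates for a pod
    // under the systemd cgroup driver: dashes in the UID become underscores
    // because "-" separates hierarchy levels in slice names.
    func sliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("besteffort", "7ab3623e-1947-4cdd-849f-79ab19ea38d9"))
        // kubepods-besteffort-pod7ab3623e_1947_4cdd_849f_79ab19ea38d9.slice
    }

The burstable variant of the same scheme appears at 00:17:25.992 below for the flannel pod.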
Jan 17 00:17:21.238935 kubelet[2592]: I0117 00:17:21.238397 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:17:21.238935 kubelet[2592]: I0117 00:17:21.238449 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7ab3623e-1947-4cdd-849f-79ab19ea38d9-kube-proxy\") pod \"kube-proxy-v54j4\" (UID: \"7ab3623e-1947-4cdd-849f-79ab19ea38d9\") " pod="kube-system/kube-proxy-v54j4" Jan 17 00:17:21.238935 kubelet[2592]: I0117 00:17:21.238474 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ab3623e-1947-4cdd-849f-79ab19ea38d9-xtables-lock\") pod \"kube-proxy-v54j4\" (UID: \"7ab3623e-1947-4cdd-849f-79ab19ea38d9\") " pod="kube-system/kube-proxy-v54j4" Jan 17 00:17:21.238935 kubelet[2592]: I0117 00:17:21.238501 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhdfc\" (UniqueName: \"kubernetes.io/projected/7ab3623e-1947-4cdd-849f-79ab19ea38d9-kube-api-access-mhdfc\") pod \"kube-proxy-v54j4\" (UID: \"7ab3623e-1947-4cdd-849f-79ab19ea38d9\") " pod="kube-system/kube-proxy-v54j4" Jan 17 00:17:21.238935 kubelet[2592]: I0117 00:17:21.238523 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29988a9444d40c251f2061369746f5ec-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"29988a9444d40c251f2061369746f5ec\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:17:21.239368 kubelet[2592]: I0117 00:17:21.238552 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:17:21.239368 kubelet[2592]: I0117 00:17:21.238574 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:17:21.239368 kubelet[2592]: I0117 00:17:21.238601 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 17 00:17:21.239368 kubelet[2592]: I0117 00:17:21.238630 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ab3623e-1947-4cdd-849f-79ab19ea38d9-lib-modules\") pod \"kube-proxy-v54j4\" (UID: \"7ab3623e-1947-4cdd-849f-79ab19ea38d9\") " pod="kube-system/kube-proxy-v54j4" Jan 17 00:17:21.570849 
kubelet[2592]: E0117 00:17:21.510226 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:21.570849 kubelet[2592]: E0117 00:17:21.511253 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:21.585367 kubelet[2592]: E0117 00:17:21.577838 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:21.612224 kubelet[2592]: E0117 00:17:21.612055 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:21.612885 kubelet[2592]: E0117 00:17:21.612729 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:21.691415 kubelet[2592]: E0117 00:17:21.691121 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:21.727897 containerd[1481]: time="2026-01-17T00:17:21.721662543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v54j4,Uid:7ab3623e-1947-4cdd-849f-79ab19ea38d9,Namespace:kube-system,Attempt:0,}" Jan 17 00:17:22.122708 containerd[1481]: time="2026-01-17T00:17:22.122060830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:17:22.122708 containerd[1481]: time="2026-01-17T00:17:22.122201081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:17:22.122708 containerd[1481]: time="2026-01-17T00:17:22.122227952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:17:22.122708 containerd[1481]: time="2026-01-17T00:17:22.122433275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:17:22.283235 systemd[1]: Started cri-containerd-c9a9a149c83ba4984c645eb5d8529e04925347d1d1049a17993e2f31054781b6.scope - libcontainer container c9a9a149c83ba4984c645eb5d8529e04925347d1d1049a17993e2f31054781b6. 
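The recurring "Nameserver limits exceeded" errors mean the host's resolv.conf lists more than three nameservers; resolvers built on glibc consult only the first three (MAXNS = 3), so kubelet trims the list and applies "1.1.1.1 1.0.0.1 8.8.8.8". A sketch of that trimming, with a hypothetical fourth server being dropped:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // glibc's MAXNS; kubelet enforces the same cap

    // keptNameservers returns the nameservers that survive the cap,
    // mirroring the "applied nameserver line" in the warnings above.
    func keptNameservers(resolvConf string) []string {
        var ns []string
        sc := bufio.NewScanner(strings.NewReader(resolvConf))
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                ns = append(ns, fields[1])
            }
        }
        if len(ns) > maxNameservers {
            ns = ns[:maxNameservers]
        }
        return ns
    }

    func main() {
        conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
        fmt.Println(keptNameservers(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
    }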
Jan 17 00:17:22.400766 containerd[1481]: time="2026-01-17T00:17:22.397757975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v54j4,Uid:7ab3623e-1947-4cdd-849f-79ab19ea38d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9a9a149c83ba4984c645eb5d8529e04925347d1d1049a17993e2f31054781b6\"" Jan 17 00:17:22.403295 kubelet[2592]: E0117 00:17:22.399456 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:22.415353 containerd[1481]: time="2026-01-17T00:17:22.414902708Z" level=info msg="CreateContainer within sandbox \"c9a9a149c83ba4984c645eb5d8529e04925347d1d1049a17993e2f31054781b6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:17:22.593363 containerd[1481]: time="2026-01-17T00:17:22.592668864Z" level=info msg="CreateContainer within sandbox \"c9a9a149c83ba4984c645eb5d8529e04925347d1d1049a17993e2f31054781b6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6a4e9fa63cae0eec54d529c0fece1f1c5047bb4708df6f95808ce4566d04581a\"" Jan 17 00:17:22.599389 containerd[1481]: time="2026-01-17T00:17:22.598267270Z" level=info msg="StartContainer for \"6a4e9fa63cae0eec54d529c0fece1f1c5047bb4708df6f95808ce4566d04581a\"" Jan 17 00:17:22.621653 kubelet[2592]: E0117 00:17:22.621466 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:22.631126 kubelet[2592]: E0117 00:17:22.626853 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:22.631126 kubelet[2592]: E0117 00:17:22.629578 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:22.758611 systemd[1]: Started cri-containerd-6a4e9fa63cae0eec54d529c0fece1f1c5047bb4708df6f95808ce4566d04581a.scope - libcontainer container 6a4e9fa63cae0eec54d529c0fece1f1c5047bb4708df6f95808ce4566d04581a. 
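The sandbox-then-container sequence above is the plain CRI flow: RunPodSandbox returns a sandbox id, CreateContainer registers a container config inside it, and StartContainer launches it (the success message follows at 00:17:23.010). A rough client-side sketch of the same three calls against containerd's CRI socket, assuming the k8s.io/cri-api Go bindings; the socket path and image reference are placeholders:

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial containerd's CRI socket (default path assumed).
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // 1. RunPodSandbox, matching the first containerd message above.
        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-proxy-v54j4",
                Uid:       "7ab3623e-1947-4cdd-849f-79ab19ea38d9",
                Namespace: "kube-system",
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }

        // 2. CreateContainer within the returned sandbox (image is a placeholder).
        ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
                Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.33.0"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }

        // 3. StartContainer.
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
            log.Fatal(err)
        }
        log.Println("started", ctr.ContainerId)
    }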
Jan 17 00:17:23.010744 containerd[1481]: time="2026-01-17T00:17:23.010564087Z" level=info msg="StartContainer for \"6a4e9fa63cae0eec54d529c0fece1f1c5047bb4708df6f95808ce4566d04581a\" returns successfully" Jan 17 00:17:23.626685 kubelet[2592]: E0117 00:17:23.626633 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:23.628328 kubelet[2592]: E0117 00:17:23.628236 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:24.717372 kubelet[2592]: E0117 00:17:24.716728 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:25.722890 kubelet[2592]: I0117 00:17:25.722342 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v54j4" podStartSLOduration=5.722320816 podStartE2EDuration="5.722320816s" podCreationTimestamp="2026-01-17 00:17:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:17:23.712406622 +0000 UTC m=+5.125009162" watchObservedRunningTime="2026-01-17 00:17:25.722320816 +0000 UTC m=+7.134923176" Jan 17 00:17:25.920590 kubelet[2592]: I0117 00:17:25.920403 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31247b4b-3e59-45bd-aff6-c7d6de1013f5-xtables-lock\") pod \"kube-flannel-ds-khbh6\" (UID: \"31247b4b-3e59-45bd-aff6-c7d6de1013f5\") " pod="kube-flannel/kube-flannel-ds-khbh6" Jan 17 00:17:25.920590 kubelet[2592]: I0117 00:17:25.920492 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhlt5\" (UniqueName: \"kubernetes.io/projected/31247b4b-3e59-45bd-aff6-c7d6de1013f5-kube-api-access-lhlt5\") pod \"kube-flannel-ds-khbh6\" (UID: \"31247b4b-3e59-45bd-aff6-c7d6de1013f5\") " pod="kube-flannel/kube-flannel-ds-khbh6" Jan 17 00:17:25.920590 kubelet[2592]: I0117 00:17:25.920529 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/31247b4b-3e59-45bd-aff6-c7d6de1013f5-run\") pod \"kube-flannel-ds-khbh6\" (UID: \"31247b4b-3e59-45bd-aff6-c7d6de1013f5\") " pod="kube-flannel/kube-flannel-ds-khbh6" Jan 17 00:17:25.920590 kubelet[2592]: I0117 00:17:25.920551 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/31247b4b-3e59-45bd-aff6-c7d6de1013f5-cni-plugin\") pod \"kube-flannel-ds-khbh6\" (UID: \"31247b4b-3e59-45bd-aff6-c7d6de1013f5\") " pod="kube-flannel/kube-flannel-ds-khbh6" Jan 17 00:17:25.920590 kubelet[2592]: I0117 00:17:25.920571 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/31247b4b-3e59-45bd-aff6-c7d6de1013f5-cni\") pod \"kube-flannel-ds-khbh6\" (UID: \"31247b4b-3e59-45bd-aff6-c7d6de1013f5\") " pod="kube-flannel/kube-flannel-ds-khbh6" Jan 17 00:17:25.922289 kubelet[2592]: I0117 00:17:25.920654 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/31247b4b-3e59-45bd-aff6-c7d6de1013f5-flannel-cfg\") pod \"kube-flannel-ds-khbh6\" (UID: \"31247b4b-3e59-45bd-aff6-c7d6de1013f5\") " pod="kube-flannel/kube-flannel-ds-khbh6" Jan 17 00:17:25.992586 systemd[1]: Created slice kubepods-burstable-pod31247b4b_3e59_45bd_aff6_c7d6de1013f5.slice - libcontainer container kubepods-burstable-pod31247b4b_3e59_45bd_aff6_c7d6de1013f5.slice. Jan 17 00:17:26.082515 kubelet[2592]: E0117 00:17:26.082421 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:26.292592 sudo[1606]: pam_unix(sudo:session): session closed for user root Jan 17 00:17:26.324323 kubelet[2592]: E0117 00:17:26.324195 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:26.336158 sshd[1603]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:26.341195 containerd[1481]: time="2026-01-17T00:17:26.340794798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-khbh6,Uid:31247b4b-3e59-45bd-aff6-c7d6de1013f5,Namespace:kube-flannel,Attempt:0,}" Jan 17 00:17:26.346220 systemd[1]: sshd@4-10.0.0.16:22-10.0.0.1:34148.service: Deactivated successfully. Jan 17 00:17:26.365029 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:17:26.365550 systemd[1]: session-5.scope: Consumed 21.812s CPU time, 166.1M memory peak, 0B memory swap peak. Jan 17 00:17:26.372440 systemd-logind[1466]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:17:26.374304 systemd-logind[1466]: Removed session 5. Jan 17 00:17:26.620574 containerd[1481]: time="2026-01-17T00:17:26.618745844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:17:26.620574 containerd[1481]: time="2026-01-17T00:17:26.618896867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:17:26.620574 containerd[1481]: time="2026-01-17T00:17:26.618911454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:17:26.620574 containerd[1481]: time="2026-01-17T00:17:26.619208689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:17:26.723420 systemd[1]: Started cri-containerd-d0815b94928db279874390e75b7f1d026a34a7bbadd243c2a08e303ad12a6c41.scope - libcontainer container d0815b94928db279874390e75b7f1d026a34a7bbadd243c2a08e303ad12a6c41. 
Jan 17 00:17:26.728225 kubelet[2592]: E0117 00:17:26.724609 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:26.912188 kubelet[2592]: E0117 00:17:26.912057 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:27.103106 containerd[1481]: time="2026-01-17T00:17:27.095461780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-khbh6,Uid:31247b4b-3e59-45bd-aff6-c7d6de1013f5,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"d0815b94928db279874390e75b7f1d026a34a7bbadd243c2a08e303ad12a6c41\"" Jan 17 00:17:27.103418 kubelet[2592]: E0117 00:17:27.102685 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:27.114877 containerd[1481]: time="2026-01-17T00:17:27.112932770Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Jan 17 00:17:27.839069 kubelet[2592]: E0117 00:17:27.835054 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:29.034997 kubelet[2592]: E0117 00:17:29.028696 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:29.804477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3877624292.mount: Deactivated successfully. Jan 17 00:17:29.888739 update_engine[1471]: I20260117 00:17:29.879515 1471 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 17 00:17:29.888739 update_engine[1471]: I20260117 00:17:29.884513 1471 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 17 00:17:29.940323 update_engine[1471]: I20260117 00:17:29.938643 1471 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 17 00:17:29.940323 update_engine[1471]: I20260117 00:17:29.940104 1471 omaha_request_params.cc:62] Current group set to lts Jan 17 00:17:30.016177 kubelet[2592]: E0117 00:17:29.999538 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:30.016350 update_engine[1471]: I20260117 00:17:30.005045 1471 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 17 00:17:30.016350 update_engine[1471]: I20260117 00:17:30.005182 1471 update_attempter.cc:643] Scheduling an action processor start. 
Jan 17 00:17:30.016350 update_engine[1471]: I20260117 00:17:30.005323 1471 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 00:17:30.016924 update_engine[1471]: I20260117 00:17:30.016885 1471 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 17 00:17:30.017508 update_engine[1471]: I20260117 00:17:30.017477 1471 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 00:17:30.017582 update_engine[1471]: I20260117 00:17:30.017563 1471 omaha_request_action.cc:272] Request: Jan 17 00:17:30.017582 update_engine[1471]: [request XML not captured] Jan 17 00:17:30.018194 update_engine[1471]: I20260117 00:17:30.018086 1471 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:17:30.025129 locksmithd[1508]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 17 00:17:30.094300 update_engine[1471]: I20260117 00:17:30.093134 1471 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:17:30.094300 update_engine[1471]: I20260117 00:17:30.093762 1471 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:17:30.123913 update_engine[1471]: E20260117 00:17:30.117746 1471 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:17:30.123913 update_engine[1471]: I20260117 00:17:30.117933 1471 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 17 00:17:30.390036 containerd[1481]: time="2026-01-17T00:17:30.388948965Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:30.404400 containerd[1481]: time="2026-01-17T00:17:30.401712422Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008" Jan 17 00:17:30.415175 containerd[1481]: time="2026-01-17T00:17:30.414426246Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:30.434534 containerd[1481]: time="2026-01-17T00:17:30.432364658Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 3.318666188s" Jan 17 00:17:30.434534 containerd[1481]: time="2026-01-17T00:17:30.432421826Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Jan 17 00:17:30.434534 containerd[1481]: time="2026-01-17T00:17:30.433477036Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:30.474403 containerd[1481]:
time="2026-01-17T00:17:30.473280162Z" level=info msg="CreateContainer within sandbox \"d0815b94928db279874390e75b7f1d026a34a7bbadd243c2a08e303ad12a6c41\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 17 00:17:30.629299 containerd[1481]: time="2026-01-17T00:17:30.619532815Z" level=info msg="CreateContainer within sandbox \"d0815b94928db279874390e75b7f1d026a34a7bbadd243c2a08e303ad12a6c41\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"2b23a95ce775813ac49584aa722a88d05c8cdac77c36306349e2e4bc14fe1add\"" Jan 17 00:17:30.629299 containerd[1481]: time="2026-01-17T00:17:30.623354022Z" level=info msg="StartContainer for \"2b23a95ce775813ac49584aa722a88d05c8cdac77c36306349e2e4bc14fe1add\"" Jan 17 00:17:30.841236 systemd[1]: Started cri-containerd-2b23a95ce775813ac49584aa722a88d05c8cdac77c36306349e2e4bc14fe1add.scope - libcontainer container 2b23a95ce775813ac49584aa722a88d05c8cdac77c36306349e2e4bc14fe1add. Jan 17 00:17:31.071243 containerd[1481]: time="2026-01-17T00:17:31.071012402Z" level=info msg="StartContainer for \"2b23a95ce775813ac49584aa722a88d05c8cdac77c36306349e2e4bc14fe1add\" returns successfully" Jan 17 00:17:31.072330 systemd[1]: cri-containerd-2b23a95ce775813ac49584aa722a88d05c8cdac77c36306349e2e4bc14fe1add.scope: Deactivated successfully. Jan 17 00:17:31.506531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b23a95ce775813ac49584aa722a88d05c8cdac77c36306349e2e4bc14fe1add-rootfs.mount: Deactivated successfully. Jan 17 00:17:31.658876 containerd[1481]: time="2026-01-17T00:17:31.648589843Z" level=info msg="shim disconnected" id=2b23a95ce775813ac49584aa722a88d05c8cdac77c36306349e2e4bc14fe1add namespace=k8s.io Jan 17 00:17:31.658876 containerd[1481]: time="2026-01-17T00:17:31.650378525Z" level=warning msg="cleaning up after shim disconnected" id=2b23a95ce775813ac49584aa722a88d05c8cdac77c36306349e2e4bc14fe1add namespace=k8s.io Jan 17 00:17:31.658876 containerd[1481]: time="2026-01-17T00:17:31.650407188Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:17:31.999362 kubelet[2592]: E0117 00:17:31.998254 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:32.001759 containerd[1481]: time="2026-01-17T00:17:32.001723071Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Jan 17 00:17:40.865123 update_engine[1471]: I20260117 00:17:40.861732 1471 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:17:40.876699 update_engine[1471]: I20260117 00:17:40.874362 1471 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:17:40.876699 update_engine[1471]: I20260117 00:17:40.876223 1471 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 17 00:17:40.902273 update_engine[1471]: E20260117 00:17:40.902112 1471 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:17:40.902273 update_engine[1471]: I20260117 00:17:40.902230 1471 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 17 00:17:42.205245 containerd[1481]: time="2026-01-17T00:17:42.204369040Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:42.213040 containerd[1481]: time="2026-01-17T00:17:42.210699603Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574" Jan 17 00:17:42.215878 containerd[1481]: time="2026-01-17T00:17:42.215344688Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:42.240016 containerd[1481]: time="2026-01-17T00:17:42.239701065Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:42.249345 containerd[1481]: time="2026-01-17T00:17:42.245526064Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 10.2434436s" Jan 17 00:17:42.249345 containerd[1481]: time="2026-01-17T00:17:42.245572801Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Jan 17 00:17:42.288573 containerd[1481]: time="2026-01-17T00:17:42.284405172Z" level=info msg="CreateContainer within sandbox \"d0815b94928db279874390e75b7f1d026a34a7bbadd243c2a08e303ad12a6c41\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 00:17:42.360641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3157666212.mount: Deactivated successfully. Jan 17 00:17:42.367619 containerd[1481]: time="2026-01-17T00:17:42.366722373Z" level=info msg="CreateContainer within sandbox \"d0815b94928db279874390e75b7f1d026a34a7bbadd243c2a08e303ad12a6c41\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"210aa637d03dd46203fc497e92e5b448dc1a09c59db0fa302570544ac66cd922\"" Jan 17 00:17:42.371498 containerd[1481]: time="2026-01-17T00:17:42.369180604Z" level=info msg="StartContainer for \"210aa637d03dd46203fc497e92e5b448dc1a09c59db0fa302570544ac66cd922\"" Jan 17 00:17:42.761747 systemd[1]: Started cri-containerd-210aa637d03dd46203fc497e92e5b448dc1a09c59db0fa302570544ac66cd922.scope - libcontainer container 210aa637d03dd46203fc497e92e5b448dc1a09c59db0fa302570544ac66cd922. Jan 17 00:17:43.207548 systemd[1]: cri-containerd-210aa637d03dd46203fc497e92e5b448dc1a09c59db0fa302570544ac66cd922.scope: Deactivated successfully. 
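The update_engine errors here are expected rather than a DNS fault: "Posting an Omaha request to disabled" shows that the update server URL is the literal string "disabled", the usual way to switch off automatic updates on Flatcar, so each fetch fails with "Could not resolve host: disabled" and the engine simply backs off and retries. The configuration behind it plausibly looks like this (illustrative; GROUP matches the "Current group set to lts" line at 00:17:29.940):

    # /etc/flatcar/update.conf
    GROUP=lts
    SERVER=disabled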
Jan 17 00:17:43.228743 containerd[1481]: time="2026-01-17T00:17:43.225293440Z" level=info msg="StartContainer for \"210aa637d03dd46203fc497e92e5b448dc1a09c59db0fa302570544ac66cd922\" returns successfully" Jan 17 00:17:43.301823 kubelet[2592]: I0117 00:17:43.300816 2592 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:17:43.358789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-210aa637d03dd46203fc497e92e5b448dc1a09c59db0fa302570544ac66cd922-rootfs.mount: Deactivated successfully. Jan 17 00:17:43.524666 containerd[1481]: time="2026-01-17T00:17:43.522893614Z" level=info msg="shim disconnected" id=210aa637d03dd46203fc497e92e5b448dc1a09c59db0fa302570544ac66cd922 namespace=k8s.io Jan 17 00:17:43.524666 containerd[1481]: time="2026-01-17T00:17:43.523490038Z" level=warning msg="cleaning up after shim disconnected" id=210aa637d03dd46203fc497e92e5b448dc1a09c59db0fa302570544ac66cd922 namespace=k8s.io Jan 17 00:17:43.524666 containerd[1481]: time="2026-01-17T00:17:43.523506789Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:17:43.579307 systemd[1]: Created slice kubepods-burstable-pod3b5d6e0e_c647_44da_9e16_fe0354f5e14e.slice - libcontainer container kubepods-burstable-pod3b5d6e0e_c647_44da_9e16_fe0354f5e14e.slice. Jan 17 00:17:43.601652 systemd[1]: Created slice kubepods-burstable-podc9a87cba_bdab_409e_b285_cad9b7da0214.slice - libcontainer container kubepods-burstable-podc9a87cba_bdab_409e_b285_cad9b7da0214.slice. Jan 17 00:17:43.655624 containerd[1481]: time="2026-01-17T00:17:43.652331179Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:17:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:17:43.687504 kubelet[2592]: I0117 00:17:43.686921 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b5d6e0e-c647-44da-9e16-fe0354f5e14e-config-volume\") pod \"coredns-674b8bbfcf-cwtls\" (UID: \"3b5d6e0e-c647-44da-9e16-fe0354f5e14e\") " pod="kube-system/coredns-674b8bbfcf-cwtls" Jan 17 00:17:43.687504 kubelet[2592]: I0117 00:17:43.687136 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvzsn\" (UniqueName: \"kubernetes.io/projected/c9a87cba-bdab-409e-b285-cad9b7da0214-kube-api-access-qvzsn\") pod \"coredns-674b8bbfcf-6tbp6\" (UID: \"c9a87cba-bdab-409e-b285-cad9b7da0214\") " pod="kube-system/coredns-674b8bbfcf-6tbp6" Jan 17 00:17:43.687504 kubelet[2592]: I0117 00:17:43.687180 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6tp9\" (UniqueName: \"kubernetes.io/projected/3b5d6e0e-c647-44da-9e16-fe0354f5e14e-kube-api-access-h6tp9\") pod \"coredns-674b8bbfcf-cwtls\" (UID: \"3b5d6e0e-c647-44da-9e16-fe0354f5e14e\") " pod="kube-system/coredns-674b8bbfcf-cwtls" Jan 17 00:17:43.687504 kubelet[2592]: I0117 00:17:43.687217 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9a87cba-bdab-409e-b285-cad9b7da0214-config-volume\") pod \"coredns-674b8bbfcf-6tbp6\" (UID: \"c9a87cba-bdab-409e-b285-cad9b7da0214\") " pod="kube-system/coredns-674b8bbfcf-6tbp6" Jan 17 00:17:43.901063 kubelet[2592]: E0117 00:17:43.900950 2592 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:43.926776 kubelet[2592]: E0117 00:17:43.916325 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:43.927411 containerd[1481]: time="2026-01-17T00:17:43.919376720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cwtls,Uid:3b5d6e0e-c647-44da-9e16-fe0354f5e14e,Namespace:kube-system,Attempt:0,}" Jan 17 00:17:43.927411 containerd[1481]: time="2026-01-17T00:17:43.923751540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6tbp6,Uid:c9a87cba-bdab-409e-b285-cad9b7da0214,Namespace:kube-system,Attempt:0,}" Jan 17 00:17:44.102353 kubelet[2592]: E0117 00:17:44.100252 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:44.124334 containerd[1481]: time="2026-01-17T00:17:44.123949665Z" level=info msg="CreateContainer within sandbox \"d0815b94928db279874390e75b7f1d026a34a7bbadd243c2a08e303ad12a6c41\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 17 00:17:44.238101 containerd[1481]: time="2026-01-17T00:17:44.237784060Z" level=info msg="CreateContainer within sandbox \"d0815b94928db279874390e75b7f1d026a34a7bbadd243c2a08e303ad12a6c41\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"92fe717c6ff10504c86d14fda2cf4bf22d76986791753b59b96c0fb1afc035cb\"" Jan 17 00:17:44.241233 containerd[1481]: time="2026-01-17T00:17:44.241197860Z" level=info msg="StartContainer for \"92fe717c6ff10504c86d14fda2cf4bf22d76986791753b59b96c0fb1afc035cb\"" Jan 17 00:17:44.381402 containerd[1481]: time="2026-01-17T00:17:44.381192363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6tbp6,Uid:c9a87cba-bdab-409e-b285-cad9b7da0214,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cd0fdac708a59a456e5570ccc55b5a272bad3831661006a03f6a64b09b24a8f1\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 17 00:17:44.386533 kubelet[2592]: E0117 00:17:44.386180 2592 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd0fdac708a59a456e5570ccc55b5a272bad3831661006a03f6a64b09b24a8f1\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 17 00:17:44.387227 kubelet[2592]: E0117 00:17:44.386703 2592 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd0fdac708a59a456e5570ccc55b5a272bad3831661006a03f6a64b09b24a8f1\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-6tbp6" Jan 17 00:17:44.387227 kubelet[2592]: E0117 00:17:44.386780 2592 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd0fdac708a59a456e5570ccc55b5a272bad3831661006a03f6a64b09b24a8f1\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no 
such file or directory" pod="kube-system/coredns-674b8bbfcf-6tbp6" Jan 17 00:17:44.387750 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cd0fdac708a59a456e5570ccc55b5a272bad3831661006a03f6a64b09b24a8f1-shm.mount: Deactivated successfully. Jan 17 00:17:44.390347 kubelet[2592]: E0117 00:17:44.388787 2592 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52a3fb5983b3855ecc5a0b8ff0e73ad158e6c4db6f5a21dfd0ffcd5a9d55758b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 17 00:17:44.390347 kubelet[2592]: E0117 00:17:44.388930 2592 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52a3fb5983b3855ecc5a0b8ff0e73ad158e6c4db6f5a21dfd0ffcd5a9d55758b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-cwtls" Jan 17 00:17:44.390347 kubelet[2592]: E0117 00:17:44.389022 2592 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52a3fb5983b3855ecc5a0b8ff0e73ad158e6c4db6f5a21dfd0ffcd5a9d55758b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-cwtls" Jan 17 00:17:44.390347 kubelet[2592]: E0117 00:17:44.389204 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-cwtls_kube-system(3b5d6e0e-c647-44da-9e16-fe0354f5e14e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-cwtls_kube-system(3b5d6e0e-c647-44da-9e16-fe0354f5e14e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52a3fb5983b3855ecc5a0b8ff0e73ad158e6c4db6f5a21dfd0ffcd5a9d55758b\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-cwtls" podUID="3b5d6e0e-c647-44da-9e16-fe0354f5e14e" Jan 17 00:17:44.390634 containerd[1481]: time="2026-01-17T00:17:44.388331791Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cwtls,Uid:3b5d6e0e-c647-44da-9e16-fe0354f5e14e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"52a3fb5983b3855ecc5a0b8ff0e73ad158e6c4db6f5a21dfd0ffcd5a9d55758b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 17 00:17:44.388611 systemd[1]: run-netns-cni\x2d2262ab71\x2d3f29\x2d3383\x2d1154\x2dcc8d288a36ca.mount: Deactivated successfully. 
Jan 17 00:17:44.390757 kubelet[2592]: E0117 00:17:44.389297 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6tbp6_kube-system(c9a87cba-bdab-409e-b285-cad9b7da0214)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6tbp6_kube-system(c9a87cba-bdab-409e-b285-cad9b7da0214)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd0fdac708a59a456e5570ccc55b5a272bad3831661006a03f6a64b09b24a8f1\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-6tbp6" podUID="c9a87cba-bdab-409e-b285-cad9b7da0214" Jan 17 00:17:44.388729 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52a3fb5983b3855ecc5a0b8ff0e73ad158e6c4db6f5a21dfd0ffcd5a9d55758b-shm.mount: Deactivated successfully. Jan 17 00:17:44.541206 systemd[1]: Started cri-containerd-92fe717c6ff10504c86d14fda2cf4bf22d76986791753b59b96c0fb1afc035cb.scope - libcontainer container 92fe717c6ff10504c86d14fda2cf4bf22d76986791753b59b96c0fb1afc035cb. Jan 17 00:17:44.788045 containerd[1481]: time="2026-01-17T00:17:44.785934006Z" level=info msg="StartContainer for \"92fe717c6ff10504c86d14fda2cf4bf22d76986791753b59b96c0fb1afc035cb\" returns successfully" Jan 17 00:17:45.220444 kubelet[2592]: E0117 00:17:45.218884 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:46.067069 systemd-networkd[1401]: flannel.1: Link UP Jan 17 00:17:46.067085 systemd-networkd[1401]: flannel.1: Gained carrier Jan 17 00:17:46.222033 kubelet[2592]: E0117 00:17:46.220645 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:47.884500 systemd-networkd[1401]: flannel.1: Gained IPv6LL Jan 17 00:17:50.862646 update_engine[1471]: I20260117 00:17:50.861717 1471 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:17:50.865745 update_engine[1471]: I20260117 00:17:50.863759 1471 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:17:50.865745 update_engine[1471]: I20260117 00:17:50.865546 1471 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
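Both coredns sandbox failures at 00:17:44 share one cause: the flannel CNI plugin reads /run/flannel/subnet.env, and flanneld writes that file only once its own pod is up; the kube-flannel container had started just moments earlier (00:17:44.785). By the retry at 00:17:58 the file exists and sandbox creation succeeds. Its contents take roughly this shape (values inferred from the bridge netconf logged below; illustrative):

    # /run/flannel/subnet.env, written by flanneld once it is running
    FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=false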
Jan 17 00:17:50.895056 update_engine[1471]: E20260117 00:17:50.892208 1471 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:17:50.895056 update_engine[1471]: I20260117 00:17:50.894121 1471 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 17 00:17:58.277182 kubelet[2592]: E0117 00:17:58.273639 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:58.291057 containerd[1481]: time="2026-01-17T00:17:58.290595625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6tbp6,Uid:c9a87cba-bdab-409e-b285-cad9b7da0214,Namespace:kube-system,Attempt:0,}" Jan 17 00:17:58.430156 systemd-networkd[1401]: cni0: Link UP Jan 17 00:17:58.430164 systemd-networkd[1401]: cni0: Gained carrier Jan 17 00:17:58.437264 systemd-networkd[1401]: cni0: Lost carrier Jan 17 00:17:58.468556 systemd-networkd[1401]: vethde252f1b: Link UP Jan 17 00:17:58.480226 kernel: cni0: port 1(vethde252f1b) entered blocking state Jan 17 00:17:58.480353 kernel: cni0: port 1(vethde252f1b) entered disabled state Jan 17 00:17:58.484075 kernel: vethde252f1b: entered allmulticast mode Jan 17 00:17:58.484163 kernel: vethde252f1b: entered promiscuous mode Jan 17 00:17:58.497034 kernel: cni0: port 1(vethde252f1b) entered blocking state Jan 17 00:17:58.497147 kernel: cni0: port 1(vethde252f1b) entered forwarding state Jan 17 00:17:58.505703 kernel: cni0: port 1(vethde252f1b) entered disabled state Jan 17 00:17:58.519468 kernel: cni0: port 1(vethde252f1b) entered blocking state Jan 17 00:17:58.519566 kernel: cni0: port 1(vethde252f1b) entered forwarding state Jan 17 00:17:58.516631 systemd-networkd[1401]: vethde252f1b: Gained carrier Jan 17 00:17:58.521325 systemd-networkd[1401]: cni0: Gained carrier Jan 17 00:17:58.529427 containerd[1481]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a950), "name":"cbr0", "type":"bridge"} Jan 17 00:17:58.529427 containerd[1481]: delegateAdd: netconf sent to delegate plugin: Jan 17 00:17:58.628153 containerd[1481]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-17T00:17:58.627761979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:17:58.628618 containerd[1481]: time="2026-01-17T00:17:58.628167146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:17:58.628618 containerd[1481]: time="2026-01-17T00:17:58.628198374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:17:58.628618 containerd[1481]: time="2026-01-17T00:17:58.628463398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:17:58.791283 systemd[1]: Started cri-containerd-87eb59b04fa93ab10cdb6edf632e6a8333e672a4054c69cdb0ce60d8abc7236e.scope - libcontainer container 87eb59b04fa93ab10cdb6edf632e6a8333e672a4054c69cdb0ce60d8abc7236e. Jan 17 00:17:58.828913 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:17:58.910096 containerd[1481]: time="2026-01-17T00:17:58.907209320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6tbp6,Uid:c9a87cba-bdab-409e-b285-cad9b7da0214,Namespace:kube-system,Attempt:0,} returns sandbox id \"87eb59b04fa93ab10cdb6edf632e6a8333e672a4054c69cdb0ce60d8abc7236e\"" Jan 17 00:17:58.917554 kubelet[2592]: E0117 00:17:58.915784 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:58.975819 containerd[1481]: time="2026-01-17T00:17:58.975606460Z" level=info msg="CreateContainer within sandbox \"87eb59b04fa93ab10cdb6edf632e6a8333e672a4054c69cdb0ce60d8abc7236e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:17:59.088157 containerd[1481]: time="2026-01-17T00:17:59.086369744Z" level=info msg="CreateContainer within sandbox \"87eb59b04fa93ab10cdb6edf632e6a8333e672a4054c69cdb0ce60d8abc7236e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2c65d45baf1157525ff0a7f4f2ee23a3958c02c38df9048ae16c59e5c081b078\"" Jan 17 00:17:59.093042 containerd[1481]: time="2026-01-17T00:17:59.090195137Z" level=info msg="StartContainer for \"2c65d45baf1157525ff0a7f4f2ee23a3958c02c38df9048ae16c59e5c081b078\"" Jan 17 00:17:59.188635 systemd[1]: Started cri-containerd-2c65d45baf1157525ff0a7f4f2ee23a3958c02c38df9048ae16c59e5c081b078.scope - libcontainer container 2c65d45baf1157525ff0a7f4f2ee23a3958c02c38df9048ae16c59e5c081b078. 
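The raw netconf dump at 00:17:58.529 prints the IPAM route destination as net.IP{0xc0, 0xa8, 0x0, 0x0} with mask net.IPMask{0xff, 0xff, 0x80, 0x0}; decoded, that is 192.168.0.0/17, which is exactly the "dst":"192.168.0.0/17" route in the JSON handed to the bridge delegate. A short Go check:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // The raw route from the netconf dump above.
        dst := net.IPNet{
            IP:   net.IP{0xc0, 0xa8, 0x00, 0x00},     // 192.168.0.0
            Mask: net.IPMask{0xff, 0xff, 0x80, 0x00}, // 255.255.128.0
        }
        ones, bits := dst.Mask.Size()
        fmt.Printf("%s -> %s (/%d of %d bits)\n", dst.IP, dst.String(), ones, bits)
        // 192.168.0.0 -> 192.168.0.0/17 (/17 of 32 bits)
    }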
Jan 17 00:17:59.230377 kubelet[2592]: E0117 00:17:59.229029 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:59.231771 containerd[1481]: time="2026-01-17T00:17:59.231721562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cwtls,Uid:3b5d6e0e-c647-44da-9e16-fe0354f5e14e,Namespace:kube-system,Attempt:0,}" Jan 17 00:17:59.333821 containerd[1481]: time="2026-01-17T00:17:59.333342156Z" level=info msg="StartContainer for \"2c65d45baf1157525ff0a7f4f2ee23a3958c02c38df9048ae16c59e5c081b078\" returns successfully" Jan 17 00:17:59.458343 systemd-networkd[1401]: veth412733f8: Link UP Jan 17 00:17:59.488144 kernel: cni0: port 2(veth412733f8) entered blocking state Jan 17 00:17:59.488310 kernel: cni0: port 2(veth412733f8) entered disabled state Jan 17 00:17:59.488350 kernel: veth412733f8: entered allmulticast mode Jan 17 00:17:59.505756 kernel: veth412733f8: entered promiscuous mode Jan 17 00:17:59.560593 kernel: cni0: port 2(veth412733f8) entered blocking state Jan 17 00:17:59.561722 kernel: cni0: port 2(veth412733f8) entered forwarding state Jan 17 00:17:59.560809 systemd-networkd[1401]: veth412733f8: Gained carrier Jan 17 00:17:59.575036 containerd[1481]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000129a0), "name":"cbr0", "type":"bridge"} Jan 17 00:17:59.575036 containerd[1481]: delegateAdd: netconf sent to delegate plugin: Jan 17 00:17:59.676116 containerd[1481]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-17T00:17:59.675531147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:17:59.676116 containerd[1481]: time="2026-01-17T00:17:59.675623711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:17:59.676116 containerd[1481]: time="2026-01-17T00:17:59.675638779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:17:59.676116 containerd[1481]: time="2026-01-17T00:17:59.675756308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:17:59.699744 kubelet[2592]: E0117 00:17:59.699621 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:17:59.745709 kubelet[2592]: I0117 00:17:59.745327 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-khbh6" podStartSLOduration=19.577534651 podStartE2EDuration="34.745306104s" podCreationTimestamp="2026-01-17 00:17:25 +0000 UTC" firstStartedPulling="2026-01-17 00:17:27.104632377 +0000 UTC m=+8.517234717" lastFinishedPulling="2026-01-17 00:17:42.27240383 +0000 UTC m=+23.685006170" observedRunningTime="2026-01-17 00:17:45.326804369 +0000 UTC m=+26.739406719" watchObservedRunningTime="2026-01-17 00:17:59.745306104 +0000 UTC m=+41.157908464" Jan 17 00:17:59.748134 systemd[1]: Started cri-containerd-d1e5cd96cb241d4085a98a1365d2b10518d462aea8483f82d8eb75f46cdeeaa1.scope - libcontainer container d1e5cd96cb241d4085a98a1365d2b10518d462aea8483f82d8eb75f46cdeeaa1. Jan 17 00:17:59.799747 kubelet[2592]: I0117 00:17:59.793850 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6tbp6" podStartSLOduration=40.793829868 podStartE2EDuration="40.793829868s" podCreationTimestamp="2026-01-17 00:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:17:59.744397856 +0000 UTC m=+41.157000196" watchObservedRunningTime="2026-01-17 00:17:59.793829868 +0000 UTC m=+41.206432208" Jan 17 00:17:59.820808 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:17:59.982195 containerd[1481]: time="2026-01-17T00:17:59.982108344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cwtls,Uid:3b5d6e0e-c647-44da-9e16-fe0354f5e14e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1e5cd96cb241d4085a98a1365d2b10518d462aea8483f82d8eb75f46cdeeaa1\"" Jan 17 00:17:59.987901 kubelet[2592]: E0117 00:17:59.984571 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:18:00.051485 containerd[1481]: time="2026-01-17T00:18:00.051333034Z" level=info msg="CreateContainer within sandbox \"d1e5cd96cb241d4085a98a1365d2b10518d462aea8483f82d8eb75f46cdeeaa1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:18:00.115851 containerd[1481]: time="2026-01-17T00:18:00.114509207Z" level=info msg="CreateContainer within sandbox \"d1e5cd96cb241d4085a98a1365d2b10518d462aea8483f82d8eb75f46cdeeaa1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b9151446780502b4a968f86597c85c420f654547cd3c781ccd8e1e92928a855\"" Jan 17 00:18:00.122045 containerd[1481]: time="2026-01-17T00:18:00.120196248Z" level=info msg="StartContainer for \"2b9151446780502b4a968f86597c85c420f654547cd3c781ccd8e1e92928a855\"" Jan 17 00:18:00.227463 systemd-networkd[1401]: vethde252f1b: Gained IPv6LL Jan 17 00:18:00.272387 systemd[1]: Started cri-containerd-2b9151446780502b4a968f86597c85c420f654547cd3c781ccd8e1e92928a855.scope - libcontainer container 2b9151446780502b4a968f86597c85c420f654547cd3c781ccd8e1e92928a855. 
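The pod_startup_latency_tracker entries decompose cleanly. For kube-flannel-ds-khbh6: E2E duration = watchObservedRunningTime minus podCreationTimestamp = 00:17:59.745306104 minus 00:17:25 = 34.745306104 s; the image-pull window is lastFinishedPulling minus firstStartedPulling = 00:17:42.272403830 minus 00:17:27.104632377 = 15.167771453 s; and SLO duration = E2E minus pull window = 34.745306104 minus 15.167771453 = 19.577534651 s, exactly the logged podStartSLOduration. For pods whose images were already local (kube-proxy-v54j4 earlier, the coredns pods here), the pull timestamps are the zero time, so SLO equals E2E.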
Jan 17 00:18:00.474083 systemd-networkd[1401]: cni0: Gained IPv6LL Jan 17 00:18:00.476038 containerd[1481]: time="2026-01-17T00:18:00.475744313Z" level=info msg="StartContainer for \"2b9151446780502b4a968f86597c85c420f654547cd3c781ccd8e1e92928a855\" returns successfully" Jan 17 00:18:00.728121 kubelet[2592]: E0117 00:18:00.726212 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:18:00.743028 kubelet[2592]: E0117 00:18:00.742432 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:18:00.879543 update_engine[1471]: I20260117 00:18:00.863845 1471 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:18:00.896138 update_engine[1471]: I20260117 00:18:00.894781 1471 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:18:00.896138 update_engine[1471]: I20260117 00:18:00.895522 1471 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:18:00.905342 kubelet[2592]: I0117 00:18:00.904458 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cwtls" podStartSLOduration=41.904438109 podStartE2EDuration="41.904438109s" podCreationTimestamp="2026-01-17 00:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:18:00.903655883 +0000 UTC m=+42.316258224" watchObservedRunningTime="2026-01-17 00:18:00.904438109 +0000 UTC m=+42.317040459" Jan 17 00:18:00.919757 update_engine[1471]: E20260117 00:18:00.919097 1471 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:18:00.919757 update_engine[1471]: I20260117 00:18:00.919250 1471 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 00:18:00.919757 update_engine[1471]: I20260117 00:18:00.919271 1471 omaha_request_action.cc:617] Omaha request response: Jan 17 00:18:00.919757 update_engine[1471]: E20260117 00:18:00.919530 1471 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 17 00:18:00.919757 update_engine[1471]: I20260117 00:18:00.919613 1471 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 17 00:18:00.919757 update_engine[1471]: I20260117 00:18:00.919631 1471 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:18:00.919757 update_engine[1471]: I20260117 00:18:00.919641 1471 update_attempter.cc:306] Processing Done. Jan 17 00:18:00.920333 update_engine[1471]: E20260117 00:18:00.919778 1471 update_attempter.cc:619] Update failed. Jan 17 00:18:00.920333 update_engine[1471]: I20260117 00:18:00.919797 1471 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 17 00:18:00.920333 update_engine[1471]: I20260117 00:18:00.919807 1471 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 17 00:18:00.920333 update_engine[1471]: I20260117 00:18:00.919818 1471 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 17 00:18:00.920333 update_engine[1471]: I20260117 00:18:00.920051 1471 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 00:18:00.920333 update_engine[1471]: I20260117 00:18:00.920090 1471 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 00:18:00.920333 update_engine[1471]: I20260117 00:18:00.920102 1471 omaha_request_action.cc:272] Request: Jan 17 00:18:00.920333 update_engine[1471]: [Omaha request XML body not captured in this log] Jan 17 00:18:00.920333 update_engine[1471]: I20260117 00:18:00.920114 1471 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:18:00.920741 update_engine[1471]: I20260117 00:18:00.920418 1471 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:18:00.920741 update_engine[1471]: I20260117 00:18:00.920686 1471 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:18:00.921555 locksmithd[1508]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 17 00:18:00.950176 update_engine[1471]: E20260117 00:18:00.946182 1471 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:18:00.950176 update_engine[1471]: I20260117 00:18:00.946303 1471 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 00:18:00.950176 update_engine[1471]: I20260117 00:18:00.946319 1471 omaha_request_action.cc:617] Omaha request response: Jan 17 00:18:00.950176 update_engine[1471]: I20260117 00:18:00.946332 1471 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:18:00.950176 update_engine[1471]: I20260117 00:18:00.946341 1471 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:18:00.950176 update_engine[1471]: I20260117 00:18:00.946350 1471 update_attempter.cc:306] Processing Done. Jan 17 00:18:00.950176 update_engine[1471]: I20260117 00:18:00.946363 1471 update_attempter.cc:310] Error event sent. 
Jan 17 00:18:00.950176 update_engine[1471]: I20260117 00:18:00.946380 1471 update_check_scheduler.cc:74] Next update check in 45m47s Jan 17 00:18:00.962166 locksmithd[1508]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 17 00:18:01.380792 systemd-networkd[1401]: veth412733f8: Gained IPv6LL Jan 17 00:18:01.807010 kubelet[2592]: E0117 00:18:01.804320 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:18:27.232448 kubelet[2592]: E0117 00:18:27.231662 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:18:28.229177 kubelet[2592]: E0117 00:18:28.229136 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:18:42.232540 kubelet[2592]: E0117 00:18:42.231416 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:18:51.250732 kubelet[2592]: E0117 00:18:51.244609 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:18:55.685887 kubelet[2592]: E0117 00:18:55.627848 2592 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.324s" Jan 17 00:18:56.749854 kubelet[2592]: E0117 00:18:56.735210 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:19:03.239289 kubelet[2592]: E0117 00:19:03.237414 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:19:08.783320 systemd[1]: Started sshd@5-10.0.0.16:22-10.0.0.1:51596.service - OpenSSH per-connection server daemon (10.0.0.1:51596). Jan 17 00:19:09.071139 sshd[3800]: Accepted publickey for core from 10.0.0.1 port 51596 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:19:09.083554 sshd[3800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:09.138024 systemd-logind[1466]: New session 6 of user core. Jan 17 00:19:09.166110 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:19:09.752274 sshd[3800]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:09.764182 systemd[1]: sshd@5-10.0.0.16:22-10.0.0.1:51596.service: Deactivated successfully. Jan 17 00:19:09.770445 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:19:09.774878 systemd-logind[1466]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:19:09.785513 systemd-logind[1466]: Removed session 6. 
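The update_engine trace that ends above is a deliberately failing check: the Omaha server is configured as the literal string "disabled", so curl can never resolve the host, the failure is mapped to kActionCodeOmahaErrorInHTTPResponse (error 37), the error event it tries to post fails the same way, and the attempter goes idle until the next check. A rough Go sketch of that check, report, reschedule loop; the 45-minute base interval plus random jitter is an assumption chosen to match the logged "45m47s", and none of this is update_engine's actual code:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

const checkInterval = 45 * time.Minute // assumed base period; jitter explains "45m47s"

// tryOmahaCheck stands in for the real HTTP fetch; with the server set to
// the placeholder "disabled" it always fails, mirroring the log above.
func tryOmahaCheck(server string) error {
	return errors.New("Could not resolve host: " + server)
}

func main() {
	for i := 0; i < 3; i++ { // a few rounds instead of a long-running daemon
		if err := tryOmahaCheck("disabled"); err != nil {
			fmt.Println("Omaha request network transfer failed:", err)
		}
		// Reschedule with jitter rather than a fixed cadence, so a fleet of
		// nodes does not hit the update server in lockstep.
		next := checkInterval + time.Duration(rand.Intn(60))*time.Second
		fmt.Println("Next update check in", next)
	}
}
```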
Jan 17 00:19:13.608149 kubelet[2592]: E0117 00:19:13.601928 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:19:14.929637 systemd[1]: Started sshd@6-10.0.0.16:22-10.0.0.1:54788.service - OpenSSH per-connection server daemon (10.0.0.1:54788). Jan 17 00:19:15.137109 sshd[3840]: Accepted publickey for core from 10.0.0.1 port 54788 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:19:15.141938 sshd[3840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:15.290026 systemd-logind[1466]: New session 7 of user core. Jan 17 00:19:15.307589 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:19:26.007440 kubelet[2592]: E0117 00:19:26.007234 2592 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.554s" Jan 17 00:19:26.010850 sshd[3840]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:26.052736 systemd[1]: sshd@6-10.0.0.16:22-10.0.0.1:54788.service: Deactivated successfully. Jan 17 00:19:26.060840 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:19:26.064718 systemd[1]: session-7.scope: Consumed 3.027s CPU time. Jan 17 00:19:26.075273 systemd-logind[1466]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:19:26.078303 systemd-logind[1466]: Removed session 7. Jan 17 00:19:31.038835 systemd[1]: Started sshd@7-10.0.0.16:22-10.0.0.1:44708.service - OpenSSH per-connection server daemon (10.0.0.1:44708). Jan 17 00:19:31.160045 sshd[3889]: Accepted publickey for core from 10.0.0.1 port 44708 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:19:31.166654 sshd[3889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:31.187659 systemd-logind[1466]: New session 8 of user core. Jan 17 00:19:31.193764 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:19:31.659200 sshd[3889]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:31.685408 systemd[1]: sshd@7-10.0.0.16:22-10.0.0.1:44708.service: Deactivated successfully. Jan 17 00:19:31.691633 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:19:31.699688 systemd-logind[1466]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:19:31.705643 systemd-logind[1466]: Removed session 8. Jan 17 00:19:36.718567 systemd[1]: Started sshd@8-10.0.0.16:22-10.0.0.1:45548.service - OpenSSH per-connection server daemon (10.0.0.1:45548). Jan 17 00:19:36.836453 sshd[3939]: Accepted publickey for core from 10.0.0.1 port 45548 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:19:36.839259 sshd[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:36.873313 systemd-logind[1466]: New session 9 of user core. Jan 17 00:19:36.902404 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:19:37.210179 sshd[3939]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:37.216669 systemd[1]: sshd@8-10.0.0.16:22-10.0.0.1:45548.service: Deactivated successfully. Jan 17 00:19:37.224167 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:19:37.236167 systemd-logind[1466]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:19:37.238296 systemd-logind[1466]: Removed session 9. 
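The pod_startup_latency_tracker entries earlier in the log record two durations per pod: podStartE2EDuration (pod creation to observed running) and podStartSLOduration, which excludes time spent pulling images. The kube-flannel-ds-khbh6 numbers check out exactly; a sketch of that arithmetic using the logged timestamps (the parsing helper is illustrative, not kubelet code):

```go
package main

import (
	"fmt"
	"time"
)

// mustParse reads the timestamp format kubelet prints in these entries.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the kube-flannel-ds-khbh6 entry above.
	created := mustParse("2026-01-17 00:17:25 +0000 UTC")
	firstPull := mustParse("2026-01-17 00:17:27.104632377 +0000 UTC")
	lastPull := mustParse("2026-01-17 00:17:42.27240383 +0000 UTC")
	running := mustParse("2026-01-17 00:17:59.745306104 +0000 UTC")

	e2e := running.Sub(created)     // 34.745306104s, matches podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 15.167771453s spent pulling images
	slo := e2e - pull               // 19.577534651s, matches podStartSLOduration
	fmt.Println("e2e:", e2e, "pull:", pull, "slo:", slo)
}
```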
Jan 17 00:19:38.230391 kubelet[2592]: E0117 00:19:38.228168 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:19:42.313671 systemd[1]: Started sshd@9-10.0.0.16:22-10.0.0.1:45564.service - OpenSSH per-connection server daemon (10.0.0.1:45564). Jan 17 00:19:42.411202 sshd[3975]: Accepted publickey for core from 10.0.0.1 port 45564 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:19:42.414927 sshd[3975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:42.441649 systemd-logind[1466]: New session 10 of user core. Jan 17 00:19:42.456333 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:19:42.752342 sshd[3975]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:42.769290 systemd[1]: sshd@9-10.0.0.16:22-10.0.0.1:45564.service: Deactivated successfully. Jan 17 00:19:42.776556 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:19:42.782535 systemd-logind[1466]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:19:42.789055 systemd-logind[1466]: Removed session 10. Jan 17 00:19:47.822333 systemd[1]: Started sshd@10-10.0.0.16:22-10.0.0.1:44448.service - OpenSSH per-connection server daemon (10.0.0.1:44448). Jan 17 00:19:47.932102 sshd[4011]: Accepted publickey for core from 10.0.0.1 port 44448 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:19:47.936894 sshd[4011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:48.002050 systemd-logind[1466]: New session 11 of user core. Jan 17 00:19:48.006350 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:19:48.584627 sshd[4011]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:48.592233 systemd[1]: sshd@10-10.0.0.16:22-10.0.0.1:44448.service: Deactivated successfully. Jan 17 00:19:48.600247 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:19:48.605108 systemd-logind[1466]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:19:48.612594 systemd-logind[1466]: Removed session 11. Jan 17 00:19:49.236034 kubelet[2592]: E0117 00:19:49.234093 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:19:53.625401 systemd[1]: Started sshd@11-10.0.0.16:22-10.0.0.1:59322.service - OpenSSH per-connection server daemon (10.0.0.1:59322). Jan 17 00:19:53.741781 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 59322 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:19:53.750605 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:53.792122 systemd-logind[1466]: New session 12 of user core. Jan 17 00:19:53.809893 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:19:54.258302 sshd[4047]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:54.283125 systemd[1]: sshd@11-10.0.0.16:22-10.0.0.1:59322.service: Deactivated successfully. Jan 17 00:19:54.289052 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:19:54.292524 systemd-logind[1466]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:19:54.294380 systemd-logind[1466]: Removed session 12. 
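The dns.go:153 error that recurs throughout this log is kubelet noting that the node's /etc/resolv.conf lists more nameservers than the glibc resolver can use (MAXNS is 3), so only the first three (1.1.1.1 1.0.0.1 8.8.8.8) are applied to pod resolv.conf files. A standalone sketch of the same check, not kubelet's implementation:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc's MAXNS; kubelet applies the same cap

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Collect every "nameserver <addr>" line.
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}

	applied := servers
	if len(applied) > maxNameservers {
		fmt.Println("Nameserver limits exceeded, some nameservers have been omitted")
		applied = applied[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(applied, " "))
}
```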
Jan 17 00:19:56.234476 kubelet[2592]: E0117 00:19:56.234114 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:19:59.376413 systemd[1]: Started sshd@12-10.0.0.16:22-10.0.0.1:59334.service - OpenSSH per-connection server daemon (10.0.0.1:59334). Jan 17 00:19:59.522944 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 59334 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:19:59.532719 sshd[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:59.590359 systemd-logind[1466]: New session 13 of user core. Jan 17 00:19:59.607611 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:20:00.112697 sshd[4088]: pam_unix(sshd:session): session closed for user core Jan 17 00:20:00.144435 systemd[1]: sshd@12-10.0.0.16:22-10.0.0.1:59334.service: Deactivated successfully. Jan 17 00:20:00.177745 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:20:00.183323 systemd-logind[1466]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:20:00.229354 systemd[1]: Started sshd@13-10.0.0.16:22-10.0.0.1:59340.service - OpenSSH per-connection server daemon (10.0.0.1:59340). Jan 17 00:20:00.231348 systemd-logind[1466]: Removed session 13. Jan 17 00:20:00.471746 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 59340 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:20:00.477303 sshd[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:20:00.504465 systemd-logind[1466]: New session 14 of user core. Jan 17 00:20:00.542822 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:20:01.453837 sshd[4104]: pam_unix(sshd:session): session closed for user core Jan 17 00:20:01.500757 systemd[1]: sshd@13-10.0.0.16:22-10.0.0.1:59340.service: Deactivated successfully. Jan 17 00:20:01.504315 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:20:01.531164 systemd-logind[1466]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:20:01.573362 systemd[1]: Started sshd@14-10.0.0.16:22-10.0.0.1:59348.service - OpenSSH per-connection server daemon (10.0.0.1:59348). Jan 17 00:20:01.589440 systemd-logind[1466]: Removed session 14. Jan 17 00:20:01.678190 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 59348 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:20:01.681086 sshd[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:20:01.692316 systemd-logind[1466]: New session 15 of user core. Jan 17 00:20:01.702489 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:20:02.138454 sshd[4123]: pam_unix(sshd:session): session closed for user core Jan 17 00:20:02.180693 systemd[1]: sshd@14-10.0.0.16:22-10.0.0.1:59348.service: Deactivated successfully. Jan 17 00:20:02.188703 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:20:02.200046 systemd-logind[1466]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:20:02.206848 systemd-logind[1466]: Removed session 15. Jan 17 00:20:07.178535 systemd[1]: Started sshd@15-10.0.0.16:22-10.0.0.1:47282.service - OpenSSH per-connection server daemon (10.0.0.1:47282). 
Jan 17 00:20:07.259443 sshd[4157]: Accepted publickey for core from 10.0.0.1 port 47282 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:20:07.260422 sshd[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:20:07.274926 systemd-logind[1466]: New session 16 of user core. Jan 17 00:20:07.284244 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:20:07.602745 sshd[4157]: pam_unix(sshd:session): session closed for user core Jan 17 00:20:07.620162 systemd[1]: sshd@15-10.0.0.16:22-10.0.0.1:47282.service: Deactivated successfully. Jan 17 00:20:07.622728 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:20:07.630139 systemd-logind[1466]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:20:07.640431 systemd-logind[1466]: Removed session 16. Jan 17 00:20:12.236483 kubelet[2592]: E0117 00:20:12.235217 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:20:12.676551 systemd[1]: Started sshd@16-10.0.0.16:22-10.0.0.1:47180.service - OpenSSH per-connection server daemon (10.0.0.1:47180). Jan 17 00:20:12.806286 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 47180 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:20:12.810287 sshd[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:20:12.847362 systemd-logind[1466]: New session 17 of user core. Jan 17 00:20:12.889235 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:20:13.303175 sshd[4192]: pam_unix(sshd:session): session closed for user core Jan 17 00:20:13.312642 systemd[1]: sshd@16-10.0.0.16:22-10.0.0.1:47180.service: Deactivated successfully. Jan 17 00:20:13.319520 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:20:13.325628 systemd-logind[1466]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:20:13.332218 systemd-logind[1466]: Removed session 17. Jan 17 00:20:18.328630 systemd[1]: Started sshd@17-10.0.0.16:22-10.0.0.1:47182.service - OpenSSH per-connection server daemon (10.0.0.1:47182). Jan 17 00:20:18.514381 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 47182 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:20:18.528408 sshd[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:20:18.601622 systemd-logind[1466]: New session 18 of user core. Jan 17 00:20:18.622109 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 00:20:19.198941 sshd[4226]: pam_unix(sshd:session): session closed for user core Jan 17 00:20:19.208805 systemd[1]: sshd@17-10.0.0.16:22-10.0.0.1:47182.service: Deactivated successfully. Jan 17 00:20:19.211658 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:20:19.230366 systemd-logind[1466]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:20:19.232077 systemd-logind[1466]: Removed session 18. Jan 17 00:20:22.517105 kubelet[2592]: E0117 00:20:22.514075 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:20:25.374578 systemd[1]: Started sshd@18-10.0.0.16:22-10.0.0.1:55486.service - OpenSSH per-connection server daemon (10.0.0.1:55486). 
Jan 17 00:20:25.488204 kubelet[2592]: E0117 00:20:25.480676 2592 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.239s" Jan 17 00:20:25.502820 kubelet[2592]: E0117 00:20:25.502360 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:20:25.936560 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 55486 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:20:25.968081 sshd[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:20:25.990070 systemd-logind[1466]: New session 19 of user core. Jan 17 00:20:26.010171 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:20:26.540765 sshd[4275]: pam_unix(sshd:session): session closed for user core Jan 17 00:20:26.589103 systemd[1]: sshd@18-10.0.0.16:22-10.0.0.1:55486.service: Deactivated successfully. Jan 17 00:20:26.604448 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:20:26.611693 systemd-logind[1466]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:20:26.633478 systemd-logind[1466]: Removed session 19. Jan 17 00:20:29.231364 kubelet[2592]: E0117 00:20:29.230830 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:20:31.619849 systemd[1]: Started sshd@19-10.0.0.16:22-10.0.0.1:55490.service - OpenSSH per-connection server daemon (10.0.0.1:55490). Jan 17 00:20:31.738720 sshd[4313]: Accepted publickey for core from 10.0.0.1 port 55490 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:20:31.745763 sshd[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:20:31.798449 systemd-logind[1466]: New session 20 of user core. Jan 17 00:20:31.812467 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 00:20:32.299264 sshd[4313]: pam_unix(sshd:session): session closed for user core Jan 17 00:20:32.309996 systemd[1]: sshd@19-10.0.0.16:22-10.0.0.1:55490.service: Deactivated successfully. Jan 17 00:20:32.319120 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:20:32.321799 systemd-logind[1466]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:20:32.335478 systemd-logind[1466]: Removed session 20. Jan 17 00:20:37.805430 systemd[1]: Started sshd@20-10.0.0.16:22-10.0.0.1:45726.service - OpenSSH per-connection server daemon (10.0.0.1:45726). Jan 17 00:20:39.122505 sshd[4348]: Accepted publickey for core from 10.0.0.1 port 45726 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:20:39.211277 sshd[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:20:39.441162 systemd-logind[1466]: New session 21 of user core. Jan 17 00:20:39.597866 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 00:20:40.512345 kubelet[2592]: E0117 00:20:40.500722 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:20:41.145561 sshd[4348]: pam_unix(sshd:session): session closed for user core Jan 17 00:20:41.182283 systemd[1]: sshd@20-10.0.0.16:22-10.0.0.1:45726.service: Deactivated successfully. 
Jan 17 00:20:41.227615 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:20:41.272670 systemd-logind[1466]: Session 21 logged out. Waiting for processes to exit. Jan 17 00:20:41.283056 systemd-logind[1466]: Removed session 21. Jan 17 00:20:46.184227 systemd[1]: Started sshd@21-10.0.0.16:22-10.0.0.1:46174.service - OpenSSH per-connection server daemon (10.0.0.1:46174). Jan 17 00:20:46.300191 sshd[4389]: Accepted publickey for core from 10.0.0.1 port 46174 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:20:46.298430 sshd[4389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:20:46.332519 systemd-logind[1466]: New session 22 of user core. Jan 17 00:20:46.349080 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 00:20:46.702531 sshd[4389]: pam_unix(sshd:session): session closed for user core Jan 17 00:20:46.713490 systemd[1]: sshd@21-10.0.0.16:22-10.0.0.1:46174.service: Deactivated successfully. Jan 17 00:20:46.716302 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:20:46.720436 systemd-logind[1466]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:20:46.742362 systemd[1]: Started sshd@22-10.0.0.16:22-10.0.0.1:46184.service - OpenSSH per-connection server daemon (10.0.0.1:46184). Jan 17 00:20:46.743584 systemd-logind[1466]: Removed session 22. Jan 17 00:20:46.838265 sshd[4418]: Accepted publickey for core from 10.0.0.1 port 46184 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:20:46.842172 sshd[4418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:20:46.865926 systemd-logind[1466]: New session 23 of user core. Jan 17 00:20:46.874495 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 00:20:47.886834 sshd[4418]: pam_unix(sshd:session): session closed for user core Jan 17 00:20:47.907077 systemd[1]: sshd@22-10.0.0.16:22-10.0.0.1:46184.service: Deactivated successfully. Jan 17 00:20:47.910949 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 00:20:47.920036 systemd-logind[1466]: Session 23 logged out. Waiting for processes to exit. Jan 17 00:20:47.937577 systemd[1]: Started sshd@23-10.0.0.16:22-10.0.0.1:46190.service - OpenSSH per-connection server daemon (10.0.0.1:46190). Jan 17 00:20:47.963189 systemd-logind[1466]: Removed session 23. Jan 17 00:20:48.037374 sshd[4430]: Accepted publickey for core from 10.0.0.1 port 46190 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:20:48.039199 sshd[4430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:20:48.060826 systemd-logind[1466]: New session 24 of user core. Jan 17 00:20:48.067542 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 00:20:52.682800 sshd[4430]: pam_unix(sshd:session): session closed for user core Jan 17 00:20:52.720833 systemd[1]: sshd@23-10.0.0.16:22-10.0.0.1:46190.service: Deactivated successfully. Jan 17 00:20:52.728475 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 00:20:52.729452 systemd[1]: session-24.scope: Consumed 2.197s CPU time. Jan 17 00:20:52.735880 systemd-logind[1466]: Session 24 logged out. Waiting for processes to exit. Jan 17 00:20:52.791071 systemd[1]: Started sshd@24-10.0.0.16:22-10.0.0.1:53106.service - OpenSSH per-connection server daemon (10.0.0.1:53106). Jan 17 00:20:52.823263 systemd-logind[1466]: Removed session 24. 
Jan 17 00:20:52.952341 sshd[4473]: Accepted publickey for core from 10.0.0.1 port 53106 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:20:52.959014 sshd[4473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:20:52.975469 systemd-logind[1466]: New session 25 of user core. Jan 17 00:20:52.990281 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 00:20:56.037425 sshd[4473]: pam_unix(sshd:session): session closed for user core Jan 17 00:20:56.100380 systemd[1]: sshd@24-10.0.0.16:22-10.0.0.1:53106.service: Deactivated successfully. Jan 17 00:20:56.110130 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 00:20:56.110461 systemd[1]: session-25.scope: Consumed 1.651s CPU time. Jan 17 00:20:56.114781 systemd-logind[1466]: Session 25 logged out. Waiting for processes to exit. Jan 17 00:20:56.131617 systemd[1]: Started sshd@25-10.0.0.16:22-10.0.0.1:53116.service - OpenSSH per-connection server daemon (10.0.0.1:53116). Jan 17 00:20:56.135463 systemd-logind[1466]: Removed session 25. Jan 17 00:20:56.205738 sshd[4493]: Accepted publickey for core from 10.0.0.1 port 53116 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:20:56.211698 sshd[4493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:20:56.234715 systemd-logind[1466]: New session 26 of user core. Jan 17 00:20:56.248812 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 17 00:20:56.570472 sshd[4493]: pam_unix(sshd:session): session closed for user core Jan 17 00:20:56.589828 systemd[1]: sshd@25-10.0.0.16:22-10.0.0.1:53116.service: Deactivated successfully. Jan 17 00:20:56.600369 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 00:20:56.612389 systemd-logind[1466]: Session 26 logged out. Waiting for processes to exit. Jan 17 00:20:56.619361 systemd-logind[1466]: Removed session 26. Jan 17 00:21:01.830049 systemd[1]: Started sshd@26-10.0.0.16:22-10.0.0.1:53128.service - OpenSSH per-connection server daemon (10.0.0.1:53128). Jan 17 00:21:02.176836 sshd[4528]: Accepted publickey for core from 10.0.0.1 port 53128 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:21:02.187850 sshd[4528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:02.208201 systemd-logind[1466]: New session 27 of user core. Jan 17 00:21:02.221629 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 17 00:21:02.671742 sshd[4528]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:02.683761 systemd-logind[1466]: Session 27 logged out. Waiting for processes to exit. Jan 17 00:21:02.694455 systemd[1]: sshd@26-10.0.0.16:22-10.0.0.1:53128.service: Deactivated successfully. Jan 17 00:21:02.715256 systemd[1]: session-27.scope: Deactivated successfully. Jan 17 00:21:02.730100 systemd-logind[1466]: Removed session 27. Jan 17 00:21:07.708441 kubelet[2592]: E0117 00:21:07.706890 2592 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.435s" Jan 17 00:21:07.780553 systemd[1]: Started sshd@27-10.0.0.16:22-10.0.0.1:51674.service - OpenSSH per-connection server daemon (10.0.0.1:51674). 
Jan 17 00:21:07.922198 sshd[4564]: Accepted publickey for core from 10.0.0.1 port 51674 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:21:07.931582 sshd[4564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:07.978188 systemd-logind[1466]: New session 28 of user core. Jan 17 00:21:08.008111 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 17 00:21:08.638375 sshd[4564]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:08.663073 systemd[1]: sshd@27-10.0.0.16:22-10.0.0.1:51674.service: Deactivated successfully. Jan 17 00:21:08.667725 systemd[1]: session-28.scope: Deactivated successfully. Jan 17 00:21:08.669125 systemd-logind[1466]: Session 28 logged out. Waiting for processes to exit. Jan 17 00:21:08.671186 systemd-logind[1466]: Removed session 28. Jan 17 00:21:13.688868 systemd[1]: Started sshd@28-10.0.0.16:22-10.0.0.1:46228.service - OpenSSH per-connection server daemon (10.0.0.1:46228). Jan 17 00:21:13.809908 sshd[4610]: Accepted publickey for core from 10.0.0.1 port 46228 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:21:13.822550 sshd[4610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:13.838430 systemd-logind[1466]: New session 29 of user core. Jan 17 00:21:13.853754 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 17 00:21:14.155593 sshd[4610]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:14.207068 systemd[1]: sshd@28-10.0.0.16:22-10.0.0.1:46228.service: Deactivated successfully. Jan 17 00:21:14.381144 systemd[1]: session-29.scope: Deactivated successfully. Jan 17 00:21:14.390183 systemd-logind[1466]: Session 29 logged out. Waiting for processes to exit. Jan 17 00:21:14.392486 systemd-logind[1466]: Removed session 29. Jan 17 00:21:16.255200 kubelet[2592]: E0117 00:21:16.254942 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:21:17.231030 kubelet[2592]: E0117 00:21:17.229626 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:21:19.288301 systemd[1]: Started sshd@29-10.0.0.16:22-10.0.0.1:46234.service - OpenSSH per-connection server daemon (10.0.0.1:46234). Jan 17 00:21:19.419611 sshd[4645]: Accepted publickey for core from 10.0.0.1 port 46234 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:21:19.421369 sshd[4645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:19.471759 systemd-logind[1466]: New session 30 of user core. Jan 17 00:21:19.491804 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 17 00:21:19.829748 sshd[4645]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:19.845650 systemd[1]: sshd@29-10.0.0.16:22-10.0.0.1:46234.service: Deactivated successfully. Jan 17 00:21:19.857686 systemd[1]: session-30.scope: Deactivated successfully. Jan 17 00:21:19.860571 systemd-logind[1466]: Session 30 logged out. Waiting for processes to exit. Jan 17 00:21:19.865146 systemd-logind[1466]: Removed session 30. 
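From here the log is dominated by SSH housekeeping: each connection gets its own sshd@N-….service instance and a session-N.scope unit, with systemd-logind retiring the session after pam_unix logs the close. A small, self-contained Go sketch that pairs the pam_unix open/close entries by sshd PID to compute session lengths; the two sample lines are copied from the session-6 entries above, and the regex is an assumption fitted to this log's shape:

```go
package main

import (
	"fmt"
	"regexp"
	"time"
)

// Sample entries in the shape seen throughout this log.
var sample = []string{
	"Jan 17 00:19:09.083554 sshd[3800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)",
	"Jan 17 00:19:09.752274 sshd[3800]: pam_unix(sshd:session): session closed for user core",
}

var re = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)`)

func main() {
	opened := map[string]time.Time{} // sshd PID -> open timestamp
	for _, line := range sample {
		m := re.FindStringSubmatch(line)
		if m == nil {
			continue
		}
		ts, err := time.Parse("Jan 2 15:04:05.000000", m[1])
		if err != nil {
			panic(err)
		}
		switch m[3] {
		case "opened":
			opened[m[2]] = ts
		case "closed":
			if t0, ok := opened[m[2]]; ok {
				fmt.Printf("sshd[%s] session lasted %v\n", m[2], ts.Sub(t0))
			}
		}
	}
}
```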
Jan 17 00:21:23.235209 kubelet[2592]: E0117 00:21:23.235159 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:21:24.912479 systemd[1]: Started sshd@30-10.0.0.16:22-10.0.0.1:51508.service - OpenSSH per-connection server daemon (10.0.0.1:51508). Jan 17 00:21:24.985592 sshd[4684]: Accepted publickey for core from 10.0.0.1 port 51508 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:21:24.993087 sshd[4684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:25.023418 systemd-logind[1466]: New session 31 of user core. Jan 17 00:21:25.035451 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 17 00:21:25.336533 sshd[4684]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:25.347896 systemd[1]: sshd@30-10.0.0.16:22-10.0.0.1:51508.service: Deactivated successfully. Jan 17 00:21:25.350366 systemd[1]: session-31.scope: Deactivated successfully. Jan 17 00:21:25.357270 systemd-logind[1466]: Session 31 logged out. Waiting for processes to exit. Jan 17 00:21:25.364778 systemd-logind[1466]: Removed session 31. Jan 17 00:21:28.234608 kubelet[2592]: E0117 00:21:28.234086 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:21:28.234608 kubelet[2592]: E0117 00:21:28.233838 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:21:30.401035 systemd[1]: Started sshd@31-10.0.0.16:22-10.0.0.1:51520.service - OpenSSH per-connection server daemon (10.0.0.1:51520). Jan 17 00:21:30.486289 sshd[4720]: Accepted publickey for core from 10.0.0.1 port 51520 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:21:30.490483 sshd[4720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:30.534321 systemd-logind[1466]: New session 32 of user core. Jan 17 00:21:30.566445 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 17 00:21:30.929644 sshd[4720]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:30.958493 systemd[1]: sshd@31-10.0.0.16:22-10.0.0.1:51520.service: Deactivated successfully. Jan 17 00:21:30.969336 systemd[1]: session-32.scope: Deactivated successfully. Jan 17 00:21:30.980613 systemd-logind[1466]: Session 32 logged out. Waiting for processes to exit. Jan 17 00:21:30.988496 systemd-logind[1466]: Removed session 32. Jan 17 00:21:35.975667 systemd[1]: Started sshd@32-10.0.0.16:22-10.0.0.1:33922.service - OpenSSH per-connection server daemon (10.0.0.1:33922). Jan 17 00:21:36.044099 sshd[4761]: Accepted publickey for core from 10.0.0.1 port 33922 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:21:36.046647 sshd[4761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:36.057418 systemd-logind[1466]: New session 33 of user core. Jan 17 00:21:36.072063 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 17 00:21:36.270376 sshd[4761]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:36.277171 systemd[1]: sshd@32-10.0.0.16:22-10.0.0.1:33922.service: Deactivated successfully. 
Jan 17 00:21:36.284644 systemd[1]: session-33.scope: Deactivated successfully. Jan 17 00:21:36.287678 systemd-logind[1466]: Session 33 logged out. Waiting for processes to exit. Jan 17 00:21:36.300761 systemd-logind[1466]: Removed session 33. Jan 17 00:21:41.389580 systemd[1]: Started sshd@33-10.0.0.16:22-10.0.0.1:33936.service - OpenSSH per-connection server daemon (10.0.0.1:33936). Jan 17 00:21:41.513547 sshd[4796]: Accepted publickey for core from 10.0.0.1 port 33936 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:21:41.530382 sshd[4796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:41.592640 systemd-logind[1466]: New session 34 of user core. Jan 17 00:21:41.599388 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 17 00:21:42.116684 sshd[4796]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:42.137716 systemd[1]: sshd@33-10.0.0.16:22-10.0.0.1:33936.service: Deactivated successfully. Jan 17 00:21:42.144895 systemd[1]: session-34.scope: Deactivated successfully. Jan 17 00:21:42.151604 systemd-logind[1466]: Session 34 logged out. Waiting for processes to exit. Jan 17 00:21:42.162607 systemd-logind[1466]: Removed session 34. Jan 17 00:21:48.714239 systemd[1]: Started sshd@34-10.0.0.16:22-10.0.0.1:37096.service - OpenSSH per-connection server daemon (10.0.0.1:37096). Jan 17 00:21:48.729023 kubelet[2592]: E0117 00:21:48.728478 2592 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.203s" Jan 17 00:21:48.940863 sshd[4815]: Accepted publickey for core from 10.0.0.1 port 37096 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:21:48.958860 sshd[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:48.983368 systemd-logind[1466]: New session 35 of user core. Jan 17 00:21:48.997401 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 17 00:21:49.792506 sshd[4815]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:49.798580 systemd[1]: sshd@34-10.0.0.16:22-10.0.0.1:37096.service: Deactivated successfully. Jan 17 00:21:49.807248 systemd[1]: session-35.scope: Deactivated successfully. Jan 17 00:21:49.817569 systemd-logind[1466]: Session 35 logged out. Waiting for processes to exit. Jan 17 00:21:49.825399 systemd-logind[1466]: Removed session 35. Jan 17 00:21:51.234165 kubelet[2592]: E0117 00:21:51.233881 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
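The "Housekeeping took longer than expected" warnings scattered through this log (actual runs of 3.324s, 5.554s, 1.239s, 1.435s, 4.203s against a 1s expectation) come from kubelet's per-second container housekeeping pass overrunning its interval, usually a sign the node is stalling on I/O or cgroup stats collection. A loose Go analogue of that watchdog pattern, not kubelet's implementation; the artificial stall exists only to trigger the overrun path:

```go
package main

import (
	"fmt"
	"time"
)

const expected = time.Second // the housekeeping interval kubelet reports above

// doHousekeeping stands in for the real per-tick work (stats collection);
// the sleep on one iteration simulates a stalled node.
func doHousekeeping(i int) {
	if i == 2 {
		time.Sleep(1500 * time.Millisecond)
	}
}

func main() {
	ticker := time.NewTicker(expected)
	defer ticker.Stop()
	for i := 0; i < 4; i++ {
		<-ticker.C
		start := time.Now()
		doHousekeeping(i)
		if actual := time.Since(start); actual > expected {
			fmt.Printf("Housekeeping took longer than expected: expected=%v actual=%v\n",
				expected, actual.Round(time.Millisecond))
		}
	}
}
```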