Jan 20 00:32:13.261720 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:42:14 -00 2026
Jan 20 00:32:13.261749 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441
Jan 20 00:32:13.261765 kernel: BIOS-provided physical RAM map:
Jan 20 00:32:13.261775 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 20 00:32:13.261783 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 20 00:32:13.261792 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 20 00:32:13.261803 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 20 00:32:13.261812 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 20 00:32:13.261821 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 20 00:32:13.261833 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 20 00:32:13.261843 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 00:32:13.261852 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 20 00:32:13.261932 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 00:32:13.261945 kernel: NX (Execute Disable) protection: active
Jan 20 00:32:13.261956 kernel: APIC: Static calls initialized
Jan 20 00:32:13.261990 kernel: SMBIOS 2.8 present.
Jan 20 00:32:13.262000 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 20 00:32:13.262010 kernel: Hypervisor detected: KVM
Jan 20 00:32:13.262020 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 00:32:13.262067 kernel: kvm-clock: using sched offset of 7657335165 cycles
Jan 20 00:32:13.262078 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 00:32:13.262088 kernel: tsc: Detected 2445.424 MHz processor
Jan 20 00:32:13.262098 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 00:32:13.262108 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 00:32:13.262124 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 20 00:32:13.262134 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 20 00:32:13.262144 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 00:32:13.262154 kernel: Using GB pages for direct mapping
Jan 20 00:32:13.262164 kernel: ACPI: Early table checksum verification disabled
Jan 20 00:32:13.262174 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 20 00:32:13.262184 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:32:13.262194 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:32:13.262204 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:32:13.262218 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 20 00:32:13.262228 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:32:13.262238 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:32:13.262248 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:32:13.262258 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:32:13.262268 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 20 00:32:13.262278 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 20 00:32:13.262294 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 20 00:32:13.262308 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 20 00:32:13.262319 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 20 00:32:13.262329 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 20 00:32:13.262340 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 20 00:32:13.262351 kernel: No NUMA configuration found
Jan 20 00:32:13.262361 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 20 00:32:13.262376 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 20 00:32:13.262386 kernel: Zone ranges:
Jan 20 00:32:13.262397 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 00:32:13.262407 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 20 00:32:13.262418 kernel: Normal empty
Jan 20 00:32:13.262428 kernel: Movable zone start for each node
Jan 20 00:32:13.262439 kernel: Early memory node ranges
Jan 20 00:32:13.262449 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 20 00:32:13.262460 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 20 00:32:13.262470 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 20 00:32:13.262484 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 00:32:13.262511 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 20 00:32:13.262522 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 20 00:32:13.262532 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 00:32:13.262543 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 00:32:13.262554 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 00:32:13.262564 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 00:32:13.262575 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 00:32:13.262585 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 00:32:13.262600 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 00:32:13.262610 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 00:32:13.262621 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 00:32:13.262632 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 20 00:32:13.262642 kernel: TSC deadline timer available
Jan 20 00:32:13.262652 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 20 00:32:13.262663 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 00:32:13.262674 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 20 00:32:13.262698 kernel: kvm-guest: setup PV sched yield
Jan 20 00:32:13.262713 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 20 00:32:13.262724 kernel: Booting paravirtualized kernel on KVM
Jan 20 00:32:13.262735 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 00:32:13.262745 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 20 00:32:13.262756 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Jan 20 00:32:13.262766 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Jan 20 00:32:13.262777 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 20 00:32:13.262787 kernel: kvm-guest: PV spinlocks enabled
Jan 20 00:32:13.262797 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 00:32:13.262813 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441
Jan 20 00:32:13.262824 kernel: random: crng init done
Jan 20 00:32:13.262834 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 00:32:13.262845 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 00:32:13.262856 kernel: Fallback order for Node 0: 0
Jan 20 00:32:13.262866 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 20 00:32:13.262951 kernel: Policy zone: DMA32
Jan 20 00:32:13.262963 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 00:32:13.262980 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42880K init, 2316K bss, 136884K reserved, 0K cma-reserved)
Jan 20 00:32:13.262990 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 20 00:32:13.263001 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 20 00:32:13.263012 kernel: ftrace: allocated 149 pages with 4 groups
Jan 20 00:32:13.263022 kernel: Dynamic Preempt: voluntary
Jan 20 00:32:13.263066 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 00:32:13.263083 kernel: rcu: RCU event tracing is enabled.
Jan 20 00:32:13.263094 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 20 00:32:13.263105 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 00:32:13.263120 kernel: Rude variant of Tasks RCU enabled.
Jan 20 00:32:13.263130 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 00:32:13.263141 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 00:32:13.263151 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 20 00:32:13.263178 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 20 00:32:13.263189 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 00:32:13.263200 kernel: Console: colour VGA+ 80x25
Jan 20 00:32:13.263210 kernel: printk: console [ttyS0] enabled
Jan 20 00:32:13.263221 kernel: ACPI: Core revision 20230628
Jan 20 00:32:13.263236 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 20 00:32:13.263246 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 00:32:13.263257 kernel: x2apic enabled
Jan 20 00:32:13.263268 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 00:32:13.263278 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 20 00:32:13.263289 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 20 00:32:13.263299 kernel: kvm-guest: setup PV IPIs
Jan 20 00:32:13.263310 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 20 00:32:13.263336 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 20 00:32:13.263348 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Jan 20 00:32:13.263361 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 00:32:13.263372 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 00:32:13.263388 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 00:32:13.263399 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 00:32:13.263410 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 00:32:13.263421 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 00:32:13.263433 kernel: Speculative Store Bypass: Vulnerable
Jan 20 00:32:13.263447 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 00:32:13.263475 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 00:32:13.263487 kernel: active return thunk: srso_alias_return_thunk
Jan 20 00:32:13.263498 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 00:32:13.263529 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 20 00:32:13.263540 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 20 00:32:13.263551 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 00:32:13.263562 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 00:32:13.263577 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 00:32:13.263588 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 00:32:13.263599 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 00:32:13.263629 kernel: Freeing SMP alternatives memory: 32K
Jan 20 00:32:13.263641 kernel: pid_max: default: 32768 minimum: 301
Jan 20 00:32:13.263651 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 20 00:32:13.263662 kernel: landlock: Up and running.
Jan 20 00:32:13.263672 kernel: SELinux: Initializing.
Jan 20 00:32:13.263683 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 00:32:13.263698 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 00:32:13.263709 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 20 00:32:13.263720 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 00:32:13.263730 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 00:32:13.263741 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 00:32:13.263752 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 20 00:32:13.263763 kernel: signal: max sigframe size: 1776
Jan 20 00:32:13.263773 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 00:32:13.263800 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 00:32:13.263815 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 00:32:13.263826 kernel: smp: Bringing up secondary CPUs ...
Jan 20 00:32:13.263836 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 00:32:13.263847 kernel: .... node #0, CPUs: #1 #2 #3
Jan 20 00:32:13.263858 kernel: smp: Brought up 1 node, 4 CPUs
Jan 20 00:32:13.263868 kernel: smpboot: Max logical packages: 1
Jan 20 00:32:13.263919 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Jan 20 00:32:13.263931 kernel: devtmpfs: initialized
Jan 20 00:32:13.263942 kernel: x86/mm: Memory block size: 128MB
Jan 20 00:32:13.263957 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 00:32:13.263968 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 20 00:32:13.263979 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 00:32:13.263989 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 00:32:13.264000 kernel: audit: initializing netlink subsys (disabled)
Jan 20 00:32:13.264013 kernel: audit: type=2000 audit(1768869130.699:1): state=initialized audit_enabled=0 res=1
Jan 20 00:32:13.264048 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 00:32:13.264060 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 00:32:13.264070 kernel: cpuidle: using governor menu
Jan 20 00:32:13.264086 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 00:32:13.264096 kernel: dca service started, version 1.12.1
Jan 20 00:32:13.264107 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 20 00:32:13.264118 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 20 00:32:13.264129 kernel: PCI: Using configuration type 1 for base access
Jan 20 00:32:13.264140 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 00:32:13.264150 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 00:32:13.264161 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 00:32:13.264171 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 00:32:13.264185 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 00:32:13.264196 kernel: ACPI: Added _OSI(Module Device)
Jan 20 00:32:13.264206 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 00:32:13.264217 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 00:32:13.264228 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 00:32:13.264238 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 20 00:32:13.264249 kernel: ACPI: Interpreter enabled
Jan 20 00:32:13.264259 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 20 00:32:13.264270 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 00:32:13.264284 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 00:32:13.264298 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 00:32:13.264308 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 00:32:13.264319 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 00:32:13.264651 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 00:32:13.264853 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 20 00:32:13.265133 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 20 00:32:13.265156 kernel: PCI host bridge to bus 0000:00
Jan 20 00:32:13.265341 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 00:32:13.265516 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 00:32:13.265683 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 00:32:13.265847 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 20 00:32:13.266095 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 20 00:32:13.266266 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 20 00:32:13.266474 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 00:32:13.266775 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 20 00:32:13.267221 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 20 00:32:13.267468 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 20 00:32:13.267662 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 20 00:32:13.267847 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 20 00:32:13.268121 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 00:32:13.268357 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 20 00:32:13.268570 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 20 00:32:13.268758 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 20 00:32:13.268994 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 20 00:32:13.269237 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 20 00:32:13.269421 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 20 00:32:13.269610 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 20 00:32:13.269797 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 20 00:32:13.270322 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 20 00:32:13.270513 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 20 00:32:13.270719 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 20 00:32:13.270963 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 20 00:32:13.271189 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 20 00:32:13.271391 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 20 00:32:13.271572 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 00:32:13.271835 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 20 00:32:13.272106 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 20 00:32:13.272290 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 20 00:32:13.272479 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 20 00:32:13.272658 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 20 00:32:13.272678 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 00:32:13.272689 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 00:32:13.272700 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 00:32:13.272711 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 00:32:13.272722 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 00:32:13.272732 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 00:32:13.272743 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 00:32:13.272754 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 00:32:13.272764 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 00:32:13.272780 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 00:32:13.272813 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 00:32:13.272823 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 00:32:13.272834 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 00:32:13.272844 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 00:32:13.272957 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 00:32:13.272971 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 00:32:13.272998 kernel: iommu: Default domain type: Translated
Jan 20 00:32:13.273008 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 00:32:13.273050 kernel: PCI: Using ACPI for IRQ routing
Jan 20 00:32:13.273062 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 00:32:13.273073 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 20 00:32:13.273083 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 20 00:32:13.273424 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 00:32:13.273752 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 00:32:13.274007 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 00:32:13.274057 kernel: vgaarb: loaded
Jan 20 00:32:13.274078 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 20 00:32:13.274089 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 20 00:32:13.274101 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 00:32:13.274112 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 00:32:13.274123 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 00:32:13.274134 kernel: pnp: PnP ACPI init
Jan 20 00:32:13.274485 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 20 00:32:13.274506 kernel: pnp: PnP ACPI: found 6 devices
Jan 20 00:32:13.274546 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 00:32:13.274557 kernel: NET: Registered PF_INET protocol family
Jan 20 00:32:13.274569 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 00:32:13.274582 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 00:32:13.274594 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 00:32:13.274606 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 00:32:13.274617 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 00:32:13.274629 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 00:32:13.274641 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 00:32:13.274657 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 00:32:13.274668 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 00:32:13.274679 kernel: NET: Registered PF_XDP protocol family
Jan 20 00:32:13.274873 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 00:32:13.275163 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 00:32:13.275348 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 00:32:13.275523 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 20 00:32:13.275696 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 20 00:32:13.275944 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 20 00:32:13.275962 kernel: PCI: CLS 0 bytes, default 64
Jan 20 00:32:13.275973 kernel: Initialise system trusted keyrings
Jan 20 00:32:13.275984 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 00:32:13.275994 kernel: Key type asymmetric registered
Jan 20 00:32:13.276005 kernel: Asymmetric key parser 'x509' registered
Jan 20 00:32:13.276016 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 20 00:32:13.276061 kernel: io scheduler mq-deadline registered
Jan 20 00:32:13.276073 kernel: io scheduler kyber registered
Jan 20 00:32:13.276091 kernel: io scheduler bfq registered
Jan 20 00:32:13.276132 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 00:32:13.276164 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 00:32:13.276193 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 00:32:13.276206 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 00:32:13.276219 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 00:32:13.276247 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 00:32:13.276275 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 00:32:13.276287 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 00:32:13.276302 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 00:32:13.276543 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 00:32:13.276564 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 20 00:32:13.276752 kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 00:32:13.277002 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T00:32:12 UTC (1768869132)
Jan 20 00:32:13.277229 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 20 00:32:13.277245 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 00:32:13.277258 kernel: NET: Registered PF_INET6 protocol family
Jan 20 00:32:13.277276 kernel: Segment Routing with IPv6
Jan 20 00:32:13.277287 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 00:32:13.277299 kernel: NET: Registered PF_PACKET protocol family
Jan 20 00:32:13.277311 kernel: Key type dns_resolver registered
Jan 20 00:32:13.277322 kernel: IPI shorthand broadcast: enabled
Jan 20 00:32:13.277334 kernel: sched_clock: Marking stable (1718017400, 466555520)->(2659625375, -475052455)
Jan 20 00:32:13.277346 kernel: registered taskstats version 1
Jan 20 00:32:13.277357 kernel: Loading compiled-in X.509 certificates
Jan 20 00:32:13.277369 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: ea2d429b6f340e470c7de035feb011ab349763d1'
Jan 20 00:32:13.277384 kernel: Key type .fscrypt registered
Jan 20 00:32:13.277395 kernel: Key type fscrypt-provisioning registered
Jan 20 00:32:13.277407 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 00:32:13.277418 kernel: ima: Allocated hash algorithm: sha1
Jan 20 00:32:13.277429 kernel: ima: No architecture policies found
Jan 20 00:32:13.277438 kernel: clk: Disabling unused clocks
Jan 20 00:32:13.277448 kernel: Freeing unused kernel image (initmem) memory: 42880K
Jan 20 00:32:13.277458 kernel: Write protecting the kernel read-only data: 36864k
Jan 20 00:32:13.277467 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 20 00:32:13.277482 kernel: Run /init as init process
Jan 20 00:32:13.277491 kernel: with arguments:
Jan 20 00:32:13.277501 kernel: /init
Jan 20 00:32:13.277510 kernel: with environment:
Jan 20 00:32:13.277519 kernel: HOME=/
Jan 20 00:32:13.277528 kernel: TERM=linux
Jan 20 00:32:13.277540 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 20 00:32:13.277552 systemd[1]: Detected virtualization kvm.
Jan 20 00:32:13.277566 systemd[1]: Detected architecture x86-64.
Jan 20 00:32:13.277575 systemd[1]: Running in initrd.
Jan 20 00:32:13.277585 systemd[1]: No hostname configured, using default hostname.
Jan 20 00:32:13.277595 systemd[1]: Hostname set to .
Jan 20 00:32:13.277605 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 00:32:13.277615 kernel: hrtimer: interrupt took 3038014 ns
Jan 20 00:32:13.277624 systemd[1]: Queued start job for default target initrd.target.
Jan 20 00:32:13.277635 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 00:32:13.277649 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 00:32:13.277719 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 20 00:32:13.277732 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 00:32:13.277798 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 20 00:32:13.277812 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 20 00:32:13.277867 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 20 00:32:13.277998 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 20 00:32:13.278020 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 00:32:13.278093 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 00:32:13.278105 systemd[1]: Reached target paths.target - Path Units.
Jan 20 00:32:13.278116 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 00:32:13.278147 systemd[1]: Reached target swap.target - Swaps.
Jan 20 00:32:13.278164 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 00:32:13.278180 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 00:32:13.278193 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 00:32:13.278206 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 20 00:32:13.278219 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 20 00:32:13.278232 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 00:32:13.278245 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 00:32:13.278258 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 00:32:13.278271 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 00:32:13.278284 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 20 00:32:13.278301 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 00:32:13.278314 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 20 00:32:13.278326 systemd[1]: Starting systemd-fsck-usr.service...
Jan 20 00:32:13.278339 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 00:32:13.278352 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 00:32:13.278364 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:32:13.278377 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 20 00:32:13.278421 systemd-journald[195]: Collecting audit messages is disabled.
Jan 20 00:32:13.278454 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 00:32:13.278467 systemd[1]: Finished systemd-fsck-usr.service.
Jan 20 00:32:13.278485 systemd-journald[195]: Journal started
Jan 20 00:32:13.278510 systemd-journald[195]: Runtime Journal (/run/log/journal/e8d188b9eaee46c9b489ad68167a32d4) is 6.0M, max 48.4M, 42.3M free.
Jan 20 00:32:13.274591 systemd-modules-load[196]: Inserted module 'overlay'
Jan 20 00:32:13.429971 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 00:32:13.430007 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 00:32:13.430020 kernel: Bridge firewalling registered
Jan 20 00:32:13.312761 systemd-modules-load[196]: Inserted module 'br_netfilter'
Jan 20 00:32:13.442565 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 00:32:13.446450 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 00:32:13.451983 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:32:13.458091 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 00:32:13.482256 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 00:32:13.485256 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 00:32:13.486698 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 00:32:13.504125 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 00:32:13.512823 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 00:32:13.521706 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:32:13.530319 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 20 00:32:13.531935 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 00:32:13.536966 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 00:32:13.545526 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 00:32:13.574007 dracut-cmdline[229]: dracut-dracut-053
Jan 20 00:32:13.582436 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441
Jan 20 00:32:13.590777 systemd-resolved[232]: Positive Trust Anchors:
Jan 20 00:32:13.590788 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 00:32:13.590815 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 00:32:13.594665 systemd-resolved[232]: Defaulting to hostname 'linux'.
Jan 20 00:32:13.596827 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 00:32:13.600279 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 00:32:13.727986 kernel: SCSI subsystem initialized
Jan 20 00:32:13.737972 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 00:32:13.749974 kernel: iscsi: registered transport (tcp)
Jan 20 00:32:13.772501 kernel: iscsi: registered transport (qla4xxx)
Jan 20 00:32:13.772582 kernel: QLogic iSCSI HBA Driver
Jan 20 00:32:13.837743 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 20 00:32:13.852163 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 20 00:32:13.886400 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 00:32:13.886490 kernel: device-mapper: uevent: version 1.0.3
Jan 20 00:32:13.889063 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 20 00:32:13.942002 kernel: raid6: avx2x4 gen() 31732 MB/s
Jan 20 00:32:13.959935 kernel: raid6: avx2x2 gen() 29040 MB/s
Jan 20 00:32:13.980443 kernel: raid6: avx2x1 gen() 19702 MB/s
Jan 20 00:32:13.980515 kernel: raid6: using algorithm avx2x4 gen() 31732 MB/s
Jan 20 00:32:13.999832 kernel: raid6: .... xor() 5145 MB/s, rmw enabled
Jan 20 00:32:13.999977 kernel: raid6: using avx2x2 recovery algorithm
Jan 20 00:32:14.020967 kernel: xor: automatically using best checksumming function avx
Jan 20 00:32:14.247413 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 20 00:32:14.266499 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 00:32:14.288117 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 00:32:14.310303 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Jan 20 00:32:14.319129 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 00:32:14.343294 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 20 00:32:14.363497 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation
Jan 20 00:32:14.416297 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 00:32:14.437232 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 00:32:14.540783 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 00:32:14.554114 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 20 00:32:14.571377 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 20 00:32:14.578233 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 00:32:14.588964 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 00:32:14.594988 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 00:32:14.604951 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 20 00:32:14.608099 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 20 00:32:14.621008 kernel: cryptd: max_cpu_qlen set to 1000
Jan 20 00:32:14.624701 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 20 00:32:14.627268 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 20 00:32:14.627367 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:32:14.640582 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 20 00:32:14.640599 kernel: GPT:9289727 != 19775487
Jan 20 00:32:14.640608 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 20 00:32:14.640618 kernel: GPT:9289727 != 19775487
Jan 20 00:32:14.640628 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 20 00:32:14.640637 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:32:14.643076 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 00:32:14.649553 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 00:32:14.652237 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:32:14.657863 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:32:14.670938 kernel: libata version 3.00 loaded.
Jan 20 00:32:14.674593 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:32:14.681458 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 00:32:14.693006 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 20 00:32:14.693939 kernel: AES CTR mode by8 optimization enabled
Jan 20 00:32:14.693961 kernel: ahci 0000:00:1f.2: version 3.0
Jan 20 00:32:14.696956 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 20 00:32:14.707946 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 20 00:32:14.708304 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 20 00:32:14.712981 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (473)
Jan 20 00:32:14.730963 kernel: BTRFS: device fsid ea39c6ab-04c2-4917-8268-943d4ecb2b5c devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (471)
Jan 20 00:32:14.732957 kernel: scsi host0: ahci
Jan 20 00:32:14.733346 kernel: scsi host1: ahci
Jan 20 00:32:14.733584 kernel: scsi host2: ahci
Jan 20 00:32:14.733773 kernel: scsi host3: ahci
Jan 20 00:32:14.734941 kernel: scsi host4: ahci
Jan 20 00:32:14.735166 kernel: scsi host5: ahci
Jan 20 00:32:14.735340 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 20 00:32:14.735352 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 20 00:32:14.735362 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 20 00:32:14.735377 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 20 00:32:14.735387 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 20 00:32:14.735396 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 20 00:32:14.737433 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 20 00:32:14.871501 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:32:14.893418 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 20 00:32:14.908434 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 00:32:14.915220 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 20 00:32:14.918274 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 20 00:32:14.935199 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 20 00:32:14.939394 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 00:32:14.949137 disk-uuid[555]: Primary Header is updated.
Jan 20 00:32:14.949137 disk-uuid[555]: Secondary Entries is updated.
Jan 20 00:32:14.949137 disk-uuid[555]: Secondary Header is updated.
Jan 20 00:32:14.958871 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:32:14.958939 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:32:14.972639 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:32:15.053476 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 20 00:32:15.053529 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 20 00:32:15.053916 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 20 00:32:15.055984 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 20 00:32:15.059930 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 20 00:32:15.061928 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 20 00:32:15.061957 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 20 00:32:15.065226 kernel: ata3.00: applying bridge limits
Jan 20 00:32:15.066816 kernel: ata3.00: configured for UDMA/100
Jan 20 00:32:15.069936 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 20 00:32:15.118192 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 20 00:32:15.118493 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 00:32:15.131936 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 20 00:32:15.998260 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:32:15.999919 disk-uuid[556]: The operation has completed successfully.
Jan 20 00:32:16.074855 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 20 00:32:16.089603 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 20 00:32:16.116372 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 20 00:32:16.129322 sh[594]: Success
Jan 20 00:32:16.150968 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 20 00:32:16.226604 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 20 00:32:16.245976 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 20 00:32:16.253716 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 20 00:32:16.313780 kernel: BTRFS info (device dm-0): first mount of filesystem ea39c6ab-04c2-4917-8268-943d4ecb2b5c
Jan 20 00:32:16.313856 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:32:16.313868 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 20 00:32:16.320938 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 20 00:32:16.320973 kernel: BTRFS info (device dm-0): using free space tree
Jan 20 00:32:16.333234 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 20 00:32:16.342856 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 20 00:32:16.358325 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 20 00:32:16.362252 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 20 00:32:16.395401 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:32:16.395485 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:32:16.395498 kernel: BTRFS info (device vda6): using free space tree
Jan 20 00:32:16.404994 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 20 00:32:16.421374 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 20 00:32:16.428590 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:32:16.436996 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 20 00:32:16.453170 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 20 00:32:16.639463 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 00:32:16.647308 ignition[696]: Ignition 2.19.0
Jan 20 00:32:16.647331 ignition[696]: Stage: fetch-offline
Jan 20 00:32:16.650243 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 00:32:16.647400 ignition[696]: no configs at "/usr/lib/ignition/base.d"
Jan 20 00:32:16.647414 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:32:16.647574 ignition[696]: parsed url from cmdline: ""
Jan 20 00:32:16.647579 ignition[696]: no config URL provided
Jan 20 00:32:16.647586 ignition[696]: reading system config file "/usr/lib/ignition/user.ign"
Jan 20 00:32:16.647597 ignition[696]: no config at "/usr/lib/ignition/user.ign"
Jan 20 00:32:16.647632 ignition[696]: op(1): [started] loading QEMU firmware config module
Jan 20 00:32:16.647638 ignition[696]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 20 00:32:16.676092 ignition[696]: op(1): [finished] loading QEMU firmware config module
Jan 20 00:32:16.706012 systemd-networkd[780]: lo: Link UP
Jan 20 00:32:16.706068 systemd-networkd[780]: lo: Gained carrier
Jan 20 00:32:16.708466 systemd-networkd[780]: Enumeration completed
Jan 20 00:32:16.709156 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 00:32:16.709675 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 00:32:16.709683 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 00:32:16.711292 systemd-networkd[780]: eth0: Link UP
Jan 20 00:32:16.711300 systemd-networkd[780]: eth0: Gained carrier
Jan 20 00:32:16.711312 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 00:32:16.714568 systemd[1]: Reached target network.target - Network.
Jan 20 00:32:16.748490 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 00:32:16.775726 ignition[696]: parsing config with SHA512: 47c735648263ba96bf2538189a2b110fae5149f46046ef6495f02fb482281d188b62bc8b6e8914a58d375999d8b78462bfb097f1ae339e78670f401900e3497a
Jan 20 00:32:16.803941 unknown[696]: fetched base config from "system"
Jan 20 00:32:16.804969 unknown[696]: fetched user config from "qemu"
Jan 20 00:32:16.805605 ignition[696]: fetch-offline: fetch-offline passed
Jan 20 00:32:16.808300 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 00:32:16.805686 ignition[696]: Ignition finished successfully
Jan 20 00:32:16.813499 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 20 00:32:16.829218 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 20 00:32:16.859745 ignition[786]: Ignition 2.19.0
Jan 20 00:32:16.859769 ignition[786]: Stage: kargs
Jan 20 00:32:16.859986 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jan 20 00:32:16.860000 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:32:16.860675 ignition[786]: kargs: kargs passed
Jan 20 00:32:16.860722 ignition[786]: Ignition finished successfully
Jan 20 00:32:16.873839 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 20 00:32:16.890519 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 20 00:32:16.918000 ignition[794]: Ignition 2.19.0
Jan 20 00:32:16.918028 ignition[794]: Stage: disks
Jan 20 00:32:16.921061 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 20 00:32:16.918293 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Jan 20 00:32:16.927290 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 20 00:32:16.918318 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:32:16.933966 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 20 00:32:16.919323 ignition[794]: disks: disks passed
Jan 20 00:32:16.938235 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 00:32:16.919375 ignition[794]: Ignition finished successfully
Jan 20 00:32:16.940656 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 00:32:16.942769 systemd[1]: Reached target basic.target - Basic System.
Jan 20 00:32:16.962206 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 20 00:32:16.987352 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 20 00:32:16.993412 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 20 00:32:17.012072 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 20 00:32:17.133555 kernel: EXT4-fs (vda9): mounted filesystem 3f4cac35-b37d-4410-a45a-1329edafa0f9 r/w with ordered data mode. Quota mode: none.
Jan 20 00:32:17.138865 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 20 00:32:17.144478 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 20 00:32:17.165150 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 00:32:17.169471 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 20 00:32:17.190730 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813)
Jan 20 00:32:17.190765 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:32:17.175747 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 20 00:32:17.216604 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:32:17.216647 kernel: BTRFS info (device vda6): using free space tree
Jan 20 00:32:17.175824 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 20 00:32:17.231995 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 20 00:32:17.175986 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 00:32:17.218153 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 20 00:32:17.233702 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 00:32:17.254319 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 20 00:32:17.325594 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Jan 20 00:32:17.334376 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Jan 20 00:32:17.339666 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Jan 20 00:32:17.345999 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 20 00:32:17.506759 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 20 00:32:17.527206 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 20 00:32:17.534670 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 20 00:32:17.542579 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 20 00:32:17.547418 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:32:17.572370 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 20 00:32:17.809696 systemd-networkd[780]: eth0: Gained IPv6LL
Jan 20 00:32:17.836977 ignition[926]: INFO : Ignition 2.19.0
Jan 20 00:32:17.836977 ignition[926]: INFO : Stage: mount
Jan 20 00:32:17.844457 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 00:32:17.844457 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:32:17.844457 ignition[926]: INFO : mount: mount passed
Jan 20 00:32:17.844457 ignition[926]: INFO : Ignition finished successfully
Jan 20 00:32:17.841624 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 20 00:32:17.854161 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 20 00:32:17.866691 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 00:32:17.897111 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939)
Jan 20 00:32:17.897167 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:32:17.897180 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:32:17.900997 kernel: BTRFS info (device vda6): using free space tree
Jan 20 00:32:17.907002 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 20 00:32:17.909159 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 00:32:17.957587 ignition[956]: INFO : Ignition 2.19.0
Jan 20 00:32:17.957587 ignition[956]: INFO : Stage: files
Jan 20 00:32:17.962174 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 00:32:17.962174 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:32:17.962174 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Jan 20 00:32:17.962174 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 20 00:32:17.962174 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 20 00:32:17.988985 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 20 00:32:17.988985 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 20 00:32:17.988985 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 20 00:32:17.988985 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 20 00:32:17.988985 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 20 00:32:17.969319 unknown[956]: wrote ssh authorized keys file for user: core
Jan 20 00:32:18.103184 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 20 00:32:18.266592 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 20 00:32:18.266592 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 20 00:32:18.278583 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 20 00:32:18.278583 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 00:32:18.278583 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 00:32:18.278583 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 00:32:18.278583 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 00:32:18.278583 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 00:32:18.278583 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 00:32:18.278583 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 00:32:18.278583 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 00:32:18.278583 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 20 00:32:18.278583 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 20 00:32:18.278583 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 20 00:32:18.278583 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 20 00:32:18.580846 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 20 00:32:20.623237 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 20 00:32:20.623237 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 20 00:32:20.634684 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 00:32:20.634684 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 00:32:20.634684 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 20 00:32:20.634684 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 20 00:32:20.634684 ignition[956]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 00:32:20.634684 ignition[956]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 00:32:20.634684 ignition[956]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 20 00:32:20.634684 ignition[956]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 20 00:32:20.686418 ignition[956]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 00:32:20.686418 ignition[956]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 00:32:20.686418 ignition[956]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 20 00:32:20.686418 ignition[956]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 20 00:32:20.686418 ignition[956]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 20 00:32:20.686418 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 00:32:20.722028 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 00:32:20.722028 ignition[956]: INFO : files: files passed
Jan 20 00:32:20.722028 ignition[956]: INFO : Ignition finished successfully
Jan 20 00:32:20.688721 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 20 00:32:20.722170 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 20 00:32:20.732995 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 20 00:32:20.739370 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 20 00:32:20.764607 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 20 00:32:20.739515 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 20 00:32:20.775463 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 00:32:20.775463 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 00:32:20.754692 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 00:32:20.789178 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 00:32:20.762292 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 20 00:32:20.790279 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 20 00:32:20.820565 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 20 00:32:20.820773 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 20 00:32:20.827681 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 20 00:32:20.835438 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 20 00:32:20.839205 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 20 00:32:20.840558 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 20 00:32:20.866304 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 00:32:20.892147 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 20 00:32:20.907511 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 20 00:32:20.910546 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 00:32:20.916534 systemd[1]: Stopped target timers.target - Timer Units.
Jan 20 00:32:20.921610 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 20 00:32:20.921771 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 00:32:20.927487 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 20 00:32:20.931845 systemd[1]: Stopped target basic.target - Basic System.
Jan 20 00:32:20.937171 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 20 00:32:20.942230 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 00:32:20.947312 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 20 00:32:20.953118 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 20 00:32:20.958529 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 00:32:20.974492 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 20 00:32:20.981816 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 20 00:32:20.996072 systemd[1]: Stopped target swap.target - Swaps.
Jan 20 00:32:21.001927 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 20 00:32:21.002343 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 00:32:21.006975 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 20 00:32:21.011290 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 00:32:21.017823 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 20 00:32:21.018175 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 00:32:21.023477 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 20 00:32:21.023639 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 20 00:32:21.029586 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 20 00:32:21.029731 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 00:32:21.035343 systemd[1]: Stopped target paths.target - Path Units.
Jan 20 00:32:21.042726 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 20 00:32:21.043087 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 00:32:21.051612 systemd[1]: Stopped target slices.target - Slice Units.
Jan 20 00:32:21.056474 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 20 00:32:21.062684 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 20 00:32:21.062953 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 00:32:21.073509 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 20 00:32:21.073696 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 00:32:21.088242 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 20 00:32:21.147818 ignition[1011]: INFO : Ignition 2.19.0
Jan 20 00:32:21.147818 ignition[1011]: INFO : Stage: umount
Jan 20 00:32:21.147818 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 00:32:21.147818 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:32:21.147818 ignition[1011]: INFO : umount: umount passed
Jan 20 00:32:21.147818 ignition[1011]: INFO : Ignition finished successfully
Jan 20 00:32:21.088534 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 00:32:21.093228 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 20 00:32:21.093383 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 20 00:32:21.108146 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 20 00:32:21.114115 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 20 00:32:21.119270 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 20 00:32:21.119526 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 00:32:21.127872 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 20 00:32:21.128396 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 00:32:21.140687 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 20 00:32:21.140861 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 20 00:32:21.149363 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 20 00:32:21.150197 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 20 00:32:21.150362 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 20 00:32:21.155867 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 20 00:32:21.156100 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 20 00:32:21.163719 systemd[1]: Stopped target network.target - Network.
Jan 20 00:32:21.167653 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 20 00:32:21.167731 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 20 00:32:21.175770 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 20 00:32:21.175951 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 20 00:32:21.180719 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 20 00:32:21.180793 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 20 00:32:21.185755 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 20 00:32:21.185814 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 20 00:32:21.191022 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 20 00:32:21.191117 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 20 00:32:21.193326 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 20 00:32:21.193545 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 20 00:32:21.205086 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 20 00:32:21.205306 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 20 00:32:21.208940 systemd-networkd[780]: eth0: DHCPv6 lease lost
Jan 20 00:32:21.210977 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 20 00:32:21.211074 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 00:32:21.215435 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 20 00:32:21.215601 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 20 00:32:21.221426 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 20 00:32:21.221502 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 00:32:21.241067 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 20 00:32:21.243664 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 20 00:32:21.243743 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 00:32:21.249466 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 00:32:21.249521 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 20 00:32:21.254294 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 00:32:21.254349 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 20 00:32:21.257798 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 00:32:21.273667 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 20 00:32:21.273819 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 20 00:32:21.278399 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 20 00:32:21.278620 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 00:32:21.285237 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 20 00:32:21.285302 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 20 00:32:21.289353 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 20 00:32:21.289400 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 00:32:21.294996 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 20 00:32:21.295085 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 00:32:21.301014 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 20 00:32:21.301109 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 20 00:32:21.307312 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 20 00:32:21.307394 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:32:21.335012 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 20 00:32:21.339149 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 20 00:32:21.339265 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 00:32:21.345418 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 20 00:32:21.461652 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Jan 20 00:32:21.345500 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 00:32:21.351717 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 20 00:32:21.351775 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 00:32:21.355210 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 00:32:21.355290 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:32:21.364276 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 20 00:32:21.364452 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 20 00:32:21.370507 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 20 00:32:21.392200 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 20 00:32:21.409097 systemd[1]: Switching root.
Jan 20 00:32:21.494994 systemd-journald[195]: Journal stopped
Jan 20 00:32:22.978305 kernel: SELinux: policy capability network_peer_controls=1
Jan 20 00:32:22.978384 kernel: SELinux: policy capability open_perms=1
Jan 20 00:32:22.978408 kernel: SELinux: policy capability extended_socket_class=1
Jan 20 00:32:22.978424 kernel: SELinux: policy capability always_check_network=0
Jan 20 00:32:22.978436 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 20 00:32:22.978451 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 20 00:32:22.978462 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 20 00:32:22.978475 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 20 00:32:22.978487 kernel: audit: type=1403 audit(1768869141.720:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 20 00:32:22.978504 systemd[1]: Successfully loaded SELinux policy in 68.637ms.
Jan 20 00:32:22.978523 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.230ms.
Jan 20 00:32:22.978536 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 20 00:32:22.978549 systemd[1]: Detected virtualization kvm.
Jan 20 00:32:22.978560 systemd[1]: Detected architecture x86-64.
Jan 20 00:32:22.978575 systemd[1]: Detected first boot.
Jan 20 00:32:22.978587 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 00:32:22.978599 zram_generator::config[1056]: No configuration found.
Jan 20 00:32:22.978612 systemd[1]: Populated /etc with preset unit settings.
Jan 20 00:32:22.978624 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 20 00:32:22.978636 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 20 00:32:22.978647 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 20 00:32:22.978659 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 20 00:32:22.978675 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 20 00:32:22.978686 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 20 00:32:22.978698 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 20 00:32:22.978709 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 20 00:32:22.978721 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 20 00:32:22.978733 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 20 00:32:22.978745 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 20 00:32:22.978757 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 00:32:22.978769 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 00:32:22.978784 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 20 00:32:22.978796 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 20 00:32:22.978808 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 20 00:32:22.978820 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 00:32:22.978832 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 20 00:32:22.978845 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 00:32:22.978856 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 20 00:32:22.978869 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 20 00:32:22.978926 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 20 00:32:22.978941 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 20 00:32:22.978952 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 00:32:22.978964 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 00:32:22.978977 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 00:32:22.978989 systemd[1]: Reached target swap.target - Swaps.
Jan 20 00:32:22.979000 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 20 00:32:22.979012 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 20 00:32:22.979027 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 00:32:22.979039 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 00:32:22.979077 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 00:32:22.979089 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 20 00:32:22.979101 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 20 00:32:22.979113 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 20 00:32:22.979125 systemd[1]: Mounting media.mount - External Media Directory...
Jan 20 00:32:22.979138 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:32:22.979158 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 20 00:32:22.979184 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 20 00:32:22.979205 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 20 00:32:22.979226 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 20 00:32:22.979239 systemd[1]: Reached target machines.target - Containers.
Jan 20 00:32:22.979251 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 20 00:32:22.979263 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 00:32:22.979274 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 00:32:22.979286 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 20 00:32:22.979297 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 00:32:22.979313 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 00:32:22.979328 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 00:32:22.979340 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 20 00:32:22.979352 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 00:32:22.979364 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 20 00:32:22.979377 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 20 00:32:22.979389 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 20 00:32:22.979401 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 20 00:32:22.979415 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 20 00:32:22.979427 kernel: fuse: init (API version 7.39)
Jan 20 00:32:22.979439 kernel: ACPI: bus type drm_connector registered
Jan 20 00:32:22.979450 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 00:32:22.979462 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 00:32:22.979473 kernel: loop: module loaded
Jan 20 00:32:22.979485 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 20 00:32:22.979497 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 20 00:32:22.979533 systemd-journald[1140]: Collecting audit messages is disabled.
Jan 20 00:32:22.979559 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 00:32:22.979572 systemd-journald[1140]: Journal started
Jan 20 00:32:22.979590 systemd-journald[1140]: Runtime Journal (/run/log/journal/e8d188b9eaee46c9b489ad68167a32d4) is 6.0M, max 48.4M, 42.3M free.
Jan 20 00:32:22.526607 systemd[1]: Queued start job for default target multi-user.target.
Jan 20 00:32:22.546597 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 20 00:32:22.547332 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 20 00:32:22.547796 systemd[1]: systemd-journald.service: Consumed 1.788s CPU time.
Jan 20 00:32:22.985480 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 20 00:32:22.985518 systemd[1]: Stopped verity-setup.service.
Jan 20 00:32:22.993945 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:32:22.998230 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 00:32:23.004250 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 20 00:32:23.007494 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 20 00:32:23.010949 systemd[1]: Mounted media.mount - External Media Directory.
Jan 20 00:32:23.013537 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 20 00:32:23.016432 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 20 00:32:23.019534 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 20 00:32:23.023271 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 20 00:32:23.028274 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 00:32:23.033490 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 00:32:23.033758 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 20 00:32:23.043958 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 00:32:23.044414 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 00:32:23.049271 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 00:32:23.049585 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 00:32:23.055970 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 00:32:23.056276 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 00:32:23.060983 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 20 00:32:23.061319 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 20 00:32:23.065501 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 00:32:23.065765 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 00:32:23.070222 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 00:32:23.074572 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 00:32:23.079419 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 20 00:32:23.098926 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 00:32:23.113133 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 20 00:32:23.118681 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 20 00:32:23.122641 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 20 00:32:23.122790 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 00:32:23.127738 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 20 00:32:23.133731 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 20 00:32:23.141665 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 20 00:32:23.145735 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 00:32:23.147790 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 20 00:32:23.153967 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 20 00:32:23.158776 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 00:32:23.163039 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 20 00:32:23.167173 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 00:32:23.174092 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 00:32:23.179218 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 20 00:32:23.183971 systemd-journald[1140]: Time spent on flushing to /var/log/journal/e8d188b9eaee46c9b489ad68167a32d4 is 114.406ms for 943 entries.
Jan 20 00:32:23.183971 systemd-journald[1140]: System Journal (/var/log/journal/e8d188b9eaee46c9b489ad68167a32d4) is 8.0M, max 195.6M, 187.6M free.
Jan 20 00:32:23.328698 systemd-journald[1140]: Received client request to flush runtime journal.
Jan 20 00:32:23.328746 kernel: loop0: detected capacity change from 0 to 140768
Jan 20 00:32:23.188686 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 00:32:23.197990 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 00:32:23.204629 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 20 00:32:23.215998 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 20 00:32:23.222267 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 20 00:32:23.296843 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 20 00:32:23.327236 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 20 00:32:23.330596 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 20 00:32:23.340922 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 20 00:32:23.358429 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 20 00:32:23.363114 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 20 00:32:23.368028 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 00:32:23.374829 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Jan 20 00:32:23.375227 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Jan 20 00:32:23.410469 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 20 00:32:23.411378 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 20 00:32:23.416555 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 00:32:23.426065 kernel: loop1: detected capacity change from 0 to 229808
Jan 20 00:32:23.460923 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 20 00:32:23.467469 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 20 00:32:23.517927 kernel: loop2: detected capacity change from 0 to 142488
Jan 20 00:32:23.621816 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 20 00:32:23.642970 kernel: loop3: detected capacity change from 0 to 140768
Jan 20 00:32:23.645116 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 00:32:23.666931 kernel: loop4: detected capacity change from 0 to 229808
Jan 20 00:32:23.691929 kernel: loop5: detected capacity change from 0 to 142488
Jan 20 00:32:23.696977 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Jan 20 00:32:23.697407 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Jan 20 00:32:23.706033 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 20 00:32:23.707090 (sd-merge)[1194]: Merged extensions into '/usr'.
Jan 20 00:32:23.708650 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 00:32:23.715766 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 20 00:32:23.716035 systemd[1]: Reloading...
Jan 20 00:32:23.889376 zram_generator::config[1219]: No configuration found.
Jan 20 00:32:24.080294 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 20 00:32:24.093588 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 20 00:32:24.189104 systemd[1]: Reloading finished in 472 ms.
Jan 20 00:32:24.246917 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 20 00:32:24.250432 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 20 00:32:24.254264 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 20 00:32:24.277137 systemd[1]: Starting ensure-sysext.service...
Jan 20 00:32:24.284245 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 00:32:24.288576 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 00:32:24.292323 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
Jan 20 00:32:24.292358 systemd[1]: Reloading...
Jan 20 00:32:24.317243 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 20 00:32:24.318415 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 20 00:32:24.320337 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 20 00:32:24.320951 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Jan 20 00:32:24.321211 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Jan 20 00:32:24.326040 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 00:32:24.326173 systemd-tmpfiles[1262]: Skipping /boot
Jan 20 00:32:24.327352 systemd-udevd[1263]: Using default interface naming scheme 'v255'.
Jan 20 00:32:24.344332 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 00:32:24.344441 systemd-tmpfiles[1262]: Skipping /boot
Jan 20 00:32:24.359980 zram_generator::config[1289]: No configuration found.
Jan 20 00:32:24.433282 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1300)
Jan 20 00:32:24.530664 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 20 00:32:24.535951 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 20 00:32:24.540946 kernel: ACPI: button: Power Button [PWRF]
Jan 20 00:32:24.567703 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 20 00:32:24.568139 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 20 00:32:24.592102 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 20 00:32:24.649940 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 20 00:32:24.745949 kernel: mousedev: PS/2 mouse device common for all mice
Jan 20 00:32:24.759804 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 00:32:24.768495 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 20 00:32:24.769524 systemd[1]: Reloading finished in 476 ms.
Jan 20 00:32:24.816180 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 00:32:24.938914 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 00:32:25.111985 kernel: kvm_amd: TSC scaling supported
Jan 20 00:32:25.112480 kernel: kvm_amd: Nested Virtualization enabled
Jan 20 00:32:25.112596 kernel: kvm_amd: Nested Paging enabled
Jan 20 00:32:25.115624 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 20 00:32:25.115664 kernel: kvm_amd: PMU virtualization is disabled
Jan 20 00:32:25.400963 kernel: EDAC MC: Ver: 3.0.0
Jan 20 00:32:25.406680 systemd[1]: Finished ensure-sysext.service.
Jan 20 00:32:25.561211 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 20 00:32:25.759633 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:32:25.812779 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 20 00:32:25.857086 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 20 00:32:25.871933 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 00:32:25.901599 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 20 00:32:25.908987 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 00:32:25.925323 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 00:32:25.939158 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 00:32:25.959966 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 00:32:25.966366 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 00:32:25.976767 lvm[1368]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 20 00:32:26.009202 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 20 00:32:26.040826 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 00:32:26.070027 augenrules[1384]: No rules Jan 20 00:32:26.081391 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 00:32:26.130194 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 00:32:26.155775 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 00:32:26.198589 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 00:32:26.214716 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:32:26.220660 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:32:26.228793 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 20 00:32:26.242342 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 00:32:26.256577 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 20 00:32:26.265779 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 00:32:26.277530 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 00:32:26.299761 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 00:32:26.300173 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 00:32:26.303938 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 00:32:26.304193 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 00:32:26.308717 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:32:26.309128 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 20 00:32:26.331138 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 00:32:26.341242 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 00:32:26.422166 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 00:32:26.442852 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 20 00:32:26.445826 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 00:32:26.446016 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 00:32:26.448106 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 00:32:26.455305 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 00:32:26.456761 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 00:32:26.460213 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 00:32:26.511155 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 00:32:26.518199 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 20 00:32:26.704249 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 00:32:26.715858 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:32:26.722413 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 20 00:32:26.823822 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Jan 20 00:32:26.825154 systemd-resolved[1391]: Positive Trust Anchors: Jan 20 00:32:26.825172 systemd-resolved[1391]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 00:32:26.825201 systemd-resolved[1391]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 00:32:26.831498 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 00:32:26.832553 systemd-networkd[1389]: lo: Link UP Jan 20 00:32:26.832563 systemd-networkd[1389]: lo: Gained carrier Jan 20 00:32:26.833214 systemd-resolved[1391]: Defaulting to hostname 'linux'. Jan 20 00:32:26.837536 systemd-networkd[1389]: Enumeration completed Jan 20 00:32:26.839730 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 00:32:26.840374 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:32:26.840400 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 00:32:26.842278 systemd-networkd[1389]: eth0: Link UP Jan 20 00:32:26.842306 systemd-networkd[1389]: eth0: Gained carrier Jan 20 00:32:26.842323 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:32:26.851667 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 00:32:26.874187 systemd[1]: Reached target network.target - Network. 
Jan 20 00:32:26.901698 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 00:32:26.901699 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 00:32:26.906147 systemd-timesyncd[1392]: Network configuration changed, trying to establish connection. Jan 20 00:32:26.907545 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 00:32:27.450469 systemd-resolved[1391]: Clock change detected. Flushing caches. Jan 20 00:32:27.450538 systemd-timesyncd[1392]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 00:32:27.450600 systemd-timesyncd[1392]: Initial clock synchronization to Tue 2026-01-20 00:32:27.450351 UTC. Jan 20 00:32:27.454488 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 00:32:27.459454 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 00:32:27.474041 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 00:32:27.480313 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 00:32:27.483643 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 00:32:27.486929 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 00:32:27.487043 systemd[1]: Reached target paths.target - Path Units. Jan 20 00:32:27.489402 systemd[1]: Reached target timers.target - Timer Units. Jan 20 00:32:27.493415 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 00:32:27.498194 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 00:32:27.510963 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Jan 20 00:32:27.515935 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 00:32:27.519767 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 00:32:27.526396 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 00:32:27.540945 systemd[1]: Reached target basic.target - Basic System. Jan 20 00:32:27.543497 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 00:32:27.543547 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 00:32:27.545260 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 00:32:27.549743 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 00:32:27.555195 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 00:32:27.561438 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 00:32:27.564735 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 00:32:27.567965 jq[1430]: false Jan 20 00:32:27.568510 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 00:32:27.575223 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 00:32:27.578990 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 00:32:27.582950 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 00:32:27.593880 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 00:32:27.597563 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jan 20 00:32:27.598197 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 00:32:27.605512 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 00:32:27.608530 dbus-daemon[1429]: [system] SELinux support is enabled Jan 20 00:32:27.612809 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 00:32:27.620888 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 00:32:27.622923 extend-filesystems[1431]: Found loop3 Jan 20 00:32:27.622923 extend-filesystems[1431]: Found loop4 Jan 20 00:32:27.622923 extend-filesystems[1431]: Found loop5 Jan 20 00:32:27.622923 extend-filesystems[1431]: Found sr0 Jan 20 00:32:27.622923 extend-filesystems[1431]: Found vda Jan 20 00:32:27.622923 extend-filesystems[1431]: Found vda1 Jan 20 00:32:27.622923 extend-filesystems[1431]: Found vda2 Jan 20 00:32:27.622923 extend-filesystems[1431]: Found vda3 Jan 20 00:32:27.622923 extend-filesystems[1431]: Found usr Jan 20 00:32:27.622923 extend-filesystems[1431]: Found vda4 Jan 20 00:32:27.622923 extend-filesystems[1431]: Found vda6 Jan 20 00:32:27.622923 extend-filesystems[1431]: Found vda7 Jan 20 00:32:27.622923 extend-filesystems[1431]: Found vda9 Jan 20 00:32:27.622923 extend-filesystems[1431]: Checking size of /dev/vda9 Jan 20 00:32:27.711936 extend-filesystems[1431]: Resized partition /dev/vda9 Jan 20 00:32:27.717257 update_engine[1438]: I20260120 00:32:27.703898 1438 main.cc:92] Flatcar Update Engine starting Jan 20 00:32:27.717257 update_engine[1438]: I20260120 00:32:27.707223 1438 update_check_scheduler.cc:74] Next update check in 2m27s Jan 20 00:32:27.722406 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 20 00:32:27.637226 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 20 00:32:27.722593 extend-filesystems[1463]: resize2fs 1.47.1 (20-May-2024) Jan 20 00:32:27.737740 jq[1441]: true Jan 20 00:32:27.637450 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 00:32:27.647328 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 00:32:27.739282 jq[1450]: true Jan 20 00:32:27.647554 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 00:32:27.653269 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 00:32:27.653512 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 00:32:27.663297 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 00:32:27.663337 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 00:32:27.667027 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 00:32:27.667061 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 00:32:27.674417 (ntainerd)[1456]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 00:32:27.708318 systemd[1]: Started update-engine.service - Update Engine. Jan 20 00:32:27.722383 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 20 00:32:27.752097 tar[1448]: linux-amd64/LICENSE Jan 20 00:32:27.752097 tar[1448]: linux-amd64/helm Jan 20 00:32:27.769697 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1335) Jan 20 00:32:27.783614 systemd-logind[1436]: Watching system buttons on /dev/input/event1 (Power Button) Jan 20 00:32:27.784282 systemd-logind[1436]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 00:32:27.785466 systemd-logind[1436]: New seat seat0. Jan 20 00:32:27.787405 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 00:32:27.804219 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 20 00:32:27.825280 extend-filesystems[1463]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 00:32:27.825280 extend-filesystems[1463]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 00:32:27.825280 extend-filesystems[1463]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 20 00:32:27.877241 extend-filesystems[1431]: Resized filesystem in /dev/vda9 Jan 20 00:32:27.840890 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 00:32:27.892594 bash[1481]: Updated "/home/core/.ssh/authorized_keys" Jan 20 00:32:27.841179 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 00:32:27.896579 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 00:32:27.900909 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 00:32:27.918377 locksmithd[1464]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 00:32:27.920624 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 00:32:27.958613 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 00:32:27.970592 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jan 20 00:32:27.985149 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 00:32:27.985412 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 00:32:28.015863 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 00:32:28.055638 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 00:32:28.067093 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 00:32:28.072108 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 00:32:28.077893 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 00:32:28.117306 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 00:32:28.143986 systemd[1]: Started sshd@0-10.0.0.16:22-10.0.0.1:41008.service - OpenSSH per-connection server daemon (10.0.0.1:41008). Jan 20 00:32:28.230574 sshd[1512]: Accepted publickey for core from 10.0.0.1 port 41008 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:32:28.232353 sshd[1512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:32:28.260368 systemd-logind[1436]: New session 1 of user core. Jan 20 00:32:28.263385 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 00:32:28.315782 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 00:32:28.360449 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 00:32:28.380190 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 20 00:32:28.391749 (systemd)[1517]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 00:32:28.399594 containerd[1456]: time="2026-01-20T00:32:28.399449891Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 20 00:32:28.479390 containerd[1456]: time="2026-01-20T00:32:28.478470708Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:32:28.479390 containerd[1456]: time="2026-01-20T00:32:28.482456761Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:32:28.479390 containerd[1456]: time="2026-01-20T00:32:28.482487979Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 20 00:32:28.479390 containerd[1456]: time="2026-01-20T00:32:28.482554093Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 20 00:32:28.483976 containerd[1456]: time="2026-01-20T00:32:28.482961002Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 20 00:32:28.483976 containerd[1456]: time="2026-01-20T00:32:28.482983965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 20 00:32:28.483976 containerd[1456]: time="2026-01-20T00:32:28.483075215Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:32:28.483976 containerd[1456]: time="2026-01-20T00:32:28.483101464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 20 00:32:28.483976 containerd[1456]: time="2026-01-20T00:32:28.483364596Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:32:28.483976 containerd[1456]: time="2026-01-20T00:32:28.483383442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 20 00:32:28.483976 containerd[1456]: time="2026-01-20T00:32:28.483396816Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:32:28.483976 containerd[1456]: time="2026-01-20T00:32:28.483406134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 20 00:32:28.483976 containerd[1456]: time="2026-01-20T00:32:28.483504648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:32:28.483976 containerd[1456]: time="2026-01-20T00:32:28.483893724Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:32:28.484688 containerd[1456]: time="2026-01-20T00:32:28.484047772Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:32:28.484688 containerd[1456]: time="2026-01-20T00:32:28.484071997Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 20 00:32:28.484688 containerd[1456]: time="2026-01-20T00:32:28.484221527Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 20 00:32:28.484688 containerd[1456]: time="2026-01-20T00:32:28.484291447Z" level=info msg="metadata content store policy set" policy=shared Jan 20 00:32:28.490979 containerd[1456]: time="2026-01-20T00:32:28.490488972Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 20 00:32:28.490979 containerd[1456]: time="2026-01-20T00:32:28.490587977Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 20 00:32:28.490979 containerd[1456]: time="2026-01-20T00:32:28.490719713Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 20 00:32:28.490979 containerd[1456]: time="2026-01-20T00:32:28.490764046Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 20 00:32:28.490979 containerd[1456]: time="2026-01-20T00:32:28.490867569Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 20 00:32:28.491199 containerd[1456]: time="2026-01-20T00:32:28.491133005Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 20 00:32:28.491710 containerd[1456]: time="2026-01-20T00:32:28.491603912Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 20 00:32:28.493407 containerd[1456]: time="2026-01-20T00:32:28.493343440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 20 00:32:28.493407 containerd[1456]: time="2026-01-20T00:32:28.493397181Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jan 20 00:32:28.493464 containerd[1456]: time="2026-01-20T00:32:28.493417338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 20 00:32:28.493464 containerd[1456]: time="2026-01-20T00:32:28.493436825Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 20 00:32:28.493464 containerd[1456]: time="2026-01-20T00:32:28.493453736Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 20 00:32:28.493527 containerd[1456]: time="2026-01-20T00:32:28.493468794Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 20 00:32:28.493527 containerd[1456]: time="2026-01-20T00:32:28.493484604Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 20 00:32:28.493527 containerd[1456]: time="2026-01-20T00:32:28.493501485Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 20 00:32:28.493527 containerd[1456]: time="2026-01-20T00:32:28.493517085Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 20 00:32:28.493589 containerd[1456]: time="2026-01-20T00:32:28.493532513Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 20 00:32:28.493589 containerd[1456]: time="2026-01-20T00:32:28.493546900Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 20 00:32:28.493589 containerd[1456]: time="2026-01-20T00:32:28.493568701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jan 20 00:32:28.493589 containerd[1456]: time="2026-01-20T00:32:28.493584881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 20 00:32:28.493707 containerd[1456]: time="2026-01-20T00:32:28.493600170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 20 00:32:28.493707 containerd[1456]: time="2026-01-20T00:32:28.493614166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 20 00:32:28.493707 containerd[1456]: time="2026-01-20T00:32:28.493629284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 20 00:32:28.493707 containerd[1456]: time="2026-01-20T00:32:28.493644262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 20 00:32:28.493901 containerd[1456]: time="2026-01-20T00:32:28.493779314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 20 00:32:28.493901 containerd[1456]: time="2026-01-20T00:32:28.493797208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 20 00:32:28.493901 containerd[1456]: time="2026-01-20T00:32:28.493890181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 20 00:32:28.493950 containerd[1456]: time="2026-01-20T00:32:28.493930096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 20 00:32:28.493969 containerd[1456]: time="2026-01-20T00:32:28.493947227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 20 00:32:28.493969 containerd[1456]: time="2026-01-20T00:32:28.493963708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 20 00:32:28.494010 containerd[1456]: time="2026-01-20T00:32:28.493978356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 20 00:32:28.494010 containerd[1456]: time="2026-01-20T00:32:28.493996289Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 20 00:32:28.494044 containerd[1456]: time="2026-01-20T00:32:28.494020324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 20 00:32:28.494044 containerd[1456]: time="2026-01-20T00:32:28.494036575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 20 00:32:28.494077 containerd[1456]: time="2026-01-20T00:32:28.494049729Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 20 00:32:28.494285 containerd[1456]: time="2026-01-20T00:32:28.494138995Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 20 00:32:28.494285 containerd[1456]: time="2026-01-20T00:32:28.494198857Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 20 00:32:28.494285 containerd[1456]: time="2026-01-20T00:32:28.494234814Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 20 00:32:28.494285 containerd[1456]: time="2026-01-20T00:32:28.494265512Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 20 00:32:28.494285 containerd[1456]: time="2026-01-20T00:32:28.494280680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 20 00:32:28.494387 containerd[1456]: time="2026-01-20T00:32:28.494301069Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 20 00:32:28.494387 containerd[1456]: time="2026-01-20T00:32:28.494313311Z" level=info msg="NRI interface is disabled by configuration." Jan 20 00:32:28.494387 containerd[1456]: time="2026-01-20T00:32:28.494329762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 20 00:32:28.495844 containerd[1456]: time="2026-01-20T00:32:28.494968815Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 20 00:32:28.495844 containerd[1456]: time="2026-01-20T00:32:28.495037333Z" level=info msg="Connect containerd service" Jan 20 00:32:28.495844 containerd[1456]: time="2026-01-20T00:32:28.495100471Z" level=info msg="using legacy CRI server" Jan 20 00:32:28.495844 containerd[1456]: time="2026-01-20T00:32:28.495115609Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 00:32:28.495844 containerd[1456]: time="2026-01-20T00:32:28.495415299Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 20 00:32:28.496996 containerd[1456]: time="2026-01-20T00:32:28.496894411Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Jan 20 00:32:28.497566 containerd[1456]: time="2026-01-20T00:32:28.497498752Z" level=info msg="Start subscribing containerd event" Jan 20 00:32:28.497903 containerd[1456]: time="2026-01-20T00:32:28.497734051Z" level=info msg="Start recovering state" Jan 20 00:32:28.501541 containerd[1456]: time="2026-01-20T00:32:28.501517467Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 00:32:28.501806 containerd[1456]: time="2026-01-20T00:32:28.501734262Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 00:32:28.502064 containerd[1456]: time="2026-01-20T00:32:28.501952025Z" level=info msg="Start event monitor" Jan 20 00:32:28.502064 containerd[1456]: time="2026-01-20T00:32:28.502002199Z" level=info msg="Start snapshots syncer" Jan 20 00:32:28.502064 containerd[1456]: time="2026-01-20T00:32:28.502026184Z" level=info msg="Start cni network conf syncer for default" Jan 20 00:32:28.502064 containerd[1456]: time="2026-01-20T00:32:28.502041272Z" level=info msg="Start streaming server" Jan 20 00:32:28.502368 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 00:32:28.506190 containerd[1456]: time="2026-01-20T00:32:28.506056901Z" level=info msg="containerd successfully booted in 0.116985s" Jan 20 00:32:28.572786 tar[1448]: linux-amd64/README.md Jan 20 00:32:28.606943 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 00:32:28.618423 systemd[1517]: Queued start job for default target default.target. Jan 20 00:32:28.647803 systemd[1517]: Created slice app.slice - User Application Slice. Jan 20 00:32:28.647876 systemd[1517]: Reached target paths.target - Paths. Jan 20 00:32:28.647892 systemd[1517]: Reached target timers.target - Timers. Jan 20 00:32:28.650485 systemd[1517]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 00:32:28.667005 systemd[1517]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Jan 20 00:32:28.667227 systemd[1517]: Reached target sockets.target - Sockets. Jan 20 00:32:28.667276 systemd[1517]: Reached target basic.target - Basic System. Jan 20 00:32:28.667337 systemd[1517]: Reached target default.target - Main User Target. Jan 20 00:32:28.667393 systemd[1517]: Startup finished in 260ms. Jan 20 00:32:28.667781 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 00:32:28.803585 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 00:32:28.848075 systemd-networkd[1389]: eth0: Gained IPv6LL Jan 20 00:32:28.853097 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 00:32:28.857625 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 00:32:28.875133 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 00:32:28.881207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:32:28.889900 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 00:32:28.943859 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 00:32:28.968870 systemd[1]: Started sshd@1-10.0.0.16:22-10.0.0.1:41018.service - OpenSSH per-connection server daemon (10.0.0.1:41018). Jan 20 00:32:28.973196 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 00:32:28.973603 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 00:32:28.978629 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 00:32:29.030257 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 41018 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:32:29.032542 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:32:29.039882 systemd-logind[1436]: New session 2 of user core. 
Jan 20 00:32:29.051960 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 00:32:29.125250 sshd[1551]: pam_unix(sshd:session): session closed for user core Jan 20 00:32:29.140275 systemd[1]: sshd@1-10.0.0.16:22-10.0.0.1:41018.service: Deactivated successfully. Jan 20 00:32:29.142283 systemd[1]: session-2.scope: Deactivated successfully. Jan 20 00:32:29.144184 systemd-logind[1436]: Session 2 logged out. Waiting for processes to exit. Jan 20 00:32:29.145780 systemd[1]: Started sshd@2-10.0.0.16:22-10.0.0.1:41026.service - OpenSSH per-connection server daemon (10.0.0.1:41026). Jan 20 00:32:29.169757 systemd-logind[1436]: Removed session 2. Jan 20 00:32:29.219199 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 41026 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:32:29.226204 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:32:29.234958 systemd-logind[1436]: New session 3 of user core. Jan 20 00:32:29.244951 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 00:32:29.707854 sshd[1559]: pam_unix(sshd:session): session closed for user core Jan 20 00:32:29.758326 systemd[1]: sshd@2-10.0.0.16:22-10.0.0.1:41026.service: Deactivated successfully. Jan 20 00:32:29.771262 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 00:32:29.777101 systemd-logind[1436]: Session 3 logged out. Waiting for processes to exit. Jan 20 00:32:29.787299 systemd-logind[1436]: Removed session 3. Jan 20 00:32:32.797096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:32:32.801608 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 00:32:32.805439 systemd[1]: Startup finished in 1.926s (kernel) + 8.852s (initrd) + 10.607s (userspace) = 21.385s. 
Jan 20 00:32:32.865275 (kubelet)[1570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:32:34.789382 kubelet[1570]: E0120 00:32:34.788759 1570 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:32:34.793037 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:32:34.793461 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:32:34.794130 systemd[1]: kubelet.service: Consumed 5.421s CPU time. Jan 20 00:32:39.765979 systemd[1]: Started sshd@3-10.0.0.16:22-10.0.0.1:43886.service - OpenSSH per-connection server daemon (10.0.0.1:43886). Jan 20 00:32:39.817596 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 43886 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:32:39.820320 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:32:39.827728 systemd-logind[1436]: New session 4 of user core. Jan 20 00:32:39.837892 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 00:32:40.033988 sshd[1583]: pam_unix(sshd:session): session closed for user core Jan 20 00:32:40.041762 systemd[1]: sshd@3-10.0.0.16:22-10.0.0.1:43886.service: Deactivated successfully. Jan 20 00:32:40.044031 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 00:32:40.045900 systemd-logind[1436]: Session 4 logged out. Waiting for processes to exit. Jan 20 00:32:40.054073 systemd[1]: Started sshd@4-10.0.0.16:22-10.0.0.1:43898.service - OpenSSH per-connection server daemon (10.0.0.1:43898). Jan 20 00:32:40.060932 systemd-logind[1436]: Removed session 4. 
Jan 20 00:32:40.096018 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 43898 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:32:40.099201 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:32:40.129719 systemd-logind[1436]: New session 5 of user core. Jan 20 00:32:40.141974 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 00:32:40.208010 sshd[1590]: pam_unix(sshd:session): session closed for user core Jan 20 00:32:40.294449 systemd[1]: sshd@4-10.0.0.16:22-10.0.0.1:43898.service: Deactivated successfully. Jan 20 00:32:40.303747 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 00:32:40.306196 systemd-logind[1436]: Session 5 logged out. Waiting for processes to exit. Jan 20 00:32:40.345635 systemd[1]: Started sshd@5-10.0.0.16:22-10.0.0.1:43904.service - OpenSSH per-connection server daemon (10.0.0.1:43904). Jan 20 00:32:40.352724 systemd-logind[1436]: Removed session 5. Jan 20 00:32:40.529439 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 43904 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:32:40.630724 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:32:40.723810 systemd-logind[1436]: New session 6 of user core. Jan 20 00:32:40.765322 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 00:32:41.196253 sshd[1597]: pam_unix(sshd:session): session closed for user core Jan 20 00:32:41.268451 systemd[1]: sshd@5-10.0.0.16:22-10.0.0.1:43904.service: Deactivated successfully. Jan 20 00:32:41.276797 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 00:32:41.281053 systemd-logind[1436]: Session 6 logged out. Waiting for processes to exit. Jan 20 00:32:41.294413 systemd[1]: Started sshd@6-10.0.0.16:22-10.0.0.1:43916.service - OpenSSH per-connection server daemon (10.0.0.1:43916). Jan 20 00:32:41.309977 systemd-logind[1436]: Removed session 6. 
Jan 20 00:32:41.375067 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 43916 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:32:41.377960 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:32:41.383968 systemd-logind[1436]: New session 7 of user core. Jan 20 00:32:41.393855 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 00:32:41.474089 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 00:32:41.474633 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:32:42.797006 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 00:32:42.798991 (dockerd)[1626]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 00:32:44.266975 dockerd[1626]: time="2026-01-20T00:32:44.265901321Z" level=info msg="Starting up" Jan 20 00:32:44.441191 dockerd[1626]: time="2026-01-20T00:32:44.441110341Z" level=info msg="Loading containers: start." Jan 20 00:32:44.641719 kernel: Initializing XFRM netlink socket Jan 20 00:32:44.784395 systemd-networkd[1389]: docker0: Link UP Jan 20 00:32:44.824924 dockerd[1626]: time="2026-01-20T00:32:44.824467794Z" level=info msg="Loading containers: done." Jan 20 00:32:44.851631 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jan 20 00:32:44.852583 dockerd[1626]: time="2026-01-20T00:32:44.852375631Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 00:32:44.852583 dockerd[1626]: time="2026-01-20T00:32:44.852490285Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 20 00:32:44.852730 dockerd[1626]: time="2026-01-20T00:32:44.852681181Z" level=info msg="Daemon has completed initialization" Jan 20 00:32:44.864064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:32:44.909703 dockerd[1626]: time="2026-01-20T00:32:44.909257250Z" level=info msg="API listen on /run/docker.sock" Jan 20 00:32:44.910755 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 00:32:45.532519 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:32:45.539445 (kubelet)[1780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:32:45.693358 kubelet[1780]: E0120 00:32:45.692712 1780 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:32:45.702018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:32:45.702356 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 20 00:32:45.927051 containerd[1456]: time="2026-01-20T00:32:45.920290585Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 20 00:32:47.142123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1696577167.mount: Deactivated successfully. Jan 20 00:32:51.425136 containerd[1456]: time="2026-01-20T00:32:51.424431283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:51.425136 containerd[1456]: time="2026-01-20T00:32:51.425503723Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Jan 20 00:32:51.428833 containerd[1456]: time="2026-01-20T00:32:51.427520121Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:51.439790 containerd[1456]: time="2026-01-20T00:32:51.439724010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:51.441057 containerd[1456]: time="2026-01-20T00:32:51.441031081Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 5.520691064s" Jan 20 00:32:51.441143 containerd[1456]: time="2026-01-20T00:32:51.441079280Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 20 00:32:51.444855 containerd[1456]: 
time="2026-01-20T00:32:51.444535312Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 20 00:32:54.817128 containerd[1456]: time="2026-01-20T00:32:54.816600912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:54.817128 containerd[1456]: time="2026-01-20T00:32:54.817501348Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Jan 20 00:32:54.820619 containerd[1456]: time="2026-01-20T00:32:54.818912717Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:54.824037 containerd[1456]: time="2026-01-20T00:32:54.823812249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:54.839513 containerd[1456]: time="2026-01-20T00:32:54.826467369Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 3.381892504s" Jan 20 00:32:54.839513 containerd[1456]: time="2026-01-20T00:32:54.838148482Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 20 00:32:54.842338 containerd[1456]: time="2026-01-20T00:32:54.842234843Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 20 
00:32:56.040497 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 00:32:56.060295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:32:56.519935 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:32:56.678558 (kubelet)[1863]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:32:57.046573 kubelet[1863]: E0120 00:32:57.046389 1863 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:32:57.054755 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:32:57.055131 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:32:57.056503 systemd[1]: kubelet.service: Consumed 1.019s CPU time. 
Jan 20 00:32:57.767084 containerd[1456]: time="2026-01-20T00:32:57.766424359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:57.767084 containerd[1456]: time="2026-01-20T00:32:57.767629286Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102" Jan 20 00:32:57.771244 containerd[1456]: time="2026-01-20T00:32:57.769460798Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:57.773371 containerd[1456]: time="2026-01-20T00:32:57.773307658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:57.775071 containerd[1456]: time="2026-01-20T00:32:57.774964228Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 2.932677397s" Jan 20 00:32:57.775071 containerd[1456]: time="2026-01-20T00:32:57.775053785Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 20 00:32:57.777796 containerd[1456]: time="2026-01-20T00:32:57.777568807Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 20 00:33:01.011901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2271865651.mount: Deactivated successfully. 
Jan 20 00:33:03.776131 containerd[1456]: time="2026-01-20T00:33:03.775227657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:03.776131 containerd[1456]: time="2026-01-20T00:33:03.776300787Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Jan 20 00:33:03.780557 containerd[1456]: time="2026-01-20T00:33:03.778474176Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:03.783962 containerd[1456]: time="2026-01-20T00:33:03.783842893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:03.785381 containerd[1456]: time="2026-01-20T00:33:03.785193104Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 6.00758158s" Jan 20 00:33:03.785381 containerd[1456]: time="2026-01-20T00:33:03.785291097Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 20 00:33:03.788301 containerd[1456]: time="2026-01-20T00:33:03.788233583Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 20 00:33:04.794407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1765767528.mount: Deactivated successfully. 
Jan 20 00:33:07.113600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 00:33:07.256637 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:33:08.012038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:33:08.012374 (kubelet)[1939]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:33:08.093492 containerd[1456]: time="2026-01-20T00:33:08.092901344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:08.097081 containerd[1456]: time="2026-01-20T00:33:08.095167275Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jan 20 00:33:08.098806 containerd[1456]: time="2026-01-20T00:33:08.098715534Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:08.107215 containerd[1456]: time="2026-01-20T00:33:08.107130975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:08.109141 containerd[1456]: time="2026-01-20T00:33:08.109063860Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 4.32078283s" Jan 20 00:33:08.109141 containerd[1456]: time="2026-01-20T00:33:08.109127598Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 20 00:33:08.111391 containerd[1456]: time="2026-01-20T00:33:08.111348365Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 00:33:08.348706 kubelet[1939]: E0120 00:33:08.348543 1939 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:33:08.355345 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:33:08.355867 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:33:08.356404 systemd[1]: kubelet.service: Consumed 1.084s CPU time. Jan 20 00:33:08.795157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount622648891.mount: Deactivated successfully. 
Jan 20 00:33:08.822376 containerd[1456]: time="2026-01-20T00:33:08.821489995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:08.825831 containerd[1456]: time="2026-01-20T00:33:08.824975367Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 20 00:33:08.827721 containerd[1456]: time="2026-01-20T00:33:08.827622896Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:08.833089 containerd[1456]: time="2026-01-20T00:33:08.832971476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:08.834057 containerd[1456]: time="2026-01-20T00:33:08.833921515Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 722.520823ms" Jan 20 00:33:08.834057 containerd[1456]: time="2026-01-20T00:33:08.833986074Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 00:33:08.838504 containerd[1456]: time="2026-01-20T00:33:08.838316224Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 20 00:33:09.427217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3126758258.mount: Deactivated successfully. Jan 20 00:33:13.452367 update_engine[1438]: I20260120 00:33:13.450219 1438 update_attempter.cc:509] Updating boot flags... 
Jan 20 00:33:13.651131 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2012) Jan 20 00:33:14.191746 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2012) Jan 20 00:33:14.546887 containerd[1456]: time="2026-01-20T00:33:14.545157232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:14.546887 containerd[1456]: time="2026-01-20T00:33:14.546186272Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Jan 20 00:33:14.548430 containerd[1456]: time="2026-01-20T00:33:14.547304403Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:14.552469 containerd[1456]: time="2026-01-20T00:33:14.552379225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:14.554520 containerd[1456]: time="2026-01-20T00:33:14.554405158Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 5.71605405s" Jan 20 00:33:14.554520 containerd[1456]: time="2026-01-20T00:33:14.554475379Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 20 00:33:18.597516 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Jan 20 00:33:18.610315 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:33:18.957084 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:33:18.985446 (kubelet)[2054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:33:19.087022 kubelet[2054]: E0120 00:33:19.086351 2054 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:33:19.123474 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:33:19.124038 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:33:19.426222 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:33:19.446097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:33:19.479495 systemd[1]: Reloading requested from client PID 2069 ('systemctl') (unit session-7.scope)... Jan 20 00:33:19.479541 systemd[1]: Reloading... Jan 20 00:33:19.640825 zram_generator::config[2108]: No configuration found. Jan 20 00:33:19.818289 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:33:19.909920 systemd[1]: Reloading finished in 429 ms. Jan 20 00:33:20.016387 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 00:33:20.031369 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 00:33:20.032089 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 00:33:20.032465 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:33:20.043210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:33:20.228982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:33:20.254451 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 00:33:20.319488 kubelet[2159]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:33:20.319488 kubelet[2159]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 00:33:20.319488 kubelet[2159]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 00:33:20.320118 kubelet[2159]: I0120 00:33:20.319624 2159 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 00:33:20.581096 kubelet[2159]: I0120 00:33:20.580946 2159 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 00:33:20.581096 kubelet[2159]: I0120 00:33:20.580997 2159 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 00:33:20.581338 kubelet[2159]: I0120 00:33:20.581264 2159 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 00:33:20.606444 kubelet[2159]: E0120 00:33:20.606370 2159 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 00:33:20.607149 kubelet[2159]: I0120 00:33:20.607081 2159 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 00:33:20.623175 kubelet[2159]: E0120 00:33:20.623071 2159 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 00:33:20.623282 kubelet[2159]: I0120 00:33:20.623203 2159 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 20 00:33:20.633110 kubelet[2159]: I0120 00:33:20.633026 2159 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 00:33:20.633586 kubelet[2159]: I0120 00:33:20.633504 2159 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 00:33:20.634060 kubelet[2159]: I0120 00:33:20.633542 2159 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 00:33:20.634320 kubelet[2159]: I0120 00:33:20.634069 2159 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 00:33:20.634320 
kubelet[2159]: I0120 00:33:20.634083 2159 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 00:33:20.634413 kubelet[2159]: I0120 00:33:20.634332 2159 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:33:20.636525 kubelet[2159]: I0120 00:33:20.636464 2159 kubelet.go:480] "Attempting to sync node with API server" Jan 20 00:33:20.636525 kubelet[2159]: I0120 00:33:20.636498 2159 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 00:33:20.636697 kubelet[2159]: I0120 00:33:20.636595 2159 kubelet.go:386] "Adding apiserver pod source" Jan 20 00:33:20.638360 kubelet[2159]: I0120 00:33:20.638309 2159 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 00:33:20.644476 kubelet[2159]: I0120 00:33:20.644423 2159 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 00:33:20.645086 kubelet[2159]: E0120 00:33:20.645030 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 00:33:20.645291 kubelet[2159]: E0120 00:33:20.645237 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 00:33:20.645529 kubelet[2159]: I0120 00:33:20.645470 2159 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 00:33:20.647118 kubelet[2159]: W0120 
00:33:20.647035 2159 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 00:33:20.652807 kubelet[2159]: I0120 00:33:20.652778 2159 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 00:33:20.652921 kubelet[2159]: I0120 00:33:20.652900 2159 server.go:1289] "Started kubelet" Jan 20 00:33:20.655309 kubelet[2159]: I0120 00:33:20.653824 2159 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 00:33:20.655309 kubelet[2159]: I0120 00:33:20.654901 2159 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 00:33:20.655595 kubelet[2159]: I0120 00:33:20.655552 2159 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 00:33:20.656763 kubelet[2159]: I0120 00:33:20.656725 2159 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 00:33:20.657498 kubelet[2159]: I0120 00:33:20.657398 2159 server.go:317] "Adding debug handlers to kubelet server" Jan 20 00:33:20.658739 kubelet[2159]: E0120 00:33:20.657242 2159 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c492a870a69da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 00:33:20.652823002 +0000 UTC m=+0.391909765,LastTimestamp:2026-01-20 00:33:20.652823002 +0000 UTC m=+0.391909765,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 00:33:20.659356 
kubelet[2159]: E0120 00:33:20.659270 2159 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:33:20.659356 kubelet[2159]: I0120 00:33:20.659301 2159 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 00:33:20.659455 kubelet[2159]: I0120 00:33:20.659386 2159 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 00:33:20.664707 kubelet[2159]: I0120 00:33:20.662098 2159 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 00:33:20.664707 kubelet[2159]: I0120 00:33:20.662266 2159 reconciler.go:26] "Reconciler: start to sync state" Jan 20 00:33:20.664707 kubelet[2159]: E0120 00:33:20.663567 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 00:33:20.664707 kubelet[2159]: E0120 00:33:20.663911 2159 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="200ms" Jan 20 00:33:20.665597 kubelet[2159]: E0120 00:33:20.665561 2159 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 00:33:20.666434 kubelet[2159]: I0120 00:33:20.666318 2159 factory.go:223] Registration of the systemd container factory successfully Jan 20 00:33:20.666508 kubelet[2159]: I0120 00:33:20.666494 2159 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 00:33:20.670898 kubelet[2159]: I0120 00:33:20.670850 2159 factory.go:223] Registration of the containerd container factory successfully Jan 20 00:33:20.673023 kubelet[2159]: I0120 00:33:20.672949 2159 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 20 00:33:20.692631 kubelet[2159]: I0120 00:33:20.692567 2159 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 00:33:20.692631 kubelet[2159]: I0120 00:33:20.692613 2159 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 00:33:20.692631 kubelet[2159]: I0120 00:33:20.692638 2159 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:33:20.699197 kubelet[2159]: I0120 00:33:20.699101 2159 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 20 00:33:20.699503 kubelet[2159]: I0120 00:33:20.699456 2159 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 00:33:20.699563 kubelet[2159]: I0120 00:33:20.699543 2159 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 00:33:20.699618 kubelet[2159]: I0120 00:33:20.699584 2159 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 00:33:20.699989 kubelet[2159]: E0120 00:33:20.699891 2159 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 00:33:20.760600 kubelet[2159]: E0120 00:33:20.760115 2159 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:33:20.769557 kubelet[2159]: I0120 00:33:20.769503 2159 policy_none.go:49] "None policy: Start" Jan 20 00:33:20.769708 kubelet[2159]: I0120 00:33:20.769583 2159 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 00:33:20.769708 kubelet[2159]: I0120 00:33:20.769636 2159 state_mem.go:35] "Initializing new in-memory state store" Jan 20 00:33:20.770391 kubelet[2159]: E0120 00:33:20.770294 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 00:33:20.779871 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 00:33:20.798577 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 00:33:20.800171 kubelet[2159]: E0120 00:33:20.800132 2159 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 00:33:20.802464 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 20 00:33:20.813926 kubelet[2159]: E0120 00:33:20.813874 2159 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 00:33:20.814165 kubelet[2159]: I0120 00:33:20.814126 2159 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 00:33:20.814165 kubelet[2159]: I0120 00:33:20.814144 2159 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 00:33:20.814498 kubelet[2159]: I0120 00:33:20.814449 2159 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 00:33:20.816859 kubelet[2159]: E0120 00:33:20.816825 2159 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 00:33:20.816947 kubelet[2159]: E0120 00:33:20.816880 2159 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 00:33:20.867527 kubelet[2159]: E0120 00:33:20.864897 2159 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="400ms" Jan 20 00:33:20.936946 kubelet[2159]: I0120 00:33:20.936412 2159 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:33:20.936946 kubelet[2159]: E0120 00:33:20.937796 2159 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Jan 20 00:33:21.046842 systemd[1]: Created slice kubepods-burstable-pod1ba34fc72daeab8ea28e7a418c83523e.slice - libcontainer container kubepods-burstable-pod1ba34fc72daeab8ea28e7a418c83523e.slice. 
Jan 20 00:33:21.069962 kubelet[2159]: I0120 00:33:21.069639 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:21.069962 kubelet[2159]: I0120 00:33:21.069985 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:21.069962 kubelet[2159]: I0120 00:33:21.070063 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:21.069962 kubelet[2159]: I0120 00:33:21.070089 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:21.069962 kubelet[2159]: I0120 00:33:21.070107 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ba34fc72daeab8ea28e7a418c83523e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ba34fc72daeab8ea28e7a418c83523e\") " 
pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:21.071358 kubelet[2159]: I0120 00:33:21.070169 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ba34fc72daeab8ea28e7a418c83523e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1ba34fc72daeab8ea28e7a418c83523e\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:21.071358 kubelet[2159]: I0120 00:33:21.070207 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:21.071358 kubelet[2159]: I0120 00:33:21.070235 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 20 00:33:21.071358 kubelet[2159]: I0120 00:33:21.070315 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ba34fc72daeab8ea28e7a418c83523e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ba34fc72daeab8ea28e7a418c83523e\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:21.106496 kubelet[2159]: E0120 00:33:21.106318 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:33:21.188236 kubelet[2159]: I0120 00:33:21.187221 2159 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:33:21.201337 
kubelet[2159]: E0120 00:33:21.194765 2159 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Jan 20 00:33:21.188236 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 20 00:33:21.235143 kubelet[2159]: E0120 00:33:21.235031 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:33:21.247248 kubelet[2159]: E0120 00:33:21.247179 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:21.251979 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
Jan 20 00:33:21.254829 containerd[1456]: time="2026-01-20T00:33:21.254735090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 20 00:33:21.259805 kubelet[2159]: E0120 00:33:21.254930 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:33:21.261335 kubelet[2159]: E0120 00:33:21.261262 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:21.273301 kubelet[2159]: E0120 00:33:21.273095 2159 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="800ms" Jan 20 00:33:21.274457 containerd[1456]: time="2026-01-20T00:33:21.274124225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 20 00:33:21.424626 kubelet[2159]: E0120 00:33:21.423614 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:21.457831 containerd[1456]: time="2026-01-20T00:33:21.455407686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1ba34fc72daeab8ea28e7a418c83523e,Namespace:kube-system,Attempt:0,}" Jan 20 00:33:21.593464 kubelet[2159]: E0120 00:33:21.592698 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 00:33:21.644696 kubelet[2159]: I0120 00:33:21.644131 2159 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:33:21.646845 kubelet[2159]: E0120 00:33:21.646808 2159 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Jan 20 00:33:21.767862 kubelet[2159]: E0120 00:33:21.756462 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 00:33:21.967424 kubelet[2159]: E0120 00:33:21.966013 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 00:33:22.088536 kubelet[2159]: E0120 00:33:22.088453 2159 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="1.6s" Jan 20 00:33:22.118015 kubelet[2159]: E0120 00:33:22.117372 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.CSIDriver" Jan 20 00:33:22.278996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount221427899.mount: Deactivated successfully. Jan 20 00:33:22.388057 containerd[1456]: time="2026-01-20T00:33:22.382724992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:33:22.388057 containerd[1456]: time="2026-01-20T00:33:22.387217155Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 20 00:33:22.390482 containerd[1456]: time="2026-01-20T00:33:22.389209916Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:33:22.390758 containerd[1456]: time="2026-01-20T00:33:22.390640420Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:33:22.392140 containerd[1456]: time="2026-01-20T00:33:22.392067416Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:33:22.393298 containerd[1456]: time="2026-01-20T00:33:22.393218579Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 20 00:33:22.400434 containerd[1456]: time="2026-01-20T00:33:22.399405840Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 20 00:33:22.404898 containerd[1456]: time="2026-01-20T00:33:22.404745456Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:33:22.411884 containerd[1456]: time="2026-01-20T00:33:22.410819283Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.136544489s" Jan 20 00:33:22.417562 containerd[1456]: time="2026-01-20T00:33:22.417499040Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.162487716s" Jan 20 00:33:22.443714 containerd[1456]: time="2026-01-20T00:33:22.443014239Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 986.915296ms" Jan 20 00:33:22.453462 kubelet[2159]: I0120 00:33:22.453369 2159 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:33:22.454294 kubelet[2159]: E0120 00:33:22.454117 2159 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Jan 20 00:33:22.677532 kubelet[2159]: E0120 00:33:22.675419 2159 certificate_manager.go:596] "Failed while requesting a signed certificate 
from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 00:33:22.720972 containerd[1456]: time="2026-01-20T00:33:22.715492246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:33:22.720972 containerd[1456]: time="2026-01-20T00:33:22.717716741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:33:22.720972 containerd[1456]: time="2026-01-20T00:33:22.717748369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:22.720972 containerd[1456]: time="2026-01-20T00:33:22.718331733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:22.728570 containerd[1456]: time="2026-01-20T00:33:22.727104890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:33:22.728570 containerd[1456]: time="2026-01-20T00:33:22.727164200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:33:22.728570 containerd[1456]: time="2026-01-20T00:33:22.727178206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:22.728570 containerd[1456]: time="2026-01-20T00:33:22.727407032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:22.741699 containerd[1456]: time="2026-01-20T00:33:22.741125625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:33:22.741699 containerd[1456]: time="2026-01-20T00:33:22.741184705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:33:22.741699 containerd[1456]: time="2026-01-20T00:33:22.741200966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:22.741699 containerd[1456]: time="2026-01-20T00:33:22.741297495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:22.850789 systemd[1]: Started cri-containerd-322881fc7c12c2bfffb0bac847764314e34d8aa78d7d60fbe00f492b5cdf35f8.scope - libcontainer container 322881fc7c12c2bfffb0bac847764314e34d8aa78d7d60fbe00f492b5cdf35f8. Jan 20 00:33:22.880384 systemd[1]: Started cri-containerd-d72cc47d1f7ea126a6ae4a7cb636f66515dffe3d92779acb407d5cd30408d548.scope - libcontainer container d72cc47d1f7ea126a6ae4a7cb636f66515dffe3d92779acb407d5cd30408d548. Jan 20 00:33:22.882448 systemd[1]: Started cri-containerd-e3454329b53c856e6de864ebbeea639a78d78957b42eeb4c717b68663fc28d85.scope - libcontainer container e3454329b53c856e6de864ebbeea639a78d78957b42eeb4c717b68663fc28d85. 
Jan 20 00:33:23.047157 containerd[1456]: time="2026-01-20T00:33:23.045209583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1ba34fc72daeab8ea28e7a418c83523e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d72cc47d1f7ea126a6ae4a7cb636f66515dffe3d92779acb407d5cd30408d548\""
Jan 20 00:33:23.047157 containerd[1456]: time="2026-01-20T00:33:23.045256200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3454329b53c856e6de864ebbeea639a78d78957b42eeb4c717b68663fc28d85\""
Jan 20 00:33:23.051830 kubelet[2159]: E0120 00:33:23.051282 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:33:23.051830 kubelet[2159]: E0120 00:33:23.051368 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:33:23.053286 containerd[1456]: time="2026-01-20T00:33:23.053181001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"322881fc7c12c2bfffb0bac847764314e34d8aa78d7d60fbe00f492b5cdf35f8\""
Jan 20 00:33:23.055170 kubelet[2159]: E0120 00:33:23.055102 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:33:23.058776 containerd[1456]: time="2026-01-20T00:33:23.058741005Z" level=info msg="CreateContainer within sandbox \"d72cc47d1f7ea126a6ae4a7cb636f66515dffe3d92779acb407d5cd30408d548\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 20 00:33:23.067425 containerd[1456]: time="2026-01-20T00:33:23.066614787Z" level=info msg="CreateContainer within sandbox \"e3454329b53c856e6de864ebbeea639a78d78957b42eeb4c717b68663fc28d85\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 20 00:33:23.069128 containerd[1456]: time="2026-01-20T00:33:23.068632405Z" level=info msg="CreateContainer within sandbox \"322881fc7c12c2bfffb0bac847764314e34d8aa78d7d60fbe00f492b5cdf35f8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 20 00:33:23.097194 containerd[1456]: time="2026-01-20T00:33:23.096617211Z" level=info msg="CreateContainer within sandbox \"d72cc47d1f7ea126a6ae4a7cb636f66515dffe3d92779acb407d5cd30408d548\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"39f567fd874a6f5f1c7dd0b03dd95f15fc867ae2aac6c6c02d04be41558d09ea\""
Jan 20 00:33:23.102261 containerd[1456]: time="2026-01-20T00:33:23.102203264Z" level=info msg="StartContainer for \"39f567fd874a6f5f1c7dd0b03dd95f15fc867ae2aac6c6c02d04be41558d09ea\""
Jan 20 00:33:23.103936 containerd[1456]: time="2026-01-20T00:33:23.103880761Z" level=info msg="CreateContainer within sandbox \"322881fc7c12c2bfffb0bac847764314e34d8aa78d7d60fbe00f492b5cdf35f8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d3fde0f3ac6e314bb5117e6bbdc2c5e78a12df0c0d808afbe639362d7c4c70a1\""
Jan 20 00:33:23.104423 containerd[1456]: time="2026-01-20T00:33:23.104376070Z" level=info msg="StartContainer for \"d3fde0f3ac6e314bb5117e6bbdc2c5e78a12df0c0d808afbe639362d7c4c70a1\""
Jan 20 00:33:23.106592 containerd[1456]: time="2026-01-20T00:33:23.106517449Z" level=info msg="CreateContainer within sandbox \"e3454329b53c856e6de864ebbeea639a78d78957b42eeb4c717b68663fc28d85\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"49f467e996899fa940900ddc3f4332c062b7e18b220bc2b34ecfd1899cba0947\""
Jan 20 00:33:23.107114 containerd[1456]: time="2026-01-20T00:33:23.107053537Z" level=info msg="StartContainer for \"49f467e996899fa940900ddc3f4332c062b7e18b220bc2b34ecfd1899cba0947\""
Jan 20 00:33:24.311720 kubelet[2159]: E0120 00:33:24.309317 2159 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="3.2s"
Jan 20 00:33:24.311720 kubelet[2159]: E0120 00:33:24.309793 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 20 00:33:24.354249 kubelet[2159]: E0120 00:33:24.351281 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 20 00:33:24.537270 kubelet[2159]: I0120 00:33:24.510749 2159 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 20 00:33:24.604215 kubelet[2159]: E0120 00:33:24.597747 2159 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost"
Jan 20 00:33:24.747126 kubelet[2159]: E0120 00:33:24.746910 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 20 00:33:24.822439 kubelet[2159]: E0120 00:33:24.821988 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 20 00:33:24.964849 kubelet[2159]: E0120 00:33:24.952327 2159 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c492a870a69da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 00:33:20.652823002 +0000 UTC m=+0.391909765,LastTimestamp:2026-01-20 00:33:20.652823002 +0000 UTC m=+0.391909765,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 20 00:33:24.994941 systemd[1]: Started cri-containerd-39f567fd874a6f5f1c7dd0b03dd95f15fc867ae2aac6c6c02d04be41558d09ea.scope - libcontainer container 39f567fd874a6f5f1c7dd0b03dd95f15fc867ae2aac6c6c02d04be41558d09ea.
Jan 20 00:33:24.997372 systemd[1]: Started cri-containerd-49f467e996899fa940900ddc3f4332c062b7e18b220bc2b34ecfd1899cba0947.scope - libcontainer container 49f467e996899fa940900ddc3f4332c062b7e18b220bc2b34ecfd1899cba0947.
Jan 20 00:33:25.000486 systemd[1]: Started cri-containerd-d3fde0f3ac6e314bb5117e6bbdc2c5e78a12df0c0d808afbe639362d7c4c70a1.scope - libcontainer container d3fde0f3ac6e314bb5117e6bbdc2c5e78a12df0c0d808afbe639362d7c4c70a1.
Jan 20 00:33:25.182279 containerd[1456]: time="2026-01-20T00:33:25.181588026Z" level=info msg="StartContainer for \"49f467e996899fa940900ddc3f4332c062b7e18b220bc2b34ecfd1899cba0947\" returns successfully"
Jan 20 00:33:25.182279 containerd[1456]: time="2026-01-20T00:33:25.181695626Z" level=info msg="StartContainer for \"d3fde0f3ac6e314bb5117e6bbdc2c5e78a12df0c0d808afbe639362d7c4c70a1\" returns successfully"
Jan 20 00:33:25.182279 containerd[1456]: time="2026-01-20T00:33:25.181699904Z" level=info msg="StartContainer for \"39f567fd874a6f5f1c7dd0b03dd95f15fc867ae2aac6c6c02d04be41558d09ea\" returns successfully"
Jan 20 00:33:26.003480 kubelet[2159]: E0120 00:33:26.002060 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 00:33:26.003480 kubelet[2159]: E0120 00:33:26.002692 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:33:26.003480 kubelet[2159]: E0120 00:33:26.003116 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 00:33:26.003480 kubelet[2159]: E0120 00:33:26.003268 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:33:26.009358 kubelet[2159]: E0120 00:33:26.009010 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 00:33:26.009358 kubelet[2159]: E0120 00:33:26.009284 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:33:27.019187 kubelet[2159]: E0120 00:33:27.018907 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 00:33:27.024375 kubelet[2159]: E0120 00:33:27.024346 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:33:27.026887 kubelet[2159]: E0120 00:33:27.026373 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 00:33:27.027514 kubelet[2159]: E0120 00:33:27.027432 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 00:33:27.027966 kubelet[2159]: E0120 00:33:27.027944 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:33:27.029802 kubelet[2159]: E0120 00:33:27.029780 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:33:28.082263 kubelet[2159]: I0120 00:33:28.071448 2159 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 20 00:33:28.168408 kubelet[2159]: E0120 00:33:28.168267 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 00:33:28.169757 kubelet[2159]: E0120 00:33:28.169727 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 00:33:28.169890 kubelet[2159]: E0120 00:33:28.169854 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:33:28.170137 kubelet[2159]: E0120 00:33:28.169982 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:33:30.158432 kubelet[2159]: E0120 00:33:30.157992 2159 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 20 00:33:30.232075 kubelet[2159]: I0120 00:33:30.231854 2159 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 20 00:33:30.232075 kubelet[2159]: E0120 00:33:30.231899 2159 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jan 20 00:33:30.264509 kubelet[2159]: I0120 00:33:30.264423 2159 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 20 00:33:30.278105 kubelet[2159]: E0120 00:33:30.277995 2159 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jan 20 00:33:30.278105 kubelet[2159]: I0120 00:33:30.278052 2159 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 20 00:33:30.280435 kubelet[2159]: E0120 00:33:30.280359 2159 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jan 20 00:33:30.280435 kubelet[2159]: I0120 00:33:30.280413 2159 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 20 00:33:30.281986 kubelet[2159]: E0120 00:33:30.281937 2159 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jan 20 00:33:30.457591 kubelet[2159]: I0120 00:33:30.455010 2159 apiserver.go:52] "Watching apiserver"
Jan 20 00:33:30.462387 kubelet[2159]: I0120 00:33:30.462224 2159 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 20 00:33:32.365717 kubelet[2159]: I0120 00:33:32.363885 2159 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 20 00:33:32.391273 kubelet[2159]: E0120 00:33:32.390823 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:33:32.694366 kubelet[2159]: E0120 00:33:32.688466 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:33:34.312954 kubelet[2159]: I0120 00:33:34.312492 2159 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 20 00:33:34.320078 kubelet[2159]: E0120 00:33:34.319984 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:33:34.578336 kubelet[2159]: I0120 00:33:34.576762 2159 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 20 00:33:34.588305 kubelet[2159]: E0120 00:33:34.588255 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:33:34.700735 kubelet[2159]: E0120 00:33:34.700030 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:33:34.700735 kubelet[2159]: E0120 00:33:34.700576 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:33:35.204809 systemd[1]: Reloading requested from client PID 2453 ('systemctl') (unit session-7.scope)...
Jan 20 00:33:35.204842 systemd[1]: Reloading...
Jan 20 00:33:35.279730 zram_generator::config[2492]: No configuration found.
Jan 20 00:33:35.646020 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 20 00:33:35.743885 systemd[1]: Reloading finished in 538 ms.
Jan 20 00:33:35.794559 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 00:33:35.806763 systemd[1]: kubelet.service: Deactivated successfully.
Jan 20 00:33:35.807167 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 00:33:35.807243 systemd[1]: kubelet.service: Consumed 6.593s CPU time, 138.3M memory peak, 0B memory swap peak.
Jan 20 00:33:35.817926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 00:33:36.159272 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 00:33:36.165926 (kubelet)[2537]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 20 00:33:36.234714 kubelet[2537]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 00:33:36.234714 kubelet[2537]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 20 00:33:36.234714 kubelet[2537]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 00:33:36.234714 kubelet[2537]: I0120 00:33:36.234451 2537 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 20 00:33:36.243984 kubelet[2537]: I0120 00:33:36.243936 2537 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jan 20 00:33:36.243984 kubelet[2537]: I0120 00:33:36.243971 2537 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 20 00:33:36.244225 kubelet[2537]: I0120 00:33:36.244197 2537 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 20 00:33:36.248013 kubelet[2537]: I0120 00:33:36.247968 2537 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jan 20 00:33:36.252685 kubelet[2537]: I0120 00:33:36.252579 2537 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 20 00:33:36.285530 kubelet[2537]: E0120 00:33:36.285380 2537 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 20 00:33:36.285530 kubelet[2537]: I0120 00:33:36.285502 2537 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 20 00:33:36.313705 kubelet[2537]: I0120 00:33:36.313556 2537 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 20 00:33:36.314762 kubelet[2537]: I0120 00:33:36.314215 2537 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 20 00:33:36.314762 kubelet[2537]: I0120 00:33:36.314287 2537 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 20 00:33:36.314762 kubelet[2537]: I0120 00:33:36.314512 2537 topology_manager.go:138] "Creating topology manager with none policy"
Jan 20 00:33:36.314762 kubelet[2537]: I0120 00:33:36.314522 2537 container_manager_linux.go:303] "Creating device plugin manager"
Jan 20 00:33:36.314762 kubelet[2537]: I0120 00:33:36.314757 2537 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 00:33:36.315177 kubelet[2537]: I0120 00:33:36.315076 2537 kubelet.go:480] "Attempting to sync node with API server"
Jan 20 00:33:36.315177 kubelet[2537]: I0120 00:33:36.315092 2537 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 20 00:33:36.315177 kubelet[2537]: I0120 00:33:36.315116 2537 kubelet.go:386] "Adding apiserver pod source"
Jan 20 00:33:36.315177 kubelet[2537]: I0120 00:33:36.315132 2537 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 20 00:33:36.433118 kubelet[2537]: I0120 00:33:36.432026 2537 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 20 00:33:36.440506 kubelet[2537]: I0120 00:33:36.433336 2537 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 20 00:33:36.460561 kubelet[2537]: I0120 00:33:36.460494 2537 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 20 00:33:36.461317 kubelet[2537]: I0120 00:33:36.460737 2537 server.go:1289] "Started kubelet"
Jan 20 00:33:36.462296 kubelet[2537]: I0120 00:33:36.462262 2537 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 20 00:33:36.463338 kubelet[2537]: I0120 00:33:36.463258 2537 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 20 00:33:36.464914 kubelet[2537]: I0120 00:33:36.464825 2537 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 20 00:33:36.465050 kubelet[2537]: I0120 00:33:36.464819 2537 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 20 00:33:36.465569 kubelet[2537]: I0120 00:33:36.465498 2537 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 20 00:33:36.465617 kubelet[2537]: I0120 00:33:36.465569 2537 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 20 00:33:36.465708 kubelet[2537]: I0120 00:33:36.465615 2537 reconciler.go:26] "Reconciler: start to sync state"
Jan 20 00:33:36.469345 kubelet[2537]: I0120 00:33:36.466536 2537 server.go:317] "Adding debug handlers to kubelet server"
Jan 20 00:33:36.480224 kubelet[2537]: I0120 00:33:36.470614 2537 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 20 00:33:36.480224 kubelet[2537]: I0120 00:33:36.479318 2537 factory.go:223] Registration of the systemd container factory successfully
Jan 20 00:33:36.480224 kubelet[2537]: I0120 00:33:36.479463 2537 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 20 00:33:36.496132 kubelet[2537]: E0120 00:33:36.495518 2537 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 20 00:33:36.497356 kubelet[2537]: I0120 00:33:36.497028 2537 factory.go:223] Registration of the containerd container factory successfully
Jan 20 00:33:36.524320 kubelet[2537]: I0120 00:33:36.524179 2537 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jan 20 00:33:36.526554 kubelet[2537]: I0120 00:33:36.526134 2537 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 20 00:33:36.526554 kubelet[2537]: I0120 00:33:36.526153 2537 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 20 00:33:36.526554 kubelet[2537]: I0120 00:33:36.526172 2537 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 20 00:33:36.526554 kubelet[2537]: I0120 00:33:36.526181 2537 kubelet.go:2436] "Starting kubelet main sync loop"
Jan 20 00:33:36.526554 kubelet[2537]: E0120 00:33:36.526225 2537 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 20 00:33:36.551870 kubelet[2537]: I0120 00:33:36.551802 2537 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 20 00:33:36.551870 kubelet[2537]: I0120 00:33:36.551847 2537 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 20 00:33:36.551870 kubelet[2537]: I0120 00:33:36.551871 2537 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 00:33:36.552085 kubelet[2537]: I0120 00:33:36.552005 2537 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 20 00:33:36.552085 kubelet[2537]: I0120 00:33:36.552041 2537 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 20 00:33:36.552085 kubelet[2537]: I0120 00:33:36.552061 2537 policy_none.go:49] "None policy: Start"
Jan 20 00:33:36.552085 kubelet[2537]: I0120 00:33:36.552072 2537 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 20 00:33:36.552085 kubelet[2537]: I0120 00:33:36.552083 2537 state_mem.go:35] "Initializing new in-memory state store"
Jan 20 00:33:36.552253 kubelet[2537]: I0120 00:33:36.552212 2537 state_mem.go:75] "Updated machine memory state"
Jan 20 00:33:36.559230 kubelet[2537]: E0120 00:33:36.558208 2537 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 20 00:33:36.559230 kubelet[2537]: I0120 00:33:36.558539 2537 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 20 00:33:36.559230 kubelet[2537]: I0120 00:33:36.558553 2537 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 20 00:33:36.559230 kubelet[2537]: I0120 00:33:36.559133 2537 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 20 00:33:36.562143 kubelet[2537]: E0120 00:33:36.561403 2537 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 20 00:33:36.627935 kubelet[2537]: I0120 00:33:36.627784 2537 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 20 00:33:36.627935 kubelet[2537]: I0120 00:33:36.627828 2537 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 20 00:33:36.628213 kubelet[2537]: I0120 00:33:36.627842 2537 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 20 00:33:36.637801 kubelet[2537]: E0120 00:33:36.637736 2537 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jan 20 00:33:36.639167 kubelet[2537]: E0120 00:33:36.639047 2537 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 20 00:33:36.639167 kubelet[2537]: E0120 00:33:36.639153 2537 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jan 20 00:33:36.668338 kubelet[2537]: I0120 00:33:36.667383 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ba34fc72daeab8ea28e7a418c83523e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ba34fc72daeab8ea28e7a418c83523e\") " pod="kube-system/kube-apiserver-localhost"
Jan 20 00:33:36.668992 kubelet[2537]: I0120 00:33:36.668722 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ba34fc72daeab8ea28e7a418c83523e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ba34fc72daeab8ea28e7a418c83523e\") " pod="kube-system/kube-apiserver-localhost"
Jan 20 00:33:36.668992 kubelet[2537]: I0120 00:33:36.668755 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ba34fc72daeab8ea28e7a418c83523e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1ba34fc72daeab8ea28e7a418c83523e\") " pod="kube-system/kube-apiserver-localhost"
Jan 20 00:33:36.668992 kubelet[2537]: I0120 00:33:36.668886 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 00:33:36.668992 kubelet[2537]: I0120 00:33:36.668929 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost"
Jan 20 00:33:36.669234 kubelet[2537]: I0120 00:33:36.669011 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 00:33:36.669234 kubelet[2537]: I0120 00:33:36.669038 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 00:33:36.669234 kubelet[2537]: I0120 00:33:36.669054 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 00:33:36.669234 kubelet[2537]: I0120 00:33:36.669070 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 00:33:36.670026 kubelet[2537]: I0120 00:33:36.669980 2537 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 20 00:33:36.681796 kubelet[2537]: I0120 00:33:36.681740 2537 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jan 20 00:33:36.681989 kubelet[2537]: I0120 00:33:36.681830 2537 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 20 00:33:36.938608 kubelet[2537]: E0120 00:33:36.938420 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:33:36.940039 kubelet[2537]: E0120 00:33:36.939771 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:33:36.940039 kubelet[2537]: E0120 00:33:36.939966 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:33:37.371496 kubelet[2537]: I0120 00:33:37.371341 2537 apiserver.go:52] "Watching apiserver"
Jan 20 00:33:37.468881 kubelet[2537]: I0120 00:33:37.468188 2537 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 20 00:33:37.543530 kubelet[2537]: E0120 00:33:37.542731 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:33:37.543530 kubelet[2537]: I0120 00:33:37.543201 2537 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 20 00:33:37.543530 kubelet[2537]: E0120 00:33:37.543482 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:33:37.553268 kubelet[2537]: E0120 00:33:37.553009 2537 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 20 00:33:37.553268 kubelet[2537]: E0120 00:33:37.553189 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:33:37.592193 kubelet[2537]: I0120 00:33:37.592086 2537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.59203898 podStartE2EDuration="5.59203898s" podCreationTimestamp="2026-01-20 00:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:33:37.578118847 +0000 UTC m=+1.406571289" watchObservedRunningTime="2026-01-20 00:33:37.59203898 +0000 UTC m=+1.420491461"
Jan 20 00:33:37.608461 kubelet[2537]: I0120 00:33:37.607704 2537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.60757806 podStartE2EDuration="3.60757806s" podCreationTimestamp="2026-01-20 00:33:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:33:37.592271291 +0000 UTC m=+1.420723753" watchObservedRunningTime="2026-01-20 00:33:37.60757806 +0000 UTC m=+1.436030512"
Jan 20 00:33:37.619857 kubelet[2537]: I0120 00:33:37.619795 2537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.619779886 podStartE2EDuration="3.619779886s" podCreationTimestamp="2026-01-20 00:33:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:33:37.607573685 +0000 UTC m=+1.436026147" watchObservedRunningTime="2026-01-20 00:33:37.619779886 +0000 UTC m=+1.448232327"
Jan 20 00:33:37.662548 sudo[1608]: pam_unix(sudo:session): session closed for user root
Jan 20 00:33:37.666269 sshd[1605]: pam_unix(sshd:session): session closed for user core
Jan 20 00:33:37.671908 systemd[1]: sshd@6-10.0.0.16:22-10.0.0.1:43916.service: Deactivated successfully.
Jan 20 00:33:37.674416 systemd[1]: session-7.scope: Deactivated successfully.
Jan 20 00:33:37.674696 systemd[1]: session-7.scope: Consumed 8.524s CPU time, 165.1M memory peak, 0B memory swap peak. Jan 20 00:33:37.675518 systemd-logind[1436]: Session 7 logged out. Waiting for processes to exit. Jan 20 00:33:37.677310 systemd-logind[1436]: Removed session 7. Jan 20 00:33:38.599759 kubelet[2537]: E0120 00:33:38.598760 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:38.601279 kubelet[2537]: E0120 00:33:38.600941 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:39.595953 kubelet[2537]: E0120 00:33:39.595775 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:39.595953 kubelet[2537]: E0120 00:33:39.595866 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:40.598281 kubelet[2537]: E0120 00:33:40.598164 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:41.571535 kubelet[2537]: I0120 00:33:41.571329 2537 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 00:33:41.572561 containerd[1456]: time="2026-01-20T00:33:41.572250095Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 20 00:33:41.573431 kubelet[2537]: I0120 00:33:41.573203 2537 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 00:33:42.572089 systemd[1]: Created slice kubepods-burstable-poda753a1a6_e579_4c00_8b77_2d72bd0ffbc5.slice - libcontainer container kubepods-burstable-poda753a1a6_e579_4c00_8b77_2d72bd0ffbc5.slice. Jan 20 00:33:42.585180 systemd[1]: Created slice kubepods-besteffort-podc9c13cdb_e2cf_467b_996d_6d8cbe8ad38a.slice - libcontainer container kubepods-besteffort-podc9c13cdb_e2cf_467b_996d_6d8cbe8ad38a.slice. Jan 20 00:33:42.747855 kubelet[2537]: I0120 00:33:42.747210 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a753a1a6-e579-4c00-8b77-2d72bd0ffbc5-run\") pod \"kube-flannel-ds-vp4qx\" (UID: \"a753a1a6-e579-4c00-8b77-2d72bd0ffbc5\") " pod="kube-flannel/kube-flannel-ds-vp4qx" Jan 20 00:33:42.747855 kubelet[2537]: I0120 00:33:42.747784 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/a753a1a6-e579-4c00-8b77-2d72bd0ffbc5-cni-plugin\") pod \"kube-flannel-ds-vp4qx\" (UID: \"a753a1a6-e579-4c00-8b77-2d72bd0ffbc5\") " pod="kube-flannel/kube-flannel-ds-vp4qx" Jan 20 00:33:42.747855 kubelet[2537]: I0120 00:33:42.747874 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/a753a1a6-e579-4c00-8b77-2d72bd0ffbc5-flannel-cfg\") pod \"kube-flannel-ds-vp4qx\" (UID: \"a753a1a6-e579-4c00-8b77-2d72bd0ffbc5\") " pod="kube-flannel/kube-flannel-ds-vp4qx" Jan 20 00:33:42.747855 kubelet[2537]: I0120 00:33:42.747904 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a753a1a6-e579-4c00-8b77-2d72bd0ffbc5-xtables-lock\") pod 
\"kube-flannel-ds-vp4qx\" (UID: \"a753a1a6-e579-4c00-8b77-2d72bd0ffbc5\") " pod="kube-flannel/kube-flannel-ds-vp4qx" Jan 20 00:33:42.747855 kubelet[2537]: I0120 00:33:42.747925 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9c13cdb-e2cf-467b-996d-6d8cbe8ad38a-xtables-lock\") pod \"kube-proxy-n2thm\" (UID: \"c9c13cdb-e2cf-467b-996d-6d8cbe8ad38a\") " pod="kube-system/kube-proxy-n2thm" Jan 20 00:33:42.751559 kubelet[2537]: I0120 00:33:42.747941 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9c13cdb-e2cf-467b-996d-6d8cbe8ad38a-lib-modules\") pod \"kube-proxy-n2thm\" (UID: \"c9c13cdb-e2cf-467b-996d-6d8cbe8ad38a\") " pod="kube-system/kube-proxy-n2thm" Jan 20 00:33:42.751559 kubelet[2537]: I0120 00:33:42.748090 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/a753a1a6-e579-4c00-8b77-2d72bd0ffbc5-cni\") pod \"kube-flannel-ds-vp4qx\" (UID: \"a753a1a6-e579-4c00-8b77-2d72bd0ffbc5\") " pod="kube-flannel/kube-flannel-ds-vp4qx" Jan 20 00:33:42.751559 kubelet[2537]: I0120 00:33:42.748110 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9vrq\" (UniqueName: \"kubernetes.io/projected/a753a1a6-e579-4c00-8b77-2d72bd0ffbc5-kube-api-access-w9vrq\") pod \"kube-flannel-ds-vp4qx\" (UID: \"a753a1a6-e579-4c00-8b77-2d72bd0ffbc5\") " pod="kube-flannel/kube-flannel-ds-vp4qx" Jan 20 00:33:42.751559 kubelet[2537]: I0120 00:33:42.748126 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c9c13cdb-e2cf-467b-996d-6d8cbe8ad38a-kube-proxy\") pod \"kube-proxy-n2thm\" (UID: \"c9c13cdb-e2cf-467b-996d-6d8cbe8ad38a\") " 
pod="kube-system/kube-proxy-n2thm" Jan 20 00:33:42.751559 kubelet[2537]: I0120 00:33:42.748149 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkhbd\" (UniqueName: \"kubernetes.io/projected/c9c13cdb-e2cf-467b-996d-6d8cbe8ad38a-kube-api-access-hkhbd\") pod \"kube-proxy-n2thm\" (UID: \"c9c13cdb-e2cf-467b-996d-6d8cbe8ad38a\") " pod="kube-system/kube-proxy-n2thm" Jan 20 00:33:42.841527 kubelet[2537]: E0120 00:33:42.841327 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:42.880288 kubelet[2537]: E0120 00:33:42.880183 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:42.881939 containerd[1456]: time="2026-01-20T00:33:42.881126761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-vp4qx,Uid:a753a1a6-e579-4c00-8b77-2d72bd0ffbc5,Namespace:kube-flannel,Attempt:0,}" Jan 20 00:33:42.895401 kubelet[2537]: E0120 00:33:42.895333 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:42.895991 containerd[1456]: time="2026-01-20T00:33:42.895913328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n2thm,Uid:c9c13cdb-e2cf-467b-996d-6d8cbe8ad38a,Namespace:kube-system,Attempt:0,}" Jan 20 00:33:42.941756 containerd[1456]: time="2026-01-20T00:33:42.941483049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:33:42.941756 containerd[1456]: time="2026-01-20T00:33:42.941562737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:33:42.941756 containerd[1456]: time="2026-01-20T00:33:42.941582584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:42.944379 containerd[1456]: time="2026-01-20T00:33:42.944209550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:42.973084 containerd[1456]: time="2026-01-20T00:33:42.971559352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:33:42.973084 containerd[1456]: time="2026-01-20T00:33:42.972804866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:33:42.973084 containerd[1456]: time="2026-01-20T00:33:42.972826767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:42.973084 containerd[1456]: time="2026-01-20T00:33:42.972960426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:42.993898 systemd[1]: Started cri-containerd-8f725990e18c02b0863301c7f2b0cbd44690ea19e0ed1ad0c9bd74e4a89e5fe3.scope - libcontainer container 8f725990e18c02b0863301c7f2b0cbd44690ea19e0ed1ad0c9bd74e4a89e5fe3. Jan 20 00:33:43.012847 systemd[1]: Started cri-containerd-b4a3c2ae4da1764a58fe2f232e5af65ff323e949765c8ebda1657de6f1644b60.scope - libcontainer container b4a3c2ae4da1764a58fe2f232e5af65ff323e949765c8ebda1657de6f1644b60. 
Jan 20 00:33:43.060582 containerd[1456]: time="2026-01-20T00:33:43.060499622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n2thm,Uid:c9c13cdb-e2cf-467b-996d-6d8cbe8ad38a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4a3c2ae4da1764a58fe2f232e5af65ff323e949765c8ebda1657de6f1644b60\"" Jan 20 00:33:43.062052 kubelet[2537]: E0120 00:33:43.061516 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:43.067705 containerd[1456]: time="2026-01-20T00:33:43.067560475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-vp4qx,Uid:a753a1a6-e579-4c00-8b77-2d72bd0ffbc5,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"8f725990e18c02b0863301c7f2b0cbd44690ea19e0ed1ad0c9bd74e4a89e5fe3\"" Jan 20 00:33:43.068771 kubelet[2537]: E0120 00:33:43.068747 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:43.070366 containerd[1456]: time="2026-01-20T00:33:43.070250745Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Jan 20 00:33:43.073867 containerd[1456]: time="2026-01-20T00:33:43.073707527Z" level=info msg="CreateContainer within sandbox \"b4a3c2ae4da1764a58fe2f232e5af65ff323e949765c8ebda1657de6f1644b60\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 00:33:43.099008 containerd[1456]: time="2026-01-20T00:33:43.098840863Z" level=info msg="CreateContainer within sandbox \"b4a3c2ae4da1764a58fe2f232e5af65ff323e949765c8ebda1657de6f1644b60\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8c904c049044813f6f71fd5e4690b2b7839f584cf72a8d9c81b06a4c46d09052\"" Jan 20 00:33:43.101376 containerd[1456]: time="2026-01-20T00:33:43.100121021Z" level=info msg="StartContainer for 
\"8c904c049044813f6f71fd5e4690b2b7839f584cf72a8d9c81b06a4c46d09052\"" Jan 20 00:33:43.148874 systemd[1]: Started cri-containerd-8c904c049044813f6f71fd5e4690b2b7839f584cf72a8d9c81b06a4c46d09052.scope - libcontainer container 8c904c049044813f6f71fd5e4690b2b7839f584cf72a8d9c81b06a4c46d09052. Jan 20 00:33:43.185719 containerd[1456]: time="2026-01-20T00:33:43.185680695Z" level=info msg="StartContainer for \"8c904c049044813f6f71fd5e4690b2b7839f584cf72a8d9c81b06a4c46d09052\" returns successfully" Jan 20 00:33:43.627899 kubelet[2537]: E0120 00:33:43.627790 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:43.628318 kubelet[2537]: E0120 00:33:43.628277 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:44.379591 kubelet[2537]: I0120 00:33:44.379514 2537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n2thm" podStartSLOduration=2.379456382 podStartE2EDuration="2.379456382s" podCreationTimestamp="2026-01-20 00:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:33:44.377189517 +0000 UTC m=+8.205641959" watchObservedRunningTime="2026-01-20 00:33:44.379456382 +0000 UTC m=+8.207908824" Jan 20 00:33:44.516276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4046547674.mount: Deactivated successfully. 
Jan 20 00:33:44.585547 containerd[1456]: time="2026-01-20T00:33:44.585445979Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:44.586426 containerd[1456]: time="2026-01-20T00:33:44.586376058Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008" Jan 20 00:33:44.588092 containerd[1456]: time="2026-01-20T00:33:44.588016530Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:44.591200 containerd[1456]: time="2026-01-20T00:33:44.591139918Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:44.592535 containerd[1456]: time="2026-01-20T00:33:44.592466924Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 1.522169883s" Jan 20 00:33:44.592535 containerd[1456]: time="2026-01-20T00:33:44.592505696Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Jan 20 00:33:44.598812 containerd[1456]: time="2026-01-20T00:33:44.598751781Z" level=info msg="CreateContainer within sandbox \"8f725990e18c02b0863301c7f2b0cbd44690ea19e0ed1ad0c9bd74e4a89e5fe3\" for container 
&ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 20 00:33:44.615845 containerd[1456]: time="2026-01-20T00:33:44.615757147Z" level=info msg="CreateContainer within sandbox \"8f725990e18c02b0863301c7f2b0cbd44690ea19e0ed1ad0c9bd74e4a89e5fe3\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"2ea22952a7bccd4cd64fc460a98b09d9303529a2e59bed35f8e91de49112a727\"" Jan 20 00:33:44.616479 containerd[1456]: time="2026-01-20T00:33:44.616417840Z" level=info msg="StartContainer for \"2ea22952a7bccd4cd64fc460a98b09d9303529a2e59bed35f8e91de49112a727\"" Jan 20 00:33:44.685903 systemd[1]: Started cri-containerd-2ea22952a7bccd4cd64fc460a98b09d9303529a2e59bed35f8e91de49112a727.scope - libcontainer container 2ea22952a7bccd4cd64fc460a98b09d9303529a2e59bed35f8e91de49112a727. Jan 20 00:33:44.732726 containerd[1456]: time="2026-01-20T00:33:44.732249826Z" level=info msg="StartContainer for \"2ea22952a7bccd4cd64fc460a98b09d9303529a2e59bed35f8e91de49112a727\" returns successfully" Jan 20 00:33:44.733416 systemd[1]: cri-containerd-2ea22952a7bccd4cd64fc460a98b09d9303529a2e59bed35f8e91de49112a727.scope: Deactivated successfully. 
Jan 20 00:33:44.802737 containerd[1456]: time="2026-01-20T00:33:44.802552312Z" level=info msg="shim disconnected" id=2ea22952a7bccd4cd64fc460a98b09d9303529a2e59bed35f8e91de49112a727 namespace=k8s.io Jan 20 00:33:44.802737 containerd[1456]: time="2026-01-20T00:33:44.802691191Z" level=warning msg="cleaning up after shim disconnected" id=2ea22952a7bccd4cd64fc460a98b09d9303529a2e59bed35f8e91de49112a727 namespace=k8s.io Jan 20 00:33:44.802737 containerd[1456]: time="2026-01-20T00:33:44.802708043Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:33:45.647379 kubelet[2537]: E0120 00:33:45.647287 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:45.649354 containerd[1456]: time="2026-01-20T00:33:45.649243177Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Jan 20 00:33:48.592002 containerd[1456]: time="2026-01-20T00:33:48.591912876Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:48.593212 containerd[1456]: time="2026-01-20T00:33:48.593107594Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574" Jan 20 00:33:48.594723 containerd[1456]: time="2026-01-20T00:33:48.594563373Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:48.600447 containerd[1456]: time="2026-01-20T00:33:48.600355116Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:48.602169 containerd[1456]: time="2026-01-20T00:33:48.602136806Z" level=info msg="Pulled image 
\"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 2.952827856s" Jan 20 00:33:48.602169 containerd[1456]: time="2026-01-20T00:33:48.602167533Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Jan 20 00:33:48.609076 containerd[1456]: time="2026-01-20T00:33:48.608970630Z" level=info msg="CreateContainer within sandbox \"8f725990e18c02b0863301c7f2b0cbd44690ea19e0ed1ad0c9bd74e4a89e5fe3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 00:33:48.633340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2479271589.mount: Deactivated successfully. Jan 20 00:33:48.635685 containerd[1456]: time="2026-01-20T00:33:48.635536914Z" level=info msg="CreateContainer within sandbox \"8f725990e18c02b0863301c7f2b0cbd44690ea19e0ed1ad0c9bd74e4a89e5fe3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"579934eca7f45c7999ddc89c8f47c7e3cea84b8b44fb90fbf44a0b82f12c210f\"" Jan 20 00:33:48.636324 containerd[1456]: time="2026-01-20T00:33:48.636273721Z" level=info msg="StartContainer for \"579934eca7f45c7999ddc89c8f47c7e3cea84b8b44fb90fbf44a0b82f12c210f\"" Jan 20 00:33:48.691875 systemd[1]: Started cri-containerd-579934eca7f45c7999ddc89c8f47c7e3cea84b8b44fb90fbf44a0b82f12c210f.scope - libcontainer container 579934eca7f45c7999ddc89c8f47c7e3cea84b8b44fb90fbf44a0b82f12c210f. Jan 20 00:33:48.745017 systemd[1]: cri-containerd-579934eca7f45c7999ddc89c8f47c7e3cea84b8b44fb90fbf44a0b82f12c210f.scope: Deactivated successfully. 
Jan 20 00:33:48.795160 kubelet[2537]: I0120 00:33:48.795092 2537 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 00:33:48.801323 containerd[1456]: time="2026-01-20T00:33:48.801260046Z" level=info msg="StartContainer for \"579934eca7f45c7999ddc89c8f47c7e3cea84b8b44fb90fbf44a0b82f12c210f\" returns successfully" Jan 20 00:33:48.834102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-579934eca7f45c7999ddc89c8f47c7e3cea84b8b44fb90fbf44a0b82f12c210f-rootfs.mount: Deactivated successfully. Jan 20 00:33:48.839314 containerd[1456]: time="2026-01-20T00:33:48.839236233Z" level=info msg="shim disconnected" id=579934eca7f45c7999ddc89c8f47c7e3cea84b8b44fb90fbf44a0b82f12c210f namespace=k8s.io Jan 20 00:33:48.839439 containerd[1456]: time="2026-01-20T00:33:48.839316884Z" level=warning msg="cleaning up after shim disconnected" id=579934eca7f45c7999ddc89c8f47c7e3cea84b8b44fb90fbf44a0b82f12c210f namespace=k8s.io Jan 20 00:33:48.839439 containerd[1456]: time="2026-01-20T00:33:48.839334546Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:33:48.859863 systemd[1]: Created slice kubepods-burstable-pod6fbab862_bce8_4e57_805f_da3b742c9d0b.slice - libcontainer container kubepods-burstable-pod6fbab862_bce8_4e57_805f_da3b742c9d0b.slice. Jan 20 00:33:48.868006 systemd[1]: Created slice kubepods-burstable-pod00e64071_55b1_4f78_bf91_c285a0250c71.slice - libcontainer container kubepods-burstable-pod00e64071_55b1_4f78_bf91_c285a0250c71.slice. 
Jan 20 00:33:48.970202 kubelet[2537]: I0120 00:33:48.969112 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc4jf\" (UniqueName: \"kubernetes.io/projected/6fbab862-bce8-4e57-805f-da3b742c9d0b-kube-api-access-qc4jf\") pod \"coredns-674b8bbfcf-t2g4l\" (UID: \"6fbab862-bce8-4e57-805f-da3b742c9d0b\") " pod="kube-system/coredns-674b8bbfcf-t2g4l" Jan 20 00:33:48.970202 kubelet[2537]: I0120 00:33:48.969814 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fbab862-bce8-4e57-805f-da3b742c9d0b-config-volume\") pod \"coredns-674b8bbfcf-t2g4l\" (UID: \"6fbab862-bce8-4e57-805f-da3b742c9d0b\") " pod="kube-system/coredns-674b8bbfcf-t2g4l" Jan 20 00:33:48.970202 kubelet[2537]: I0120 00:33:48.969903 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00e64071-55b1-4f78-bf91-c285a0250c71-config-volume\") pod \"coredns-674b8bbfcf-7f2cx\" (UID: \"00e64071-55b1-4f78-bf91-c285a0250c71\") " pod="kube-system/coredns-674b8bbfcf-7f2cx" Jan 20 00:33:48.970202 kubelet[2537]: I0120 00:33:48.969919 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtkfv\" (UniqueName: \"kubernetes.io/projected/00e64071-55b1-4f78-bf91-c285a0250c71-kube-api-access-jtkfv\") pod \"coredns-674b8bbfcf-7f2cx\" (UID: \"00e64071-55b1-4f78-bf91-c285a0250c71\") " pod="kube-system/coredns-674b8bbfcf-7f2cx" Jan 20 00:33:49.167580 kubelet[2537]: E0120 00:33:49.167227 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:49.171220 containerd[1456]: time="2026-01-20T00:33:49.171106685Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-t2g4l,Uid:6fbab862-bce8-4e57-805f-da3b742c9d0b,Namespace:kube-system,Attempt:0,}" Jan 20 00:33:49.173699 kubelet[2537]: E0120 00:33:49.173520 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:49.174313 containerd[1456]: time="2026-01-20T00:33:49.174261299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7f2cx,Uid:00e64071-55b1-4f78-bf91-c285a0250c71,Namespace:kube-system,Attempt:0,}" Jan 20 00:33:49.261109 containerd[1456]: time="2026-01-20T00:33:49.261024744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7f2cx,Uid:00e64071-55b1-4f78-bf91-c285a0250c71,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"177ac7efd4d25a13a93e85c1277556b2e2a5601fc3d519f4819c71e76301603a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 00:33:49.261765 kubelet[2537]: E0120 00:33:49.261616 2537 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"177ac7efd4d25a13a93e85c1277556b2e2a5601fc3d519f4819c71e76301603a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 00:33:49.263415 containerd[1456]: time="2026-01-20T00:33:49.262189003Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t2g4l,Uid:6fbab862-bce8-4e57-805f-da3b742c9d0b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"59d22c526f65156ceaec5159f0aa800205ee21db8b466cfa3927e99434f193a1\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 00:33:49.263520 
kubelet[2537]: E0120 00:33:49.262295 2537 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"177ac7efd4d25a13a93e85c1277556b2e2a5601fc3d519f4819c71e76301603a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-7f2cx" Jan 20 00:33:49.263520 kubelet[2537]: E0120 00:33:49.262409 2537 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"177ac7efd4d25a13a93e85c1277556b2e2a5601fc3d519f4819c71e76301603a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-7f2cx" Jan 20 00:33:49.263520 kubelet[2537]: E0120 00:33:49.262548 2537 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59d22c526f65156ceaec5159f0aa800205ee21db8b466cfa3927e99434f193a1\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 00:33:49.263520 kubelet[2537]: E0120 00:33:49.262599 2537 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59d22c526f65156ceaec5159f0aa800205ee21db8b466cfa3927e99434f193a1\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-t2g4l" Jan 20 00:33:49.263746 kubelet[2537]: E0120 00:33:49.262711 2537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7f2cx_kube-system(00e64071-55b1-4f78-bf91-c285a0250c71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-7f2cx_kube-system(00e64071-55b1-4f78-bf91-c285a0250c71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"177ac7efd4d25a13a93e85c1277556b2e2a5601fc3d519f4819c71e76301603a\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-7f2cx" podUID="00e64071-55b1-4f78-bf91-c285a0250c71" Jan 20 00:33:49.263746 kubelet[2537]: E0120 00:33:49.262701 2537 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59d22c526f65156ceaec5159f0aa800205ee21db8b466cfa3927e99434f193a1\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-t2g4l" Jan 20 00:33:49.263746 kubelet[2537]: E0120 00:33:49.262805 2537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-t2g4l_kube-system(6fbab862-bce8-4e57-805f-da3b742c9d0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-t2g4l_kube-system(6fbab862-bce8-4e57-805f-da3b742c9d0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59d22c526f65156ceaec5159f0aa800205ee21db8b466cfa3927e99434f193a1\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-t2g4l" podUID="6fbab862-bce8-4e57-805f-da3b742c9d0b" Jan 20 00:33:49.690011 kubelet[2537]: E0120 00:33:49.689900 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:49.699058 containerd[1456]: time="2026-01-20T00:33:49.698974061Z" level=info msg="CreateContainer within sandbox 
\"8f725990e18c02b0863301c7f2b0cbd44690ea19e0ed1ad0c9bd74e4a89e5fe3\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 20 00:33:49.736226 containerd[1456]: time="2026-01-20T00:33:49.736136038Z" level=info msg="CreateContainer within sandbox \"8f725990e18c02b0863301c7f2b0cbd44690ea19e0ed1ad0c9bd74e4a89e5fe3\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"a194bc1b63a312a7f41cd1e76218096f7770eb7721dc4bc733b6393dd00f90c4\"" Jan 20 00:33:49.737213 containerd[1456]: time="2026-01-20T00:33:49.737142712Z" level=info msg="StartContainer for \"a194bc1b63a312a7f41cd1e76218096f7770eb7721dc4bc733b6393dd00f90c4\"" Jan 20 00:33:49.782887 systemd[1]: Started cri-containerd-a194bc1b63a312a7f41cd1e76218096f7770eb7721dc4bc733b6393dd00f90c4.scope - libcontainer container a194bc1b63a312a7f41cd1e76218096f7770eb7721dc4bc733b6393dd00f90c4. Jan 20 00:33:49.819319 containerd[1456]: time="2026-01-20T00:33:49.819239915Z" level=info msg="StartContainer for \"a194bc1b63a312a7f41cd1e76218096f7770eb7721dc4bc733b6393dd00f90c4\" returns successfully" Jan 20 00:33:50.696691 kubelet[2537]: E0120 00:33:50.696538 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:50.714753 kubelet[2537]: I0120 00:33:50.714589 2537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-vp4qx" podStartSLOduration=3.1803850479999998 podStartE2EDuration="8.714564493s" podCreationTimestamp="2026-01-20 00:33:42 +0000 UTC" firstStartedPulling="2026-01-20 00:33:43.069609399 +0000 UTC m=+6.898061840" lastFinishedPulling="2026-01-20 00:33:48.603788833 +0000 UTC m=+12.432241285" observedRunningTime="2026-01-20 00:33:50.713063772 +0000 UTC m=+14.541516224" watchObservedRunningTime="2026-01-20 00:33:50.714564493 +0000 UTC m=+14.543016955" Jan 20 00:33:50.922616 systemd-networkd[1389]: flannel.1: Link 
UP Jan 20 00:33:50.922844 systemd-networkd[1389]: flannel.1: Gained carrier Jan 20 00:33:51.698457 kubelet[2537]: E0120 00:33:51.698364 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:51.983964 systemd-networkd[1389]: flannel.1: Gained IPv6LL Jan 20 00:34:01.536402 kubelet[2537]: E0120 00:34:01.535445 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:01.539493 containerd[1456]: time="2026-01-20T00:34:01.539364320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7f2cx,Uid:00e64071-55b1-4f78-bf91-c285a0250c71,Namespace:kube-system,Attempt:0,}" Jan 20 00:34:01.578044 systemd-networkd[1389]: cni0: Link UP Jan 20 00:34:01.578055 systemd-networkd[1389]: cni0: Gained carrier Jan 20 00:34:01.586224 systemd-networkd[1389]: cni0: Lost carrier Jan 20 00:34:01.590994 systemd-networkd[1389]: veth431a5dc1: Link UP Jan 20 00:34:01.596720 kernel: cni0: port 1(veth431a5dc1) entered blocking state Jan 20 00:34:01.596828 kernel: cni0: port 1(veth431a5dc1) entered disabled state Jan 20 00:34:01.596859 kernel: veth431a5dc1: entered allmulticast mode Jan 20 00:34:01.600514 kernel: veth431a5dc1: entered promiscuous mode Jan 20 00:34:01.600806 kernel: cni0: port 1(veth431a5dc1) entered blocking state Jan 20 00:34:01.605495 kernel: cni0: port 1(veth431a5dc1) entered forwarding state Jan 20 00:34:01.608748 kernel: cni0: port 1(veth431a5dc1) entered disabled state Jan 20 00:34:01.625029 kernel: cni0: port 1(veth431a5dc1) entered blocking state Jan 20 00:34:01.625108 kernel: cni0: port 1(veth431a5dc1) entered forwarding state Jan 20 00:34:01.625085 systemd-networkd[1389]: veth431a5dc1: Gained carrier Jan 20 00:34:01.626232 systemd-networkd[1389]: cni0: Gained carrier Jan 20 
00:34:01.631068 containerd[1456]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000129a0), "name":"cbr0", "type":"bridge"} Jan 20 00:34:01.631068 containerd[1456]: delegateAdd: netconf sent to delegate plugin: Jan 20 00:34:01.682150 containerd[1456]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-20T00:34:01.681749628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:34:01.682150 containerd[1456]: time="2026-01-20T00:34:01.681920948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:34:01.682150 containerd[1456]: time="2026-01-20T00:34:01.681965351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:01.682721 containerd[1456]: time="2026-01-20T00:34:01.682349358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:01.718202 systemd[1]: Started cri-containerd-0f6c4d4d0fc2f25ab73c03df57f758b5a502fa6fecae0bb7884d085bbb853aee.scope - libcontainer container 0f6c4d4d0fc2f25ab73c03df57f758b5a502fa6fecae0bb7884d085bbb853aee. 
Jan 20 00:34:01.751586 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:34:01.784248 containerd[1456]: time="2026-01-20T00:34:01.784098496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7f2cx,Uid:00e64071-55b1-4f78-bf91-c285a0250c71,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f6c4d4d0fc2f25ab73c03df57f758b5a502fa6fecae0bb7884d085bbb853aee\"" Jan 20 00:34:01.785328 kubelet[2537]: E0120 00:34:01.785301 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:01.791292 containerd[1456]: time="2026-01-20T00:34:01.791138514Z" level=info msg="CreateContainer within sandbox \"0f6c4d4d0fc2f25ab73c03df57f758b5a502fa6fecae0bb7884d085bbb853aee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 00:34:01.809096 containerd[1456]: time="2026-01-20T00:34:01.809012249Z" level=info msg="CreateContainer within sandbox \"0f6c4d4d0fc2f25ab73c03df57f758b5a502fa6fecae0bb7884d085bbb853aee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f1fb6bfba5d02d1809d4499e57a648e61e7240067d801ab425e20adf50e7429\"" Jan 20 00:34:01.809727 containerd[1456]: time="2026-01-20T00:34:01.809573610Z" level=info msg="StartContainer for \"5f1fb6bfba5d02d1809d4499e57a648e61e7240067d801ab425e20adf50e7429\"" Jan 20 00:34:01.874052 systemd[1]: Started cri-containerd-5f1fb6bfba5d02d1809d4499e57a648e61e7240067d801ab425e20adf50e7429.scope - libcontainer container 5f1fb6bfba5d02d1809d4499e57a648e61e7240067d801ab425e20adf50e7429. 
Jan 20 00:34:01.915755 containerd[1456]: time="2026-01-20T00:34:01.915623211Z" level=info msg="StartContainer for \"5f1fb6bfba5d02d1809d4499e57a648e61e7240067d801ab425e20adf50e7429\" returns successfully" Jan 20 00:34:02.531067 kubelet[2537]: E0120 00:34:02.530970 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:02.531599 containerd[1456]: time="2026-01-20T00:34:02.531460326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t2g4l,Uid:6fbab862-bce8-4e57-805f-da3b742c9d0b,Namespace:kube-system,Attempt:0,}" Jan 20 00:34:02.563967 systemd-networkd[1389]: vethc193784f: Link UP Jan 20 00:34:02.571489 kernel: cni0: port 2(vethc193784f) entered blocking state Jan 20 00:34:02.571564 kernel: cni0: port 2(vethc193784f) entered disabled state Jan 20 00:34:02.571586 kernel: vethc193784f: entered allmulticast mode Jan 20 00:34:02.575482 kernel: vethc193784f: entered promiscuous mode Jan 20 00:34:02.575553 kernel: cni0: port 2(vethc193784f) entered blocking state Jan 20 00:34:02.575588 kernel: cni0: port 2(vethc193784f) entered forwarding state Jan 20 00:34:02.586184 systemd-networkd[1389]: vethc193784f: Gained carrier Jan 20 00:34:02.589174 containerd[1456]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000112950), "name":"cbr0", "type":"bridge"} Jan 20 00:34:02.589174 containerd[1456]: delegateAdd: netconf sent to delegate plugin: Jan 20 
00:34:02.657257 containerd[1456]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-20T00:34:02.657050134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:34:02.657257 containerd[1456]: time="2026-01-20T00:34:02.657111248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:34:02.657257 containerd[1456]: time="2026-01-20T00:34:02.657125725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:02.657257 containerd[1456]: time="2026-01-20T00:34:02.657247182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:02.695905 systemd[1]: Started cri-containerd-0121320d857cafb45a9fb099c493ab3452a918c78bfbdb398af6b2b9065802aa.scope - libcontainer container 0121320d857cafb45a9fb099c493ab3452a918c78bfbdb398af6b2b9065802aa. 
Jan 20 00:34:02.710345 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:34:02.744852 containerd[1456]: time="2026-01-20T00:34:02.744779635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t2g4l,Uid:6fbab862-bce8-4e57-805f-da3b742c9d0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0121320d857cafb45a9fb099c493ab3452a918c78bfbdb398af6b2b9065802aa\"" Jan 20 00:34:02.745794 kubelet[2537]: E0120 00:34:02.745762 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:02.751451 containerd[1456]: time="2026-01-20T00:34:02.751331751Z" level=info msg="CreateContainer within sandbox \"0121320d857cafb45a9fb099c493ab3452a918c78bfbdb398af6b2b9065802aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 00:34:02.753744 kubelet[2537]: E0120 00:34:02.751925 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:02.796796 kubelet[2537]: I0120 00:34:02.795874 2537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7f2cx" podStartSLOduration=20.79585655 podStartE2EDuration="20.79585655s" podCreationTimestamp="2026-01-20 00:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:34:02.795130098 +0000 UTC m=+26.623582540" watchObservedRunningTime="2026-01-20 00:34:02.79585655 +0000 UTC m=+26.624309012" Jan 20 00:34:02.797255 containerd[1456]: time="2026-01-20T00:34:02.797188823Z" level=info msg="CreateContainer within sandbox \"0121320d857cafb45a9fb099c493ab3452a918c78bfbdb398af6b2b9065802aa\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5bed1e56c4b99575dfd404520238f7cda8e490966f83e9caee824c3b75f3fab8\"" Jan 20 00:34:02.798464 containerd[1456]: time="2026-01-20T00:34:02.798391130Z" level=info msg="StartContainer for \"5bed1e56c4b99575dfd404520238f7cda8e490966f83e9caee824c3b75f3fab8\"" Jan 20 00:34:02.853148 systemd[1]: Started cri-containerd-5bed1e56c4b99575dfd404520238f7cda8e490966f83e9caee824c3b75f3fab8.scope - libcontainer container 5bed1e56c4b99575dfd404520238f7cda8e490966f83e9caee824c3b75f3fab8. Jan 20 00:34:02.890167 containerd[1456]: time="2026-01-20T00:34:02.890042787Z" level=info msg="StartContainer for \"5bed1e56c4b99575dfd404520238f7cda8e490966f83e9caee824c3b75f3fab8\" returns successfully" Jan 20 00:34:02.938152 systemd-networkd[1389]: veth431a5dc1: Gained IPv6LL Jan 20 00:34:03.055999 systemd-networkd[1389]: cni0: Gained IPv6LL Jan 20 00:34:03.761498 kubelet[2537]: E0120 00:34:03.759169 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:03.761498 kubelet[2537]: E0120 00:34:03.759500 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:03.778374 kubelet[2537]: I0120 00:34:03.778258 2537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-t2g4l" podStartSLOduration=21.778238447 podStartE2EDuration="21.778238447s" podCreationTimestamp="2026-01-20 00:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:34:03.777053966 +0000 UTC m=+27.605506409" watchObservedRunningTime="2026-01-20 00:34:03.778238447 +0000 UTC m=+27.606690888" Jan 20 00:34:04.144032 systemd-networkd[1389]: vethc193784f: Gained IPv6LL 
Jan 20 00:34:04.761891 kubelet[2537]: E0120 00:34:04.761805 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:04.762345 kubelet[2537]: E0120 00:34:04.761937 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:05.766139 kubelet[2537]: E0120 00:34:05.766027 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:31.645446 systemd[1]: Started sshd@7-10.0.0.16:22-10.0.0.1:59770.service - OpenSSH per-connection server daemon (10.0.0.1:59770). Jan 20 00:34:31.686815 sshd[3572]: Accepted publickey for core from 10.0.0.1 port 59770 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:34:31.688601 sshd[3572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:34:31.694106 systemd-logind[1436]: New session 8 of user core. Jan 20 00:34:31.703842 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 00:34:31.827980 sshd[3572]: pam_unix(sshd:session): session closed for user core Jan 20 00:34:31.832330 systemd[1]: sshd@7-10.0.0.16:22-10.0.0.1:59770.service: Deactivated successfully. Jan 20 00:34:31.834867 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 00:34:31.835531 systemd-logind[1436]: Session 8 logged out. Waiting for processes to exit. Jan 20 00:34:31.836884 systemd-logind[1436]: Removed session 8. Jan 20 00:34:36.841413 systemd[1]: Started sshd@8-10.0.0.16:22-10.0.0.1:46480.service - OpenSSH per-connection server daemon (10.0.0.1:46480). 
Jan 20 00:34:36.877801 sshd[3610]: Accepted publickey for core from 10.0.0.1 port 46480 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:34:36.879603 sshd[3610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:34:36.884781 systemd-logind[1436]: New session 9 of user core. Jan 20 00:34:36.894885 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 00:34:37.008415 sshd[3610]: pam_unix(sshd:session): session closed for user core Jan 20 00:34:37.013750 systemd[1]: sshd@8-10.0.0.16:22-10.0.0.1:46480.service: Deactivated successfully. Jan 20 00:34:37.016492 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 00:34:37.017761 systemd-logind[1436]: Session 9 logged out. Waiting for processes to exit. Jan 20 00:34:37.019100 systemd-logind[1436]: Removed session 9. Jan 20 00:34:42.022183 systemd[1]: Started sshd@9-10.0.0.16:22-10.0.0.1:46484.service - OpenSSH per-connection server daemon (10.0.0.1:46484). Jan 20 00:34:42.070976 sshd[3645]: Accepted publickey for core from 10.0.0.1 port 46484 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:34:42.073305 sshd[3645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:34:42.078790 systemd-logind[1436]: New session 10 of user core. Jan 20 00:34:42.087946 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 00:34:42.218198 sshd[3645]: pam_unix(sshd:session): session closed for user core Jan 20 00:34:42.223042 systemd[1]: sshd@9-10.0.0.16:22-10.0.0.1:46484.service: Deactivated successfully. Jan 20 00:34:42.226455 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 00:34:42.228144 systemd-logind[1436]: Session 10 logged out. Waiting for processes to exit. Jan 20 00:34:42.231297 systemd-logind[1436]: Removed session 10. 
Jan 20 00:34:46.529018 kubelet[2537]: E0120 00:34:46.528502 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:47.244515 systemd[1]: Started sshd@10-10.0.0.16:22-10.0.0.1:39706.service - OpenSSH per-connection server daemon (10.0.0.1:39706). Jan 20 00:34:47.296486 sshd[3683]: Accepted publickey for core from 10.0.0.1 port 39706 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:34:47.298860 sshd[3683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:34:47.304627 systemd-logind[1436]: New session 11 of user core. Jan 20 00:34:47.313326 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 20 00:34:47.459950 sshd[3683]: pam_unix(sshd:session): session closed for user core Jan 20 00:34:47.470008 systemd[1]: sshd@10-10.0.0.16:22-10.0.0.1:39706.service: Deactivated successfully. Jan 20 00:34:47.472456 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 00:34:47.474574 systemd-logind[1436]: Session 11 logged out. Waiting for processes to exit. Jan 20 00:34:47.481111 systemd[1]: Started sshd@11-10.0.0.16:22-10.0.0.1:39722.service - OpenSSH per-connection server daemon (10.0.0.1:39722). Jan 20 00:34:47.482344 systemd-logind[1436]: Removed session 11. Jan 20 00:34:47.517815 sshd[3698]: Accepted publickey for core from 10.0.0.1 port 39722 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:34:47.519523 sshd[3698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:34:47.525420 systemd-logind[1436]: New session 12 of user core. Jan 20 00:34:47.537907 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 00:34:47.714079 sshd[3698]: pam_unix(sshd:session): session closed for user core Jan 20 00:34:47.727085 systemd[1]: sshd@11-10.0.0.16:22-10.0.0.1:39722.service: Deactivated successfully. 
Jan 20 00:34:47.730540 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 00:34:47.739037 systemd-logind[1436]: Session 12 logged out. Waiting for processes to exit. Jan 20 00:34:47.746163 systemd[1]: Started sshd@12-10.0.0.16:22-10.0.0.1:39734.service - OpenSSH per-connection server daemon (10.0.0.1:39734). Jan 20 00:34:47.748138 systemd-logind[1436]: Removed session 12. Jan 20 00:34:47.786356 sshd[3710]: Accepted publickey for core from 10.0.0.1 port 39734 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:34:47.788188 sshd[3710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:34:47.794892 systemd-logind[1436]: New session 13 of user core. Jan 20 00:34:47.801962 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 20 00:34:47.945251 sshd[3710]: pam_unix(sshd:session): session closed for user core Jan 20 00:34:47.949077 systemd[1]: sshd@12-10.0.0.16:22-10.0.0.1:39734.service: Deactivated successfully. Jan 20 00:34:47.951592 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 00:34:47.953809 systemd-logind[1436]: Session 13 logged out. Waiting for processes to exit. Jan 20 00:34:47.955557 systemd-logind[1436]: Removed session 13. Jan 20 00:34:52.962563 systemd[1]: Started sshd@13-10.0.0.16:22-10.0.0.1:52546.service - OpenSSH per-connection server daemon (10.0.0.1:52546). Jan 20 00:34:53.020622 sshd[3765]: Accepted publickey for core from 10.0.0.1 port 52546 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:34:53.023910 sshd[3765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:34:53.034806 systemd-logind[1436]: New session 14 of user core. Jan 20 00:34:53.050018 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 20 00:34:53.201065 sshd[3765]: pam_unix(sshd:session): session closed for user core Jan 20 00:34:53.205481 systemd[1]: sshd@13-10.0.0.16:22-10.0.0.1:52546.service: Deactivated successfully. Jan 20 00:34:53.209056 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 00:34:53.211405 systemd-logind[1436]: Session 14 logged out. Waiting for processes to exit. Jan 20 00:34:53.213399 systemd-logind[1436]: Removed session 14. Jan 20 00:34:55.426463 update_engine[1438]: I20260120 00:34:55.426080 1438 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 20 00:34:55.426463 update_engine[1438]: I20260120 00:34:55.426430 1438 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 20 00:34:55.429370 update_engine[1438]: I20260120 00:34:55.428542 1438 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 20 00:34:55.430230 update_engine[1438]: I20260120 00:34:55.430160 1438 omaha_request_params.cc:62] Current group set to lts Jan 20 00:34:55.431814 update_engine[1438]: I20260120 00:34:55.431746 1438 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 20 00:34:55.431814 update_engine[1438]: I20260120 00:34:55.431787 1438 update_attempter.cc:643] Scheduling an action processor start. 
Jan 20 00:34:55.431814 update_engine[1438]: I20260120 00:34:55.431813 1438 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 00:34:55.431950 update_engine[1438]: I20260120 00:34:55.431854 1438 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 20 00:34:55.431950 update_engine[1438]: I20260120 00:34:55.431926 1438 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 00:34:55.431950 update_engine[1438]: I20260120 00:34:55.431939 1438 omaha_request_action.cc:272] Request: Jan 20 00:34:55.431950 update_engine[1438]: Jan 20 00:34:55.431950 update_engine[1438]: Jan 20 00:34:55.431950 update_engine[1438]: Jan 20 00:34:55.431950 update_engine[1438]: Jan 20 00:34:55.431950 update_engine[1438]: Jan 20 00:34:55.431950 update_engine[1438]: Jan 20 00:34:55.431950 update_engine[1438]: Jan 20 00:34:55.431950 update_engine[1438]: Jan 20 00:34:55.431950 update_engine[1438]: I20260120 00:34:55.431948 1438 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 00:34:55.433250 locksmithd[1464]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 20 00:34:55.435537 update_engine[1438]: I20260120 00:34:55.435437 1438 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 00:34:55.436117 update_engine[1438]: I20260120 00:34:55.436000 1438 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 00:34:55.450964 update_engine[1438]: E20260120 00:34:55.450845 1438 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 00:34:55.451081 update_engine[1438]: I20260120 00:34:55.451026 1438 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 20 00:34:56.527299 kubelet[2537]: E0120 00:34:56.527187 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:58.216459 systemd[1]: Started sshd@14-10.0.0.16:22-10.0.0.1:52550.service - OpenSSH per-connection server daemon (10.0.0.1:52550). Jan 20 00:34:58.268090 sshd[3800]: Accepted publickey for core from 10.0.0.1 port 52550 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:34:58.270323 sshd[3800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:34:58.276368 systemd-logind[1436]: New session 15 of user core. Jan 20 00:34:58.284952 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 00:34:58.419167 sshd[3800]: pam_unix(sshd:session): session closed for user core Jan 20 00:34:58.430258 systemd[1]: sshd@14-10.0.0.16:22-10.0.0.1:52550.service: Deactivated successfully. Jan 20 00:34:58.432867 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 00:34:58.433902 systemd-logind[1436]: Session 15 logged out. Waiting for processes to exit. Jan 20 00:34:58.435857 systemd-logind[1436]: Removed session 15. Jan 20 00:35:00.531087 kubelet[2537]: E0120 00:35:00.531012 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:03.433800 systemd[1]: Started sshd@15-10.0.0.16:22-10.0.0.1:51004.service - OpenSSH per-connection server daemon (10.0.0.1:51004). 
Jan 20 00:35:03.471566 sshd[3835]: Accepted publickey for core from 10.0.0.1 port 51004 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:35:03.473933 sshd[3835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:03.479728 systemd-logind[1436]: New session 16 of user core. Jan 20 00:35:03.492939 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 00:35:03.626041 sshd[3835]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:03.638896 systemd[1]: sshd@15-10.0.0.16:22-10.0.0.1:51004.service: Deactivated successfully. Jan 20 00:35:03.641428 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 00:35:03.643543 systemd-logind[1436]: Session 16 logged out. Waiting for processes to exit. Jan 20 00:35:03.652171 systemd[1]: Started sshd@16-10.0.0.16:22-10.0.0.1:51014.service - OpenSSH per-connection server daemon (10.0.0.1:51014). Jan 20 00:35:03.653322 systemd-logind[1436]: Removed session 16. Jan 20 00:35:03.686487 sshd[3850]: Accepted publickey for core from 10.0.0.1 port 51014 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:35:03.688514 sshd[3850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:03.694149 systemd-logind[1436]: New session 17 of user core. Jan 20 00:35:03.705896 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 00:35:03.960631 sshd[3850]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:03.970440 systemd[1]: sshd@16-10.0.0.16:22-10.0.0.1:51014.service: Deactivated successfully. Jan 20 00:35:03.973391 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 00:35:03.976518 systemd-logind[1436]: Session 17 logged out. Waiting for processes to exit. Jan 20 00:35:03.984102 systemd[1]: Started sshd@17-10.0.0.16:22-10.0.0.1:51026.service - OpenSSH per-connection server daemon (10.0.0.1:51026). 
Jan 20 00:35:03.985161 systemd-logind[1436]: Removed session 17. Jan 20 00:35:04.018376 sshd[3863]: Accepted publickey for core from 10.0.0.1 port 51026 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:35:04.020752 sshd[3863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:04.026753 systemd-logind[1436]: New session 18 of user core. Jan 20 00:35:04.036856 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 20 00:35:04.639867 sshd[3863]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:04.650262 systemd[1]: sshd@17-10.0.0.16:22-10.0.0.1:51026.service: Deactivated successfully. Jan 20 00:35:04.653933 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 00:35:04.656976 systemd-logind[1436]: Session 18 logged out. Waiting for processes to exit. Jan 20 00:35:04.666119 systemd[1]: Started sshd@18-10.0.0.16:22-10.0.0.1:51036.service - OpenSSH per-connection server daemon (10.0.0.1:51036). Jan 20 00:35:04.666623 systemd-logind[1436]: Removed session 18. Jan 20 00:35:04.706835 sshd[3884]: Accepted publickey for core from 10.0.0.1 port 51036 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:35:04.708391 sshd[3884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:04.714190 systemd-logind[1436]: New session 19 of user core. Jan 20 00:35:04.723056 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 00:35:04.932362 sshd[3884]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:04.943877 systemd[1]: sshd@18-10.0.0.16:22-10.0.0.1:51036.service: Deactivated successfully. Jan 20 00:35:04.946040 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 00:35:04.947644 systemd-logind[1436]: Session 19 logged out. Waiting for processes to exit. 
Jan 20 00:35:04.956961 systemd[1]: Started sshd@19-10.0.0.16:22-10.0.0.1:51052.service - OpenSSH per-connection server daemon (10.0.0.1:51052). Jan 20 00:35:04.957910 systemd-logind[1436]: Removed session 19. Jan 20 00:35:04.986775 sshd[3896]: Accepted publickey for core from 10.0.0.1 port 51052 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:35:04.988523 sshd[3896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:04.994215 systemd-logind[1436]: New session 20 of user core. Jan 20 00:35:05.002839 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 00:35:05.120961 sshd[3896]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:05.125865 systemd[1]: sshd@19-10.0.0.16:22-10.0.0.1:51052.service: Deactivated successfully. Jan 20 00:35:05.128484 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 00:35:05.129637 systemd-logind[1436]: Session 20 logged out. Waiting for processes to exit. Jan 20 00:35:05.131425 systemd-logind[1436]: Removed session 20. Jan 20 00:35:05.421027 update_engine[1438]: I20260120 00:35:05.420909 1438 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 00:35:05.421616 update_engine[1438]: I20260120 00:35:05.421267 1438 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 00:35:05.421616 update_engine[1438]: I20260120 00:35:05.421537 1438 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 00:35:05.439417 update_engine[1438]: E20260120 00:35:05.439342 1438 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 00:35:05.439417 update_engine[1438]: I20260120 00:35:05.439412 1438 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 20 00:35:05.527961 kubelet[2537]: E0120 00:35:05.527891 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:08.529300 kubelet[2537]: E0120 00:35:08.529024 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:10.135602 systemd[1]: Started sshd@20-10.0.0.16:22-10.0.0.1:51064.service - OpenSSH per-connection server daemon (10.0.0.1:51064). Jan 20 00:35:10.172864 sshd[3930]: Accepted publickey for core from 10.0.0.1 port 51064 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:35:10.174883 sshd[3930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:10.180019 systemd-logind[1436]: New session 21 of user core. Jan 20 00:35:10.191961 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 00:35:10.314010 sshd[3930]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:10.318962 systemd[1]: sshd@20-10.0.0.16:22-10.0.0.1:51064.service: Deactivated successfully. Jan 20 00:35:10.321488 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 00:35:10.322744 systemd-logind[1436]: Session 21 logged out. Waiting for processes to exit. Jan 20 00:35:10.324841 systemd-logind[1436]: Removed session 21. Jan 20 00:35:15.343335 systemd[1]: Started sshd@21-10.0.0.16:22-10.0.0.1:35156.service - OpenSSH per-connection server daemon (10.0.0.1:35156). 
Jan 20 00:35:15.381254 sshd[3969]: Accepted publickey for core from 10.0.0.1 port 35156 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc
Jan 20 00:35:15.383996 sshd[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:35:15.389773 systemd-logind[1436]: New session 22 of user core.
Jan 20 00:35:15.407753 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 20 00:35:15.426824 update_engine[1438]: I20260120 00:35:15.424755 1438 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 20 00:35:15.426824 update_engine[1438]: I20260120 00:35:15.426774 1438 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 20 00:35:15.429789 update_engine[1438]: I20260120 00:35:15.427366 1438 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 20 00:35:15.443371 update_engine[1438]: E20260120 00:35:15.443276 1438 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 20 00:35:15.443460 update_engine[1438]: I20260120 00:35:15.443382 1438 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jan 20 00:35:15.578583 sshd[3969]: pam_unix(sshd:session): session closed for user core
Jan 20 00:35:15.584181 systemd[1]: sshd@21-10.0.0.16:22-10.0.0.1:35156.service: Deactivated successfully.
Jan 20 00:35:15.586956 systemd[1]: session-22.scope: Deactivated successfully.
Jan 20 00:35:15.587868 systemd-logind[1436]: Session 22 logged out. Waiting for processes to exit.
Jan 20 00:35:15.589473 systemd-logind[1436]: Removed session 22.
Jan 20 00:35:18.528085 kubelet[2537]: E0120 00:35:18.528002 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:35:20.593923 systemd[1]: Started sshd@22-10.0.0.16:22-10.0.0.1:35172.service - OpenSSH per-connection server daemon (10.0.0.1:35172).
Jan 20 00:35:20.634364 sshd[4003]: Accepted publickey for core from 10.0.0.1 port 35172 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc
Jan 20 00:35:20.636231 sshd[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:35:20.641240 systemd-logind[1436]: New session 23 of user core.
Jan 20 00:35:20.655840 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 20 00:35:20.767958 sshd[4003]: pam_unix(sshd:session): session closed for user core
Jan 20 00:35:20.772890 systemd[1]: sshd@22-10.0.0.16:22-10.0.0.1:35172.service: Deactivated successfully.
Jan 20 00:35:20.776147 systemd[1]: session-23.scope: Deactivated successfully.
Jan 20 00:35:20.777393 systemd-logind[1436]: Session 23 logged out. Waiting for processes to exit.
Jan 20 00:35:20.778983 systemd-logind[1436]: Removed session 23.
Jan 20 00:35:22.531621 kubelet[2537]: E0120 00:35:22.531510 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:35:25.425564 update_engine[1438]: I20260120 00:35:25.425261 1438 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 20 00:35:25.426238 update_engine[1438]: I20260120 00:35:25.426075 1438 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 20 00:35:25.426527 update_engine[1438]: I20260120 00:35:25.426420 1438 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 20 00:35:25.441307 update_engine[1438]: E20260120 00:35:25.441176 1438 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 20 00:35:25.441307 update_engine[1438]: I20260120 00:35:25.441296 1438 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 20 00:35:25.441307 update_engine[1438]: I20260120 00:35:25.441309 1438 omaha_request_action.cc:617] Omaha request response:
Jan 20 00:35:25.442069 update_engine[1438]: E20260120 00:35:25.441635 1438 omaha_request_action.cc:636] Omaha request network transfer failed.
Jan 20 00:35:25.442069 update_engine[1438]: I20260120 00:35:25.442027 1438 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jan 20 00:35:25.442069 update_engine[1438]: I20260120 00:35:25.442039 1438 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 20 00:35:25.442069 update_engine[1438]: I20260120 00:35:25.442049 1438 update_attempter.cc:306] Processing Done.
Jan 20 00:35:25.442188 update_engine[1438]: E20260120 00:35:25.442121 1438 update_attempter.cc:619] Update failed.
Jan 20 00:35:25.442188 update_engine[1438]: I20260120 00:35:25.442132 1438 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jan 20 00:35:25.442188 update_engine[1438]: I20260120 00:35:25.442140 1438 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jan 20 00:35:25.442188 update_engine[1438]: I20260120 00:35:25.442148 1438 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jan 20 00:35:25.442275 update_engine[1438]: I20260120 00:35:25.442234 1438 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 20 00:35:25.442275 update_engine[1438]: I20260120 00:35:25.442260 1438 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 20 00:35:25.442275 update_engine[1438]: I20260120 00:35:25.442268 1438 omaha_request_action.cc:272] Request:
Jan 20 00:35:25.442275 update_engine[1438]:
Jan 20 00:35:25.442275 update_engine[1438]:
Jan 20 00:35:25.442275 update_engine[1438]:
Jan 20 00:35:25.442275 update_engine[1438]:
Jan 20 00:35:25.442275 update_engine[1438]:
Jan 20 00:35:25.442275 update_engine[1438]:
Jan 20 00:35:25.442435 update_engine[1438]: I20260120 00:35:25.442277 1438 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 20 00:35:25.442563 update_engine[1438]: I20260120 00:35:25.442510 1438 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 20 00:35:25.442930 update_engine[1438]: I20260120 00:35:25.442852 1438 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 20 00:35:25.443180 locksmithd[1464]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jan 20 00:35:25.457345 update_engine[1438]: E20260120 00:35:25.457211 1438 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 20 00:35:25.457498 update_engine[1438]: I20260120 00:35:25.457345 1438 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 20 00:35:25.457498 update_engine[1438]: I20260120 00:35:25.457368 1438 omaha_request_action.cc:617] Omaha request response:
Jan 20 00:35:25.457498 update_engine[1438]: I20260120 00:35:25.457384 1438 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 20 00:35:25.457498 update_engine[1438]: I20260120 00:35:25.457396 1438 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 20 00:35:25.457498 update_engine[1438]: I20260120 00:35:25.457407 1438 update_attempter.cc:306] Processing Done.
Jan 20 00:35:25.457498 update_engine[1438]: I20260120 00:35:25.457422 1438 update_attempter.cc:310] Error event sent.
Jan 20 00:35:25.457498 update_engine[1438]: I20260120 00:35:25.457440 1438 update_check_scheduler.cc:74] Next update check in 48m31s
Jan 20 00:35:25.458255 locksmithd[1464]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jan 20 00:35:25.782441 systemd[1]: Started sshd@23-10.0.0.16:22-10.0.0.1:58374.service - OpenSSH per-connection server daemon (10.0.0.1:58374).
Jan 20 00:35:25.836255 sshd[4037]: Accepted publickey for core from 10.0.0.1 port 58374 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc
Jan 20 00:35:25.855184 sshd[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:35:25.862507 systemd-logind[1436]: New session 24 of user core.
Jan 20 00:35:25.872026 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 20 00:35:26.017517 sshd[4037]: pam_unix(sshd:session): session closed for user core
Jan 20 00:35:26.023351 systemd[1]: sshd@23-10.0.0.16:22-10.0.0.1:58374.service: Deactivated successfully.
Jan 20 00:35:26.028174 systemd[1]: session-24.scope: Deactivated successfully.
Jan 20 00:35:26.030318 systemd-logind[1436]: Session 24 logged out. Waiting for processes to exit.
Jan 20 00:35:26.033494 systemd-logind[1436]: Removed session 24.