Apr 13 22:53:07.695464 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026 Apr 13 22:53:07.695595 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 22:53:07.695614 kernel: BIOS-provided physical RAM map: Apr 13 22:53:07.695623 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 13 22:53:07.695631 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Apr 13 22:53:07.695639 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Apr 13 22:53:07.695648 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Apr 13 22:53:07.695655 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Apr 13 22:53:07.695664 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Apr 13 22:53:07.695672 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Apr 13 22:53:07.695684 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Apr 13 22:53:07.695692 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Apr 13 22:53:07.695724 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Apr 13 22:53:07.695733 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Apr 13 22:53:07.695758 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Apr 13 22:53:07.695767 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Apr 13 22:53:07.695780 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Apr 
13 22:53:07.695788 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Apr 13 22:53:07.695797 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Apr 13 22:53:07.695806 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 13 22:53:07.695815 kernel: NX (Execute Disable) protection: active Apr 13 22:53:07.695823 kernel: APIC: Static calls initialized Apr 13 22:53:07.695832 kernel: efi: EFI v2.7 by EDK II Apr 13 22:53:07.695840 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Apr 13 22:53:07.695848 kernel: SMBIOS 2.8 present. Apr 13 22:53:07.695857 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Apr 13 22:53:07.695864 kernel: Hypervisor detected: KVM Apr 13 22:53:07.695876 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 13 22:53:07.695885 kernel: kvm-clock: using sched offset of 12693413593 cycles Apr 13 22:53:07.695895 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 13 22:53:07.695905 kernel: tsc: Detected 2793.438 MHz processor Apr 13 22:53:07.695914 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 13 22:53:07.695924 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 13 22:53:07.695933 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000 Apr 13 22:53:07.695942 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Apr 13 22:53:07.695950 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 13 22:53:07.695962 kernel: Using GB pages for direct mapping Apr 13 22:53:07.695971 kernel: Secure boot disabled Apr 13 22:53:07.695980 kernel: ACPI: Early table checksum verification disabled Apr 13 22:53:07.695989 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Apr 13 22:53:07.696004 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Apr 13 22:53:07.696014 
kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 22:53:07.696024 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 22:53:07.696037 kernel: ACPI: FACS 0x000000009CBDD000 000040 Apr 13 22:53:07.696087 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 22:53:07.696100 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 22:53:07.696108 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 22:53:07.696150 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 22:53:07.696160 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Apr 13 22:53:07.696170 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Apr 13 22:53:07.696184 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Apr 13 22:53:07.696193 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Apr 13 22:53:07.696203 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Apr 13 22:53:07.696213 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Apr 13 22:53:07.696222 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Apr 13 22:53:07.696231 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Apr 13 22:53:07.696240 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Apr 13 22:53:07.696248 kernel: No NUMA configuration found Apr 13 22:53:07.696274 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Apr 13 22:53:07.696286 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Apr 13 22:53:07.696295 kernel: Zone ranges: Apr 13 22:53:07.696303 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 13 22:53:07.696312 kernel: DMA32 [mem 
0x0000000001000000-0x000000009cf3ffff] Apr 13 22:53:07.696320 kernel: Normal empty Apr 13 22:53:07.696328 kernel: Movable zone start for each node Apr 13 22:53:07.696336 kernel: Early memory node ranges Apr 13 22:53:07.696345 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 13 22:53:07.696354 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Apr 13 22:53:07.696363 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Apr 13 22:53:07.696374 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Apr 13 22:53:07.696383 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Apr 13 22:53:07.696392 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Apr 13 22:53:07.696419 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Apr 13 22:53:07.696430 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 13 22:53:07.696439 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 13 22:53:07.696448 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Apr 13 22:53:07.696458 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 13 22:53:07.696467 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Apr 13 22:53:07.696480 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Apr 13 22:53:07.696489 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Apr 13 22:53:07.696498 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 13 22:53:07.696506 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 13 22:53:07.696516 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 13 22:53:07.696526 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 13 22:53:07.696535 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 13 22:53:07.696543 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 13 22:53:07.696552 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 13 
22:53:07.696564 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 13 22:53:07.696575 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 13 22:53:07.696584 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 13 22:53:07.696593 kernel: TSC deadline timer available Apr 13 22:53:07.696602 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 13 22:53:07.696611 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 13 22:53:07.696619 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 13 22:53:07.696627 kernel: kvm-guest: setup PV sched yield Apr 13 22:53:07.696635 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Apr 13 22:53:07.696645 kernel: Booting paravirtualized kernel on KVM Apr 13 22:53:07.696654 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 13 22:53:07.696662 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Apr 13 22:53:07.696670 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Apr 13 22:53:07.696679 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Apr 13 22:53:07.696687 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 13 22:53:07.696695 kernel: kvm-guest: PV spinlocks enabled Apr 13 22:53:07.696703 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 13 22:53:07.696713 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 22:53:07.696741 kernel: random: crng init done Apr 13 22:53:07.696749 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 13 22:53:07.696758 kernel: Inode-cache hash table 
entries: 262144 (order: 9, 2097152 bytes, linear) Apr 13 22:53:07.696767 kernel: Fallback order for Node 0: 0 Apr 13 22:53:07.696776 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Apr 13 22:53:07.696786 kernel: Policy zone: DMA32 Apr 13 22:53:07.696796 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 13 22:53:07.696805 kernel: Memory: 2394676K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 172120K reserved, 0K cma-reserved) Apr 13 22:53:07.696819 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 13 22:53:07.696829 kernel: ftrace: allocating 37996 entries in 149 pages Apr 13 22:53:07.696838 kernel: ftrace: allocated 149 pages with 4 groups Apr 13 22:53:07.696849 kernel: Dynamic Preempt: voluntary Apr 13 22:53:07.696860 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 13 22:53:07.696879 kernel: rcu: RCU event tracing is enabled. Apr 13 22:53:07.696894 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 13 22:53:07.696906 kernel: Trampoline variant of Tasks RCU enabled. Apr 13 22:53:07.696916 kernel: Rude variant of Tasks RCU enabled. Apr 13 22:53:07.696927 kernel: Tracing variant of Tasks RCU enabled. Apr 13 22:53:07.696938 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 13 22:53:07.696950 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 13 22:53:07.696963 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 13 22:53:07.696974 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 13 22:53:07.696983 kernel: Console: colour dummy device 80x25 Apr 13 22:53:07.696992 kernel: printk: console [ttyS0] enabled Apr 13 22:53:07.697021 kernel: ACPI: Core revision 20230628 Apr 13 22:53:07.697035 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 13 22:53:07.697045 kernel: APIC: Switch to symmetric I/O mode setup Apr 13 22:53:07.697055 kernel: x2apic enabled Apr 13 22:53:07.697066 kernel: APIC: Switched APIC routing to: physical x2apic Apr 13 22:53:07.697100 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 13 22:53:07.697111 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 13 22:53:07.697155 kernel: kvm-guest: setup PV IPIs Apr 13 22:53:07.697166 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 13 22:53:07.697177 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 13 22:53:07.697191 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438) Apr 13 22:53:07.697201 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 13 22:53:07.697210 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 13 22:53:07.697220 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 13 22:53:07.697230 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 13 22:53:07.697285 kernel: Spectre V2 : Mitigation: Retpolines Apr 13 22:53:07.697296 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 13 22:53:07.697307 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Apr 13 22:53:07.697322 kernel: RETBleed: Vulnerable Apr 13 22:53:07.697333 kernel: Speculative Store Bypass: Vulnerable Apr 13 22:53:07.697344 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 13 22:53:07.697355 kernel: GDS: Unknown: Dependent on hypervisor status Apr 13 22:53:07.697383 kernel: active return thunk: its_return_thunk Apr 13 22:53:07.697394 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 13 22:53:07.697405 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 13 22:53:07.697416 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 13 22:53:07.697427 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 13 22:53:07.697440 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 13 22:53:07.697451 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 13 22:53:07.697462 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 13 22:53:07.697473 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 13 22:53:07.697483 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 13 22:53:07.697494 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 13 22:53:07.697506 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 13 22:53:07.697517 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 13 22:53:07.697528 kernel: Freeing SMP alternatives memory: 32K Apr 13 22:53:07.697540 kernel: pid_max: default: 32768 minimum: 301 Apr 13 22:53:07.697551 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 13 22:53:07.697561 kernel: landlock: Up and running. Apr 13 22:53:07.697572 kernel: SELinux: Initializing. 
Apr 13 22:53:07.697583 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 13 22:53:07.697594 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 13 22:53:07.697605 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6) Apr 13 22:53:07.697616 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 13 22:53:07.697628 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 13 22:53:07.697641 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 13 22:53:07.697652 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only. Apr 13 22:53:07.697663 kernel: signal: max sigframe size: 3632 Apr 13 22:53:07.697673 kernel: rcu: Hierarchical SRCU implementation. Apr 13 22:53:07.697685 kernel: rcu: Max phase no-delay instances is 400. Apr 13 22:53:07.697695 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 13 22:53:07.697707 kernel: smp: Bringing up secondary CPUs ... Apr 13 22:53:07.697718 kernel: smpboot: x86: Booting SMP configuration: Apr 13 22:53:07.697728 kernel: .... 
node #0, CPUs: #1 #2 #3 Apr 13 22:53:07.697741 kernel: smp: Brought up 1 node, 4 CPUs Apr 13 22:53:07.697752 kernel: smpboot: Max logical packages: 1 Apr 13 22:53:07.697763 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 13 22:53:07.697774 kernel: devtmpfs: initialized Apr 13 22:53:07.697785 kernel: x86/mm: Memory block size: 128MB Apr 13 22:53:07.697797 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Apr 13 22:53:07.697808 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Apr 13 22:53:07.697819 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Apr 13 22:53:07.697830 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Apr 13 22:53:07.697844 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Apr 13 22:53:07.697856 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 13 22:53:07.697867 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 13 22:53:07.697878 kernel: pinctrl core: initialized pinctrl subsystem Apr 13 22:53:07.697889 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 13 22:53:07.697900 kernel: audit: initializing netlink subsys (disabled) Apr 13 22:53:07.701040 kernel: audit: type=2000 audit(1776120777.817:1): state=initialized audit_enabled=0 res=1 Apr 13 22:53:07.701104 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 13 22:53:07.701736 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 13 22:53:07.701844 kernel: cpuidle: using governor menu Apr 13 22:53:07.701855 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 13 22:53:07.701866 kernel: dca service started, version 1.12.1 Apr 13 22:53:07.701877 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 13 
22:53:07.701887 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 13 22:53:07.701899 kernel: PCI: Using configuration type 1 for base access Apr 13 22:53:07.701909 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 13 22:53:07.701920 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 13 22:53:07.701931 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 13 22:53:07.701945 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 13 22:53:07.701956 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 13 22:53:07.701966 kernel: ACPI: Added _OSI(Module Device) Apr 13 22:53:07.701977 kernel: ACPI: Added _OSI(Processor Device) Apr 13 22:53:07.701987 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 13 22:53:07.701997 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 13 22:53:07.702008 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 13 22:53:07.702018 kernel: ACPI: Interpreter enabled Apr 13 22:53:07.702029 kernel: ACPI: PM: (supports S0 S3 S5) Apr 13 22:53:07.702042 kernel: ACPI: Using IOAPIC for interrupt routing Apr 13 22:53:07.702053 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 13 22:53:07.702063 kernel: PCI: Using E820 reservations for host bridge windows Apr 13 22:53:07.702098 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 13 22:53:07.702109 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 13 22:53:07.703619 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 13 22:53:07.703790 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 13 22:53:07.703913 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 13 22:53:07.703929 kernel: PCI host bridge to bus 0000:00 Apr 13 22:53:07.704200 kernel: pci_bus 0000:00: 
root bus resource [io 0x0000-0x0cf7 window] Apr 13 22:53:07.704311 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 13 22:53:07.704404 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 13 22:53:07.704499 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Apr 13 22:53:07.704596 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 13 22:53:07.704784 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Apr 13 22:53:07.704891 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 13 22:53:07.705801 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 13 22:53:07.706010 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 13 22:53:07.706811 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Apr 13 22:53:07.706933 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Apr 13 22:53:07.707044 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Apr 13 22:53:07.707829 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Apr 13 22:53:07.712743 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 13 22:53:07.713176 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 13 22:53:07.713320 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Apr 13 22:53:07.713457 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Apr 13 22:53:07.713563 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Apr 13 22:53:07.713740 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 13 22:53:07.713867 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Apr 13 22:53:07.713978 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Apr 13 22:53:07.714162 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Apr 13 22:53:07.718368 kernel: pci 0000:00:04.0: 
[1af4:1000] type 00 class 0x020000 Apr 13 22:53:07.718515 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Apr 13 22:53:07.718617 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Apr 13 22:53:07.718725 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Apr 13 22:53:07.718821 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Apr 13 22:53:07.719007 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 13 22:53:07.723505 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 13 22:53:07.723656 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 13 22:53:07.723724 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Apr 13 22:53:07.723785 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Apr 13 22:53:07.723929 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 13 22:53:07.724004 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Apr 13 22:53:07.724012 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 13 22:53:07.724018 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 13 22:53:07.724024 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 13 22:53:07.724030 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 13 22:53:07.724036 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 13 22:53:07.724042 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 13 22:53:07.724055 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 13 22:53:07.724060 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 13 22:53:07.724066 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 13 22:53:07.727843 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 13 22:53:07.728026 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 13 22:53:07.728044 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 
Apr 13 22:53:07.728054 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 13 22:53:07.728064 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 13 22:53:07.728109 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 13 22:53:07.728321 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 13 22:53:07.728370 kernel: iommu: Default domain type: Translated Apr 13 22:53:07.728380 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 13 22:53:07.728392 kernel: efivars: Registered efivars operations Apr 13 22:53:07.728402 kernel: PCI: Using ACPI for IRQ routing Apr 13 22:53:07.728555 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 13 22:53:07.728568 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Apr 13 22:53:07.728579 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Apr 13 22:53:07.728588 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Apr 13 22:53:07.728724 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Apr 13 22:53:07.729800 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 13 22:53:07.730555 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 13 22:53:07.731186 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 13 22:53:07.731349 kernel: vgaarb: loaded Apr 13 22:53:07.731363 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 13 22:53:07.731375 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 13 22:53:07.731386 kernel: clocksource: Switched to clocksource kvm-clock Apr 13 22:53:07.731397 kernel: VFS: Disk quotas dquot_6.6.0 Apr 13 22:53:07.731415 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 13 22:53:07.731426 kernel: pnp: PnP ACPI init Apr 13 22:53:07.731820 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 13 22:53:07.731845 kernel: pnp: PnP ACPI: found 6 devices Apr 13 22:53:07.731857 kernel: clocksource: 
acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 13 22:53:07.731867 kernel: NET: Registered PF_INET protocol family Apr 13 22:53:07.731879 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 13 22:53:07.731889 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 13 22:53:07.731899 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 13 22:53:07.731916 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 13 22:53:07.731926 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 13 22:53:07.731936 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 13 22:53:07.731946 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 22:53:07.731957 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 22:53:07.731968 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 13 22:53:07.731979 kernel: NET: Registered PF_XDP protocol family Apr 13 22:53:07.732246 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Apr 13 22:53:07.732376 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Apr 13 22:53:07.732466 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 13 22:53:07.732550 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 13 22:53:07.732642 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 13 22:53:07.732737 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 13 22:53:07.732897 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 13 22:53:07.732996 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Apr 13 22:53:07.733012 kernel: PCI: CLS 0 bytes, default 64 Apr 13 22:53:07.733029 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 
fixed counters, 10737418240 ms ovfl timer Apr 13 22:53:07.733040 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 13 22:53:07.733050 kernel: Initialise system trusted keyrings Apr 13 22:53:07.733060 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 13 22:53:07.733102 kernel: Key type asymmetric registered Apr 13 22:53:07.733114 kernel: Asymmetric key parser 'x509' registered Apr 13 22:53:07.733165 kernel: hrtimer: interrupt took 12502588 ns Apr 13 22:53:07.733176 kernel: clocksource: timekeeping watchdog on CPU3: kvm-clock wd-wd read-back delay of 107183ns Apr 13 22:53:07.733187 kernel: clocksource: wd-tsc-wd read-back delay of 129471ns, clock-skew test skipped! Apr 13 22:53:07.733203 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 13 22:53:07.733213 kernel: io scheduler mq-deadline registered Apr 13 22:53:07.733222 kernel: io scheduler kyber registered Apr 13 22:53:07.733232 kernel: io scheduler bfq registered Apr 13 22:53:07.733241 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 13 22:53:07.733252 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 13 22:53:07.733263 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 13 22:53:07.733274 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 13 22:53:07.733284 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 13 22:53:07.733299 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 13 22:53:07.733310 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 13 22:53:07.733321 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 13 22:53:07.733331 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 13 22:53:07.733700 kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 13 22:53:07.733810 kernel: rtc_cmos 00:04: registered as rtc0 Apr 13 22:53:07.733907 kernel: rtc_cmos 00:04: setting system clock to 2026-04-13T22:53:04 UTC 
(1776120784) Apr 13 22:53:07.734000 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Apr 13 22:53:07.734063 kernel: intel_pstate: CPU model not supported Apr 13 22:53:07.734231 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 13 22:53:07.734242 kernel: efifb: probing for efifb Apr 13 22:53:07.734251 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Apr 13 22:53:07.734260 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Apr 13 22:53:07.734269 kernel: efifb: scrolling: redraw Apr 13 22:53:07.734296 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Apr 13 22:53:07.734309 kernel: Console: switching to colour frame buffer device 100x37 Apr 13 22:53:07.734320 kernel: fb0: EFI VGA frame buffer device Apr 13 22:53:07.734330 kernel: pstore: Using crash dump compression: deflate Apr 13 22:53:07.734340 kernel: pstore: Registered efi_pstore as persistent store backend Apr 13 22:53:07.734350 kernel: NET: Registered PF_INET6 protocol family Apr 13 22:53:07.734360 kernel: Segment Routing with IPv6 Apr 13 22:53:07.734371 kernel: In-situ OAM (IOAM) with IPv6 Apr 13 22:53:07.734381 kernel: NET: Registered PF_PACKET protocol family Apr 13 22:53:07.734391 kernel: Key type dns_resolver registered Apr 13 22:53:07.734401 kernel: IPI shorthand broadcast: enabled Apr 13 22:53:07.734411 kernel: sched_clock: Marking stable (6878020059, 651022797)->(8408589135, -879546279) Apr 13 22:53:07.734425 kernel: registered taskstats version 1 Apr 13 22:53:07.734436 kernel: Loading compiled-in X.509 certificates Apr 13 22:53:07.734447 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00' Apr 13 22:53:07.734457 kernel: Key type .fscrypt registered Apr 13 22:53:07.734466 kernel: Key type fscrypt-provisioning registered Apr 13 22:53:07.734476 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 13 22:53:07.734486 kernel: ima: Allocated hash algorithm: sha1
Apr 13 22:53:07.734496 kernel: ima: No architecture policies found
Apr 13 22:53:07.734507 kernel: clk: Disabling unused clocks
Apr 13 22:53:07.734521 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 13 22:53:07.734531 kernel: Write protecting the kernel read-only data: 36864k
Apr 13 22:53:07.734542 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 13 22:53:07.734554 kernel: Run /init as init process
Apr 13 22:53:07.734568 kernel: with arguments:
Apr 13 22:53:07.734579 kernel: /init
Apr 13 22:53:07.734592 kernel: with environment:
Apr 13 22:53:07.734602 kernel: HOME=/
Apr 13 22:53:07.734612 kernel: TERM=linux
Apr 13 22:53:07.734652 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 22:53:07.734668 systemd[1]: Detected virtualization kvm.
Apr 13 22:53:07.734680 systemd[1]: Detected architecture x86-64.
Apr 13 22:53:07.734692 systemd[1]: Running in initrd.
Apr 13 22:53:07.734706 systemd[1]: No hostname configured, using default hostname.
Apr 13 22:53:07.734716 systemd[1]: Hostname set to .
Apr 13 22:53:07.734728 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 22:53:07.734740 systemd[1]: Queued start job for default target initrd.target.
Apr 13 22:53:07.734752 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 22:53:07.734764 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 22:53:07.734775 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 22:53:07.734786 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 22:53:07.734800 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 22:53:07.734811 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 22:53:07.734824 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 22:53:07.734836 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 22:53:07.734847 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 22:53:07.734858 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 22:53:07.734873 systemd[1]: Reached target paths.target - Path Units.
Apr 13 22:53:07.734885 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 22:53:07.734896 systemd[1]: Reached target swap.target - Swaps.
Apr 13 22:53:07.734908 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 22:53:07.734918 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 22:53:07.734930 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 22:53:07.734940 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 22:53:07.734950 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 22:53:07.734960 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 22:53:07.734973 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 22:53:07.734984 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 22:53:07.734994 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 22:53:07.735004 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 22:53:07.735015 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 22:53:07.735027 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 22:53:07.735037 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 22:53:07.735048 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 22:53:07.735059 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 22:53:07.735104 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 22:53:07.735190 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 22:53:07.735244 systemd-journald[194]: Collecting audit messages is disabled.
Apr 13 22:53:07.735275 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 22:53:07.735293 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 22:53:07.735304 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 22:53:07.735317 systemd-journald[194]: Journal started
Apr 13 22:53:07.735347 systemd-journald[194]: Runtime Journal (/run/log/journal/57b66e2c9d544325852e81f66eac64ec) is 6.0M, max 48.3M, 42.2M free.
Apr 13 22:53:07.755199 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 22:53:07.778597 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 22:53:07.802624 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 22:53:07.803292 systemd-modules-load[195]: Inserted module 'overlay'
Apr 13 22:53:07.825851 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 22:53:07.884449 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 22:53:07.919575 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 22:53:07.920001 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 22:53:07.954756 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 22:53:07.974834 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 22:53:07.987191 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 22:53:08.014931 dracut-cmdline[223]: dracut-dracut-053
Apr 13 22:53:08.024176 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 22:53:08.105240 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 22:53:08.129574 kernel: Bridge firewalling registered
Apr 13 22:53:08.144501 systemd-modules-load[195]: Inserted module 'br_netfilter'
Apr 13 22:53:08.161535 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 22:53:08.212673 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 22:53:08.260007 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 22:53:08.307449 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 22:53:08.524568 systemd-resolved[286]: Positive Trust Anchors:
Apr 13 22:53:08.524603 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 22:53:08.524639 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 22:53:08.560972 systemd-resolved[286]: Defaulting to hostname 'linux'.
Apr 13 22:53:08.590456 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 22:53:08.613291 kernel: SCSI subsystem initialized
Apr 13 22:53:08.601505 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 22:53:08.659056 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 22:53:08.707269 kernel: iscsi: registered transport (tcp)
Apr 13 22:53:08.796859 kernel: iscsi: registered transport (qla4xxx)
Apr 13 22:53:08.797260 kernel: QLogic iSCSI HBA Driver
Apr 13 22:53:09.315951 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 22:53:09.350816 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 22:53:09.537721 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 22:53:09.539596 kernel: device-mapper: uevent: version 1.0.3
Apr 13 22:53:09.544401 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 22:53:09.784058 kernel: raid6: avx512x4 gen() 30295 MB/s
Apr 13 22:53:09.805647 kernel: raid6: avx512x2 gen() 23922 MB/s
Apr 13 22:53:09.828098 kernel: raid6: avx512x1 gen() 16278 MB/s
Apr 13 22:53:09.850592 kernel: raid6: avx2x4 gen() 14901 MB/s
Apr 13 22:53:09.870147 kernel: raid6: avx2x2 gen() 18332 MB/s
Apr 13 22:53:09.894017 kernel: raid6: avx2x1 gen() 10530 MB/s
Apr 13 22:53:09.894530 kernel: raid6: using algorithm avx512x4 gen() 30295 MB/s
Apr 13 22:53:09.922932 kernel: raid6: .... xor() 4523 MB/s, rmw enabled
Apr 13 22:53:09.923805 kernel: raid6: using avx512x2 recovery algorithm
Apr 13 22:53:10.046316 kernel: xor: automatically using best checksumming function avx
Apr 13 22:53:11.094489 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 22:53:11.412632 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 22:53:11.593880 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 22:53:13.085078 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Apr 13 22:53:13.496875 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 22:53:13.543153 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 22:53:14.314326 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Apr 13 22:53:17.541045 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 22:53:17.658551 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 22:53:20.121413 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 22:53:20.211596 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 22:53:20.386577 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 22:53:20.398053 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 22:53:20.408665 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 22:53:20.413644 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 22:53:20.505721 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 22:53:20.526741 kernel: cryptd: max_cpu_qlen set to 1000
Apr 13 22:53:20.555949 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 22:53:20.558663 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 22:53:20.601879 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 22:53:20.675900 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 22:53:20.722540 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 22:53:20.739715 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 22:53:20.862956 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 22:53:20.900026 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 22:53:21.219951 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 22:53:21.282804 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 13 22:53:21.286675 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 13 22:53:21.295491 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 22:53:21.366649 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 13 22:53:21.366715 kernel: GPT:9289727 != 19775487
Apr 13 22:53:21.366728 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 13 22:53:21.366741 kernel: GPT:9289727 != 19775487
Apr 13 22:53:21.366753 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 13 22:53:21.366766 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 22:53:21.575579 kernel: libata version 3.00 loaded.
Apr 13 22:53:21.660589 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 22:53:21.778329 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 13 22:53:21.807524 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (473)
Apr 13 22:53:21.828313 kernel: AES CTR mode by8 optimization enabled
Apr 13 22:53:21.835760 kernel: ahci 0000:00:1f.2: version 3.0
Apr 13 22:53:21.836487 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 13 22:53:21.853722 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (464)
Apr 13 22:53:21.853803 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 13 22:53:21.893063 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 13 22:53:21.903318 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 13 22:53:21.932963 kernel: scsi host0: ahci
Apr 13 22:53:21.939304 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 13 22:53:21.967409 kernel: scsi host1: ahci
Apr 13 22:53:21.975518 kernel: scsi host2: ahci
Apr 13 22:53:21.975645 kernel: scsi host3: ahci
Apr 13 22:53:21.975763 kernel: scsi host4: ahci
Apr 13 22:53:21.975877 kernel: scsi host5: ahci
Apr 13 22:53:21.975982 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Apr 13 22:53:21.978638 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Apr 13 22:53:21.984096 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Apr 13 22:53:21.986242 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Apr 13 22:53:21.986049 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 13 22:53:22.006709 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Apr 13 22:53:22.006742 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Apr 13 22:53:22.036753 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 13 22:53:22.049521 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 13 22:53:22.133199 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 13 22:53:22.320412 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 13 22:53:22.320924 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 13 22:53:22.328077 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 13 22:53:22.336798 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 13 22:53:22.336981 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 13 22:53:22.340883 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 13 22:53:22.396056 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 13 22:53:22.414530 kernel: ata3.00: applying bridge limits
Apr 13 22:53:22.414594 kernel: ata3.00: configured for UDMA/100
Apr 13 22:53:22.414608 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 13 22:53:22.487915 disk-uuid[562]: Primary Header is updated.
Apr 13 22:53:22.487915 disk-uuid[562]: Secondary Entries is updated.
Apr 13 22:53:22.487915 disk-uuid[562]: Secondary Header is updated.
Apr 13 22:53:22.534894 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 22:53:22.684342 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 22:53:22.984890 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 13 22:53:22.992977 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 13 22:53:23.016082 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 13 22:53:23.736710 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 22:53:23.740846 disk-uuid[565]: The operation has completed successfully.
Apr 13 22:53:28.082614 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 13 22:53:28.090963 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 13 22:53:28.333726 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 13 22:53:28.723981 sh[592]: Success
Apr 13 22:53:28.953669 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 13 22:53:29.447533 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 13 22:53:29.546556 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 13 22:53:29.703409 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 13 22:53:29.795285 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d
Apr 13 22:53:29.795617 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 13 22:53:29.804452 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 13 22:53:29.809457 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 13 22:53:29.819579 kernel: BTRFS info (device dm-0): using free space tree
Apr 13 22:53:29.977652 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 13 22:53:30.008502 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 13 22:53:30.076478 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 13 22:53:30.132499 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 13 22:53:30.328433 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 22:53:30.328996 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 22:53:30.336587 kernel: BTRFS info (device vda6): using free space tree
Apr 13 22:53:30.420925 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 13 22:53:30.576484 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 13 22:53:30.584691 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 22:53:30.807972 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 13 22:53:30.904475 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 22:53:32.180808 ignition[690]: Ignition 2.19.0
Apr 13 22:53:32.181067 ignition[690]: Stage: fetch-offline
Apr 13 22:53:32.183606 ignition[690]: no configs at "/usr/lib/ignition/base.d"
Apr 13 22:53:32.183616 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 22:53:32.183717 ignition[690]: parsed url from cmdline: ""
Apr 13 22:53:32.183719 ignition[690]: no config URL provided
Apr 13 22:53:32.183723 ignition[690]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 22:53:32.183729 ignition[690]: no config at "/usr/lib/ignition/user.ign"
Apr 13 22:53:32.183843 ignition[690]: op(1): [started] loading QEMU firmware config module
Apr 13 22:53:32.183849 ignition[690]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 13 22:53:32.363077 ignition[690]: op(1): [finished] loading QEMU firmware config module
Apr 13 22:53:33.090969 ignition[690]: parsing config with SHA512: 0e78a754fb726fc120c7ec91e29ad374c44ec08817a0c74d20458f5757bf1fe27a742cc7da7f443376f50b8ecbfe7a68837bbe1f7f19d8010f813905a273a346
Apr 13 22:53:33.126387 unknown[690]: fetched base config from "system"
Apr 13 22:53:33.130869 unknown[690]: fetched user config from "qemu"
Apr 13 22:53:33.139104 ignition[690]: fetch-offline: fetch-offline passed
Apr 13 22:53:33.143445 ignition[690]: Ignition finished successfully
Apr 13 22:53:33.181197 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 22:53:34.419740 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 22:53:34.819036 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 22:53:38.152815 systemd-networkd[782]: lo: Link UP
Apr 13 22:53:38.159180 systemd-networkd[782]: lo: Gained carrier
Apr 13 22:53:38.188578 systemd-networkd[782]: Enumeration completed
Apr 13 22:53:38.190073 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 22:53:38.190077 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 22:53:38.193636 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 22:53:38.210953 systemd-networkd[782]: eth0: Link UP
Apr 13 22:53:38.210959 systemd-networkd[782]: eth0: Gained carrier
Apr 13 22:53:38.210974 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 22:53:38.237325 systemd[1]: Reached target network.target - Network.
Apr 13 22:53:38.313639 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 13 22:53:38.453851 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 22:53:38.484308 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 13 22:53:38.773186 ignition[784]: Ignition 2.19.0
Apr 13 22:53:38.796924 ignition[784]: Stage: kargs
Apr 13 22:53:38.805653 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Apr 13 22:53:38.805666 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 22:53:38.817418 ignition[784]: kargs: kargs passed
Apr 13 22:53:38.817654 ignition[784]: Ignition finished successfully
Apr 13 22:53:38.855198 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 22:53:38.941772 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 22:53:39.338457 ignition[793]: Ignition 2.19.0
Apr 13 22:53:39.341578 ignition[793]: Stage: disks
Apr 13 22:53:39.341914 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Apr 13 22:53:39.341928 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 22:53:39.363301 ignition[793]: disks: disks passed
Apr 13 22:53:39.399241 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 22:53:39.363585 ignition[793]: Ignition finished successfully
Apr 13 22:53:39.417461 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 22:53:39.464684 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 22:53:39.464900 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 22:53:39.464957 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 22:53:39.464984 systemd[1]: Reached target basic.target - Basic System.
Apr 13 22:53:39.658902 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 22:53:39.995426 systemd-networkd[782]: eth0: Gained IPv6LL
Apr 13 22:53:40.122076 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 13 22:53:40.229051 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 22:53:40.455535 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 22:53:41.468511 kernel: EXT4-fs (vda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 13 22:53:41.489447 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 22:53:41.527807 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 22:53:41.554936 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 22:53:41.660763 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 22:53:41.728408 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 13 22:53:41.792719 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813)
Apr 13 22:53:41.792826 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 22:53:41.792844 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 22:53:41.792858 kernel: BTRFS info (device vda6): using free space tree
Apr 13 22:53:41.728490 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 22:53:41.728527 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 22:53:41.769616 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 22:53:41.844081 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 13 22:53:41.848621 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 22:53:41.865447 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 22:53:42.167511 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 22:53:42.297077 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Apr 13 22:53:42.410944 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 22:53:42.438021 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 22:53:44.504288 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 22:53:44.572250 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 22:53:44.630431 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 22:53:44.741931 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 13 22:53:44.762500 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 22:53:45.052736 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 13 22:53:45.110331 ignition[927]: INFO : Ignition 2.19.0
Apr 13 22:53:45.110331 ignition[927]: INFO : Stage: mount
Apr 13 22:53:45.128561 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 22:53:45.128561 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 22:53:45.128561 ignition[927]: INFO : mount: mount passed
Apr 13 22:53:45.128561 ignition[927]: INFO : Ignition finished successfully
Apr 13 22:53:45.136328 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 13 22:53:45.163195 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 13 22:53:45.311064 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 22:53:45.485070 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941)
Apr 13 22:53:45.501489 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 22:53:45.509432 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 22:53:45.514422 kernel: BTRFS info (device vda6): using free space tree
Apr 13 22:53:45.561638 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 13 22:53:45.604367 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 22:53:50.965556 ignition[958]: INFO : Ignition 2.19.0
Apr 13 22:53:51.004595 ignition[958]: INFO : Stage: files
Apr 13 22:53:51.231058 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 22:53:51.266963 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 22:53:51.522546 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 22:53:51.887356 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 22:53:51.903716 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 22:53:52.670580 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 22:53:52.732657 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 22:53:53.253740 unknown[958]: wrote ssh authorized keys file for user: core
Apr 13 22:53:53.282984 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 22:53:53.707010 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 22:53:53.872977 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 13 22:54:01.866806 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET error: Get "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz": EOF
Apr 13 22:54:02.090631 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #2
Apr 13 22:54:03.020323 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 13 22:54:04.727006 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 22:54:04.727006 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 22:54:04.727006 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 22:54:04.767551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 22:54:04.767551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 22:54:04.767551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 22:54:04.767551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 22:54:04.767551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 22:54:04.767551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 22:54:04.767551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 22:54:04.767551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 22:54:04.767551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 22:54:04.767551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 22:54:04.767551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 22:54:04.767551 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 13 22:54:05.582061 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 13 22:54:12.318062 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 22:54:12.318062 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 13 22:54:12.382994 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 22:54:12.586418 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 22:54:12.586418 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 13 22:54:12.586418 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 13 22:54:12.695260 ignition[958]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 13 22:54:12.781501 ignition[958]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 13 22:54:12.818675 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 13 22:54:12.818675 ignition[958]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 13 22:54:17.911065 ignition[958]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 13 22:54:19.369496 ignition[958]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 13 22:54:19.402184 ignition[958]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 13 22:54:19.402184 ignition[958]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 22:54:19.402184 ignition[958]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 22:54:19.402184 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 22:54:19.578183 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 22:54:19.578183 ignition[958]: INFO : files: files passed
Apr 13 22:54:19.578183 ignition[958]: INFO : Ignition finished successfully
Apr 13 22:54:19.597027 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 22:54:19.725654 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 22:54:19.813427 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 22:54:20.019868 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 22:54:20.020039 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 22:54:20.384522 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 13 22:54:20.455261 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 22:54:20.520683 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 22:54:20.520683 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 22:54:20.771510 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 22:54:20.916639 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 22:54:21.045867 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 22:54:33.030024 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 22:54:33.031873 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 22:54:33.086707 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 22:54:33.111611 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 22:54:33.159518 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 22:54:33.235173 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 22:54:34.260318 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 22:54:34.590995 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 22:54:35.396903 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 22:54:35.444351 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 22:54:35.453066 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 22:54:35.484942 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 22:54:35.513339 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 22:54:35.530238 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 22:54:35.535047 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 22:54:35.566361 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 22:54:35.633578 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 22:54:35.712222 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 22:54:35.728572 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 22:54:35.735059 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 22:54:35.763272 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 22:54:35.772308 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 22:54:35.804064 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 22:54:35.824369 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 22:54:35.831034 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 22:54:35.927982 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 22:54:35.959346 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 22:54:35.980954 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 22:54:35.984826 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 22:54:35.988836 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 22:54:35.989449 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 22:54:36.040759 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 22:54:36.061836 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 22:54:36.137933 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 22:54:36.174655 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 22:54:36.198110 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 22:54:36.251357 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 22:54:36.285396 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 22:54:36.327381 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 22:54:36.394092 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 22:54:36.441964 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 22:54:36.453033 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 22:54:36.471880 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 22:54:36.542357 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 22:54:36.704079 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 22:54:36.751612 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 22:54:36.889441 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 22:54:36.956445 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 22:54:36.973943 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 22:54:37.119103 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 22:54:37.146027 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 22:54:37.162754 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 22:54:37.207997 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 22:54:37.208302 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 22:54:37.382717 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 22:54:37.393881 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 22:54:37.568587 ignition[1013]: INFO : Ignition 2.19.0
Apr 13 22:54:37.578689 ignition[1013]: INFO : Stage: umount
Apr 13 22:54:37.595248 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 22:54:37.595248 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 22:54:37.630525 ignition[1013]: INFO : umount: umount passed
Apr 13 22:54:37.630525 ignition[1013]: INFO : Ignition finished successfully
Apr 13 22:54:37.651049 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 22:54:37.652294 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 22:54:37.689638 systemd[1]: Stopped target network.target - Network.
Apr 13 22:54:37.705642 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 22:54:37.713065 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 22:54:37.748778 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 22:54:37.748918 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 22:54:37.775669 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 22:54:37.775781 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 22:54:37.793952 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 22:54:37.794415 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 22:54:37.891805 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 22:54:37.905854 systemd-networkd[782]: eth0: DHCPv6 lease lost
Apr 13 22:54:37.914792 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 22:54:37.941427 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 22:54:37.947028 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 22:54:37.947210 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 22:54:37.970768 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 22:54:37.971755 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 22:54:38.111729 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 22:54:38.111985 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 22:54:38.363107 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 22:54:38.394037 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 22:54:38.411642 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 22:54:38.415036 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 22:54:38.488581 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 22:54:38.499455 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 22:54:38.499976 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 22:54:38.537819 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 22:54:38.537938 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 22:54:38.556829 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 22:54:38.557078 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 22:54:38.592728 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 22:54:38.592958 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 22:54:38.630026 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 22:54:38.673572 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 22:54:38.673787 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 22:54:38.723983 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 22:54:38.724560 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 22:54:38.730569 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 22:54:38.730715 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 22:54:38.743916 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 22:54:38.744051 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 22:54:38.790387 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 22:54:38.790928 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 22:54:38.824350 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 22:54:38.824647 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 22:54:38.900538 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 22:54:38.932661 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 22:54:38.950537 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 22:54:38.973641 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 22:54:38.990630 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 22:54:39.044195 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 22:54:39.133895 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 22:54:39.330090 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 22:54:39.331252 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 22:54:39.405947 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 22:54:39.635366 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 22:54:40.395223 systemd[1]: Switching root.
Apr 13 22:54:40.706666 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 13 22:54:40.706905 systemd-journald[194]: Journal stopped
Apr 13 22:57:05.210007 kernel: SELinux: policy capability network_peer_controls=1
Apr 13 22:57:05.211605 kernel: SELinux: policy capability open_perms=1
Apr 13 22:57:05.211664 kernel: SELinux: policy capability extended_socket_class=1
Apr 13 22:57:05.211704 kernel: SELinux: policy capability always_check_network=0
Apr 13 22:57:05.213797 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 13 22:57:05.213951 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 13 22:57:05.213979 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 13 22:57:05.213992 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 13 22:57:05.214006 kernel: audit: type=1403 audit(1776120881.917:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 13 22:57:05.214059 systemd[1]: Successfully loaded SELinux policy in 431.053ms.
Apr 13 22:57:05.214181 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 156.444ms.
Apr 13 22:57:05.214198 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 22:57:05.214238 systemd[1]: Detected virtualization kvm.
Apr 13 22:57:05.216622 systemd[1]: Detected architecture x86-64.
Apr 13 22:57:05.216728 systemd[1]: Detected first boot.
Apr 13 22:57:05.216742 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 22:57:05.216756 zram_generator::config[1058]: No configuration found.
Apr 13 22:57:05.216798 systemd[1]: Populated /etc with preset unit settings.
Apr 13 22:57:05.216811 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 13 22:57:05.216822 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 13 22:57:05.216896 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 13 22:57:05.216936 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 13 22:57:05.216949 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 13 22:57:05.216962 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 13 22:57:05.216978 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 13 22:57:05.216991 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 13 22:57:05.217003 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 13 22:57:05.217016 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 13 22:57:05.217028 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 13 22:57:05.217046 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 22:57:05.217059 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 22:57:05.217071 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 13 22:57:05.217098 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 13 22:57:05.217111 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 13 22:57:05.218514 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 22:57:05.218608 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 13 22:57:05.218622 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 22:57:05.218635 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 13 22:57:05.226720 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 13 22:57:05.228313 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 13 22:57:05.228337 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 13 22:57:05.228352 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 22:57:05.228446 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 22:57:05.228461 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 22:57:05.228486 systemd[1]: Reached target swap.target - Swaps.
Apr 13 22:57:05.228501 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 13 22:57:05.228532 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 13 22:57:05.228557 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 22:57:05.228571 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 22:57:05.228596 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 22:57:05.228609 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 13 22:57:05.228633 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 13 22:57:05.228647 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 13 22:57:05.228679 systemd[1]: Mounting media.mount - External Media Directory...
Apr 13 22:57:05.228694 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 22:57:05.228722 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 13 22:57:05.228735 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 13 22:57:05.228752 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 13 22:57:05.228776 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 13 22:57:05.228791 systemd[1]: Reached target machines.target - Containers.
Apr 13 22:57:05.228804 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 13 22:57:05.228817 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 22:57:05.228829 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 22:57:05.228845 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 13 22:57:05.232808 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 22:57:05.232927 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 22:57:05.232944 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 22:57:05.235041 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 13 22:57:05.236598 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 22:57:05.236646 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 13 22:57:05.236659 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 13 22:57:05.236737 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 13 22:57:05.236754 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 13 22:57:05.236768 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 13 22:57:05.236801 kernel: ACPI: bus type drm_connector registered
Apr 13 22:57:05.236818 kernel: fuse: init (API version 7.39)
Apr 13 22:57:05.236845 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 22:57:05.236860 kernel: loop: module loaded
Apr 13 22:57:05.236873 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 22:57:05.236900 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 13 22:57:05.239926 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 13 22:57:05.242400 systemd-journald[1135]: Collecting audit messages is disabled.
Apr 13 22:57:05.242513 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 22:57:05.242544 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 13 22:57:05.242559 systemd[1]: Stopped verity-setup.service.
Apr 13 22:57:05.242573 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 22:57:05.242588 systemd-journald[1135]: Journal started
Apr 13 22:57:05.242662 systemd-journald[1135]: Runtime Journal (/run/log/journal/57b66e2c9d544325852e81f66eac64ec) is 6.0M, max 48.3M, 42.2M free.
Apr 13 22:56:37.743587 systemd[1]: Queued start job for default target multi-user.target.
Apr 13 22:56:44.134964 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 13 22:56:44.771824 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 13 22:57:05.286011 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 22:56:44.945685 systemd[1]: systemd-journald.service: Consumed 2.595s CPU time.
Apr 13 22:57:05.321292 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 13 22:57:05.344831 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 13 22:57:05.404843 systemd[1]: Mounted media.mount - External Media Directory.
Apr 13 22:57:05.431774 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 13 22:57:05.514811 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 13 22:57:05.602077 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 13 22:57:05.705580 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 13 22:57:05.725090 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 22:57:05.735923 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 13 22:57:05.737316 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 13 22:57:05.830674 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 22:57:05.851888 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 22:57:05.948075 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 22:57:05.969054 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 22:57:06.020020 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 22:57:06.043095 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 22:57:06.220874 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 13 22:57:06.298899 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 13 22:57:06.336984 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 22:57:06.353297 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 22:57:06.401774 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 22:57:06.425703 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 13 22:57:06.472292 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 13 22:57:06.910888 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 13 22:57:07.409902 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 13 22:57:07.480105 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 13 22:57:07.487427 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 13 22:57:07.487490 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 22:57:07.513240 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 13 22:57:07.624291 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 13 22:57:07.734370 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 13 22:57:07.739001 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 22:57:07.921870 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 13 22:57:08.016565 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 13 22:57:08.038102 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 22:57:08.133427 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 13 22:57:08.170008 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 22:57:08.228581 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 22:57:08.390303 systemd-journald[1135]: Time spent on flushing to /var/log/journal/57b66e2c9d544325852e81f66eac64ec is 544.308ms for 995 entries.
Apr 13 22:57:08.390303 systemd-journald[1135]: System Journal (/var/log/journal/57b66e2c9d544325852e81f66eac64ec) is 8.0M, max 195.6M, 187.6M free.
Apr 13 22:57:09.378960 systemd-journald[1135]: Received client request to flush runtime journal.
Apr 13 22:57:09.382071 kernel: loop0: detected capacity change from 0 to 142488
Apr 13 22:57:09.383393 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 13 22:57:08.334900 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 22:57:08.413426 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 13 22:57:08.438261 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 13 22:57:08.443035 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 13 22:57:08.495532 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 13 22:57:08.538001 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 13 22:57:08.597592 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 13 22:57:08.719677 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 13 22:57:09.386907 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 13 22:57:09.401998 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 22:57:09.428757 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 13 22:57:09.475992 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 13 22:57:09.887820 kernel: loop1: detected capacity change from 0 to 228704
Apr 13 22:57:10.066825 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 22:57:10.231617 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 13 22:57:10.256969 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 13 22:57:10.337483 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 22:57:10.605455 kernel: loop2: detected capacity change from 0 to 140768
Apr 13 22:57:10.964566 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 13 22:57:11.288223 kernel: loop3: detected capacity change from 0 to 142488
Apr 13 22:57:11.533718 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Apr 13 22:57:11.573141 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Apr 13 22:57:11.590962 kernel: loop4: detected capacity change from 0 to 228704
Apr 13 22:57:11.875968 kernel: loop5: detected capacity change from 0 to 140768
Apr 13 22:57:12.199446 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 13 22:57:12.273974 (sd-merge)[1195]: Merged extensions into '/usr'.
Apr 13 22:57:12.366704 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 22:57:12.528549 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 13 22:57:12.528630 systemd[1]: Reloading...
Apr 13 22:57:16.603772 zram_generator::config[1226]: No configuration found.
Apr 13 22:57:19.942099 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 13 22:57:49.902180 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 22:58:08.618526 systemd[1]: Reloading finished in 56073 ms.
Apr 13 22:58:24.024478 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 13 22:58:24.314607 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 13 22:58:27.296778 systemd[1]: Starting ensure-sysext.service...
Apr 13 22:58:27.380185 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 22:58:27.548250 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)...
Apr 13 22:58:27.548329 systemd[1]: Reloading...
Apr 13 22:58:28.266696 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 13 22:58:28.267044 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 13 22:58:28.267996 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 13 22:58:28.268298 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Apr 13 22:58:28.268365 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Apr 13 22:58:28.551327 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 22:58:28.580927 systemd-tmpfiles[1261]: Skipping /boot
Apr 13 22:58:29.131375 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 22:58:29.131416 systemd-tmpfiles[1261]: Skipping /boot
Apr 13 22:58:29.261722 zram_generator::config[1287]: No configuration found.
Apr 13 22:58:38.475586 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 22:58:39.008440 systemd[1]: Reloading finished in 11459 ms.
Apr 13 22:58:39.277506 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 22:58:39.579881 systemd[1]: dev-disk-by\x2dlabel-OEM.device: Job dev-disk-by\x2dlabel-OEM.device/start timed out.
Apr 13 22:58:39.579914 systemd[1]: Timed out waiting for device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 13 22:58:39.588289 systemd[1]: Dependency failed for systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 13 22:58:39.592228 systemd[1]: systemd-fsck@dev-disk-by\x2dlabel-OEM.service: Job systemd-fsck@dev-disk-by\x2dlabel-OEM.service/start failed with result 'dependency'.
Apr 13 22:58:39.592253 systemd[1]: dev-disk-by\x2dlabel-OEM.device: Job dev-disk-by\x2dlabel-OEM.device/start failed with result 'timeout'.
Apr 13 22:58:39.597149 systemd[1]: dev-ttyS0.device: Job dev-ttyS0.device/start timed out.
Apr 13 22:58:39.597220 systemd[1]: Timed out waiting for device dev-ttyS0.device - /dev/ttyS0.
Apr 13 22:58:39.614212 systemd[1]: Dependency failed for serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 13 22:58:39.634727 systemd[1]: serial-getty@ttyS0.service: Job serial-getty@ttyS0.service/start failed with result 'dependency'.
Apr 13 22:58:39.634786 systemd[1]: dev-ttyS0.device: Job dev-ttyS0.device/start failed with result 'timeout'.
Apr 13 22:58:39.635102 systemd[1]: systemd-hwdb-update.service: start operation timed out. Terminating.
Apr 13 22:58:39.685853 systemd[1]: systemd-hwdb-update.service: Main process exited, code=killed, status=15/TERM
Apr 13 22:58:39.733779 systemd[1]: systemd-hwdb-update.service: Failed with result 'timeout'.
Apr 13 22:58:39.735770 systemd[1]: Failed to start systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 13 22:58:39.753517 systemd[1]: systemd-hwdb-update.service: Consumed 42.775s CPU time, 19.7M memory peak, 0B memory swap peak.
Apr 13 22:58:40.493990 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 22:58:40.791681 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 22:58:40.951988 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 13 22:58:40.990666 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 22:58:41.122105 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 22:58:41.336650 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 22:58:41.618528 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 22:58:41.626185 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 22:58:41.637294 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 13 22:58:41.986780 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 13 22:58:42.152377 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 22:58:42.195370 augenrules[1349]: No rules
Apr 13 22:58:42.196864 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 13 22:58:42.220668 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 22:58:42.353728 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 22:58:42.379314 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 22:58:42.407392 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 22:58:42.445232 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 22:58:42.446100 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 22:58:42.459202 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 22:58:42.459372 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 22:58:42.631609 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 13 22:58:42.724969 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 13 22:58:42.796796 systemd[1]: Finished ensure-sysext.service.
Apr 13 22:58:42.816352 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 13 22:58:42.857529 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 22:58:42.858104 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 22:58:42.921857 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 22:58:43.010827 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 22:58:43.151147 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 22:58:43.247030 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 22:58:43.298344 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 22:58:43.389748 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 13 22:58:43.468735 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 13 22:58:43.482056 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 22:58:43.495521 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 13 22:58:43.649993 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 22:58:43.650487 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 22:58:43.673261 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 22:58:43.673454 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 22:58:43.729971 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 22:58:43.744010 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 22:58:43.789809 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 22:58:43.804953 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 22:58:43.972875 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 22:58:43.973041 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 22:58:43.973085 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 22:58:44.404343 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 13 22:58:45.172092 systemd-resolved[1347]: Positive Trust Anchors:
Apr 13 22:58:45.172152 systemd-resolved[1347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 22:58:45.172189 systemd-resolved[1347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 22:58:45.246801 systemd-resolved[1347]: Defaulting to hostname 'linux'.
Apr 13 22:58:45.327988 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 22:58:45.381388 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 22:58:45.409231 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 13 22:58:45.420261 systemd[1]: Reached target time-set.target - System Time Set.
Apr 13 22:58:54.252941 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 13 22:58:54.326694 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 22:58:54.543086 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 13 22:58:55.227230 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 13 22:58:55.728412 systemd-udevd[1384]: Using default interface naming scheme 'v255'.
Apr 13 22:58:56.420220 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 22:58:56.613642 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 22:58:57.022948 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 13 22:58:57.409552 systemd-networkd[1389]: lo: Link UP
Apr 13 22:58:57.409565 systemd-networkd[1389]: lo: Gained carrier
Apr 13 22:58:57.410459 systemd-networkd[1389]: Enumeration completed
Apr 13 22:58:57.417275 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 22:58:57.425377 systemd[1]: Reached target network.target - Network.
Apr 13 22:58:57.540398 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 13 22:58:58.595325 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1395)
Apr 13 22:58:58.970591 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 22:58:58.971695 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 22:58:59.026831 systemd-networkd[1389]: eth0: Link UP
Apr 13 22:58:59.026858 systemd-networkd[1389]: eth0: Gained carrier
Apr 13 22:58:59.026885 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 22:58:59.134831 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 13 22:58:59.182518 kernel: ACPI: button: Power Button [PWRF]
Apr 13 22:58:59.253438 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 13 22:58:59.261106 systemd-timesyncd[1368]: Network configuration changed, trying to establish connection.
Apr 13 22:58:59.315111 systemd-timesyncd[1368]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 13 22:58:59.320751 systemd-timesyncd[1368]: Initial clock synchronization to Mon 2026-04-13 22:58:59.658306 UTC.
Apr 13 22:58:59.868392 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Apr 13 22:58:59.960378 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 13 22:58:59.981655 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 13 22:58:59.996376 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 13 22:58:59.997284 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 13 22:59:00.698361 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 22:59:00.843758 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 22:59:00.844074 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 22:59:00.914356 kernel: mousedev: PS/2 mouse device common for all mice
Apr 13 22:59:00.915751 systemd-networkd[1389]: eth0: Gained IPv6LL
Apr 13 22:59:00.934835 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 22:59:01.043588 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 13 22:59:01.694758 systemd[1]: Reached target network-online.target - Network is Online.
Apr 13 22:59:03.637080 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 22:59:06.173945 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 13 22:59:06.595042 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 13 22:59:07.094433 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 22:59:08.080707 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 13 22:59:08.190980 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 22:59:08.243968 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 22:59:08.481317 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 13 22:59:08.533664 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 13 22:59:08.574400 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 13 22:59:08.604388 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 13 22:59:08.608733 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 13 22:59:08.622572 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 13 22:59:08.622809 systemd[1]: Reached target paths.target - Path Units.
Apr 13 22:59:08.658475 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 22:59:08.761635 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 13 22:59:09.085849 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 13 22:59:09.362740 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 13 22:59:09.602422 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 13 22:59:09.692409 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 13 22:59:09.696909 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 22:59:09.761478 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 22:59:09.755516 systemd[1]: Reached target basic.target - Basic System.
Apr 13 22:59:09.791371 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 13 22:59:09.791496 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 13 22:59:09.872402 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 13 22:59:09.993959 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 13 22:59:10.050928 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 13 22:59:10.063894 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 13 22:59:10.127002 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 13 22:59:10.130558 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 13 22:59:10.166225 jq[1437]: false
Apr 13 22:59:10.168211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 22:59:10.213548 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 13 22:59:10.242097 extend-filesystems[1438]: Found loop3
Apr 13 22:59:10.242097 extend-filesystems[1438]: Found loop4
Apr 13 22:59:10.242097 extend-filesystems[1438]: Found loop5
Apr 13 22:59:10.242097 extend-filesystems[1438]: Found sr0
Apr 13 22:59:10.242097 extend-filesystems[1438]: Found vda
Apr 13 22:59:10.242097 extend-filesystems[1438]: Found vda1
Apr 13 22:59:10.242097 extend-filesystems[1438]: Found vda2
Apr 13 22:59:10.242097 extend-filesystems[1438]: Found vda3
Apr 13 22:59:10.242097 extend-filesystems[1438]: Found usr
Apr 13 22:59:10.242097 extend-filesystems[1438]: Found vda4
Apr 13 22:59:10.279835 dbus-daemon[1436]: [system] SELinux support is enabled
Apr 13 22:59:10.505787 extend-filesystems[1438]: Found vda6
Apr 13 22:59:10.505787 extend-filesystems[1438]: Found vda7
Apr 13 22:59:10.505787 extend-filesystems[1438]: Found vda9
Apr 13 22:59:10.505787 extend-filesystems[1438]: Checking size of /dev/vda9
Apr 13 22:59:10.268171 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 13 22:59:10.593452 extend-filesystems[1438]: Resized partition /dev/vda9
Apr 13 22:59:10.619194 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 13 22:59:10.619341 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1461)
Apr 13 22:59:10.445903 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 13 22:59:10.643588 extend-filesystems[1459]: resize2fs 1.47.1 (20-May-2024)
Apr 13 22:59:10.479534 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 13 22:59:10.558824 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 13 22:59:10.630393 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 13 22:59:10.644520 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 13 22:59:10.647751 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 13 22:59:10.655834 systemd[1]: Starting update-engine.service - Update Engine...
Apr 13 22:59:10.711495 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 13 22:59:10.738866 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 13 22:59:10.915818 jq[1471]: true
Apr 13 22:59:10.826678 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 13 22:59:11.031412 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 13 22:59:10.955237 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 13 22:59:11.003315 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 13 22:59:11.041894 systemd[1]: motdgen.service: Deactivated successfully.
Apr 13 22:59:11.042169 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 13 22:59:11.070267 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 13 22:59:11.110185 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 13 22:59:11.135935 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 13 22:59:11.334813 extend-filesystems[1459]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 13 22:59:11.334813 extend-filesystems[1459]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 13 22:59:11.334813 extend-filesystems[1459]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 13 22:59:11.581281 extend-filesystems[1438]: Resized filesystem in /dev/vda9
Apr 13 22:59:11.595375 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 13 22:59:11.595721 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 13 22:59:11.691392 (ntainerd)[1482]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 13 22:59:11.782214 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 13 22:59:11.782553 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 13 22:59:11.794862 jq[1480]: true
Apr 13 22:59:11.811972 update_engine[1469]: I20260413 22:59:11.810447 1469 main.cc:92] Flatcar Update Engine starting
Apr 13 22:59:12.032826 tar[1478]: linux-amd64/LICENSE
Apr 13 22:59:12.032826 tar[1478]: linux-amd64/helm
Apr 13 22:59:12.181350 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 13 22:59:12.183949 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 13 22:59:12.183998 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 13 22:59:12.204442 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 13 22:59:12.204485 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 13 22:59:12.239670 systemd[1]: Started update-engine.service - Update Engine.
Apr 13 22:59:12.388715 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 13 22:59:12.435546 update_engine[1469]: I20260413 22:59:12.338827 1469 update_check_scheduler.cc:74] Next update check in 7m12s
Apr 13 22:59:12.532771 systemd-logind[1465]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 13 22:59:12.532793 systemd-logind[1465]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 13 22:59:12.625451 systemd-logind[1465]: New seat seat0.
Apr 13 22:59:12.651758 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 13 22:59:12.881076 bash[1515]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 22:59:12.873749 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 13 22:59:12.935036 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 13 22:59:13.313068 locksmithd[1506]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 13 22:59:13.621906 sshd_keygen[1474]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 13 22:59:15.141170 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 13 22:59:15.352095 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 13 22:59:16.346364 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 13 22:59:16.378394 systemd[1]: Started sshd@0-10.0.0.131:22-10.0.0.1:55756.service - OpenSSH per-connection server daemon (10.0.0.1:55756).
Apr 13 22:59:16.527480 systemd[1]: issuegen.service: Deactivated successfully.
Apr 13 22:59:16.527792 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 13 22:59:16.739750 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 13 22:59:17.560315 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 13 22:59:17.579064 containerd[1482]: time="2026-04-13T22:59:17.578760338Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 13 22:59:17.673432 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 13 22:59:17.740882 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 13 22:59:17.767774 systemd[1]: Reached target getty.target - Login Prompts.
Apr 13 22:59:17.877317 containerd[1482]: time="2026-04-13T22:59:17.850334217Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 13 22:59:17.877317 containerd[1482]: time="2026-04-13T22:59:17.853273251Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 13 22:59:17.877317 containerd[1482]: time="2026-04-13T22:59:17.853313627Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 13 22:59:17.877317 containerd[1482]: time="2026-04-13T22:59:17.853332027Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 13 22:59:17.894419 sshd[1539]: Accepted publickey for core from 10.0.0.1 port 55756 ssh2: RSA SHA256:bOz6LmPSBJV0R+gY2r5G2pYVoFmOMJji6gPwPENABkI
Apr 13 22:59:17.899715 containerd[1482]: time="2026-04-13T22:59:17.895419181Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 13 22:59:17.899715 containerd[1482]: time="2026-04-13T22:59:17.898740692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 13 22:59:17.899715 containerd[1482]: time="2026-04-13T22:59:17.899017522Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 22:59:17.899715 containerd[1482]: time="2026-04-13T22:59:17.899040957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 13 22:59:17.899715 containerd[1482]: time="2026-04-13T22:59:17.899524245Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 22:59:17.899715 containerd[1482]: time="2026-04-13T22:59:17.899550089Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 13 22:59:17.899715 containerd[1482]: time="2026-04-13T22:59:17.899568509Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 22:59:17.899715 containerd[1482]: time="2026-04-13T22:59:17.899603096Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 13 22:59:17.900038 containerd[1482]: time="2026-04-13T22:59:17.899759631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 13 22:59:17.900195 containerd[1482]: time="2026-04-13T22:59:17.900117629Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 13 22:59:17.900356 containerd[1482]: time="2026-04-13T22:59:17.900307305Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 22:59:17.900356 containerd[1482]: time="2026-04-13T22:59:17.900339308Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 13 22:59:17.900485 containerd[1482]: time="2026-04-13T22:59:17.900450774Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 13 22:59:17.900650 containerd[1482]: time="2026-04-13T22:59:17.900603633Z" level=info msg="metadata content store policy set" policy=shared
Apr 13 22:59:17.917594 sshd[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 22:59:17.954261 containerd[1482]: time="2026-04-13T22:59:17.953605674Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 13 22:59:18.005751 containerd[1482]: time="2026-04-13T22:59:17.962983031Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 13 22:59:18.005751 containerd[1482]: time="2026-04-13T22:59:17.963105966Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 13 22:59:18.005751 containerd[1482]: time="2026-04-13T22:59:17.963236002Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 13 22:59:18.005751 containerd[1482]: time="2026-04-13T22:59:17.963336711Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 13 22:59:18.005751 containerd[1482]: time="2026-04-13T22:59:18.004719062Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 13 22:59:18.017416 containerd[1482]: time="2026-04-13T22:59:18.006884725Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 13 22:59:18.017798 containerd[1482]: time="2026-04-13T22:59:18.017665889Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 13 22:59:18.017798 containerd[1482]: time="2026-04-13T22:59:18.017735899Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 13 22:59:18.017798 containerd[1482]: time="2026-04-13T22:59:18.017757941Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 13 22:59:18.018080 containerd[1482]: time="2026-04-13T22:59:18.017924107Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 13 22:59:18.018080 containerd[1482]: time="2026-04-13T22:59:18.017948020Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 13 22:59:18.018080 containerd[1482]: time="2026-04-13T22:59:18.017962773Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 13 22:59:18.018080 containerd[1482]: time="2026-04-13T22:59:18.018012524Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 13 22:59:18.018080 containerd[1482]: time="2026-04-13T22:59:18.018030080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 13 22:59:18.018080 containerd[1482]: time="2026-04-13T22:59:18.018044406Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 13 22:59:18.018450 containerd[1482]: time="2026-04-13T22:59:18.018282525Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 13 22:59:18.018450 containerd[1482]: time="2026-04-13T22:59:18.018302052Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 13 22:59:18.018450 containerd[1482]: time="2026-04-13T22:59:18.018369707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 13 22:59:18.018450 containerd[1482]: time="2026-04-13T22:59:18.018389188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 13 22:59:18.018450 containerd[1482]: time="2026-04-13T22:59:18.018405421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 13 22:59:18.018687 containerd[1482]: time="2026-04-13T22:59:18.018422938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 13 22:59:18.018687 containerd[1482]: time="2026-04-13T22:59:18.018627351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 13 22:59:18.018687 containerd[1482]: time="2026-04-13T22:59:18.018645609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 13 22:59:18.018881 containerd[1482]: time="2026-04-13T22:59:18.018660225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 13 22:59:18.018881 containerd[1482]: time="2026-04-13T22:59:18.018810718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 13 22:59:18.018881 containerd[1482]: time="2026-04-13T22:59:18.018828667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 13 22:59:18.019093 containerd[1482]: time="2026-04-13T22:59:18.018981233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 13 22:59:18.019093 containerd[1482]: time="2026-04-13T22:59:18.018999952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 13 22:59:18.019093 containerd[1482]: time="2026-04-13T22:59:18.019014942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 13 22:59:18.019416 containerd[1482]: time="2026-04-13T22:59:18.019237440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 13 22:59:18.019416 containerd[1482]: time="2026-04-13T22:59:18.019262519Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 13 22:59:18.019416 containerd[1482]: time="2026-04-13T22:59:18.019335754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 13 22:59:18.019416 containerd[1482]: time="2026-04-13T22:59:18.019354350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..."
type=io.containerd.grpc.v1 Apr 13 22:59:18.019416 containerd[1482]: time="2026-04-13T22:59:18.019370743Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 13 22:59:18.019844 containerd[1482]: time="2026-04-13T22:59:18.019690363Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 13 22:59:18.020012 containerd[1482]: time="2026-04-13T22:59:18.019938703Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 13 22:59:18.020012 containerd[1482]: time="2026-04-13T22:59:18.019957559Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 13 22:59:18.020012 containerd[1482]: time="2026-04-13T22:59:18.019973435Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 13 22:59:18.020012 containerd[1482]: time="2026-04-13T22:59:18.019984767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 13 22:59:18.082973 containerd[1482]: time="2026-04-13T22:59:18.046237562Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 13 22:59:18.082973 containerd[1482]: time="2026-04-13T22:59:18.046563854Z" level=info msg="NRI interface is disabled by configuration." Apr 13 22:59:18.082973 containerd[1482]: time="2026-04-13T22:59:18.046588512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 13 22:59:18.096239 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Apr 13 22:59:18.154839 containerd[1482]: time="2026-04-13T22:59:18.098209719Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 13 22:59:18.154839 containerd[1482]: time="2026-04-13T22:59:18.098286712Z" level=info msg="Connect containerd service" Apr 13 22:59:18.154839 containerd[1482]: time="2026-04-13T22:59:18.098445782Z" level=info msg="using legacy CRI server" Apr 13 22:59:18.154839 containerd[1482]: time="2026-04-13T22:59:18.098452945Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 13 22:59:18.154839 containerd[1482]: time="2026-04-13T22:59:18.104764254Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 13 22:59:18.238664 containerd[1482]: time="2026-04-13T22:59:18.238516503Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 22:59:18.241899 containerd[1482]: time="2026-04-13T22:59:18.241097570Z" level=info msg="Start subscribing containerd event" Apr 13 22:59:18.241899 containerd[1482]: time="2026-04-13T22:59:18.241419688Z" level=info msg="Start recovering state" Apr 13 22:59:18.241899 containerd[1482]: time="2026-04-13T22:59:18.241530366Z" level=info msg="Start event monitor" Apr 13 22:59:18.241899 containerd[1482]: time="2026-04-13T22:59:18.241578422Z" level=info msg="Start snapshots syncer" Apr 
13 22:59:18.241899 containerd[1482]: time="2026-04-13T22:59:18.241591403Z" level=info msg="Start cni network conf syncer for default" Apr 13 22:59:18.241899 containerd[1482]: time="2026-04-13T22:59:18.241602227Z" level=info msg="Start streaming server" Apr 13 22:59:18.241899 containerd[1482]: time="2026-04-13T22:59:18.241734207Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 13 22:59:18.241899 containerd[1482]: time="2026-04-13T22:59:18.241781389Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 13 22:59:18.241899 containerd[1482]: time="2026-04-13T22:59:18.241828836Z" level=info msg="containerd successfully booted in 0.685480s" Apr 13 22:59:18.277816 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 13 22:59:18.448557 systemd[1]: Started containerd.service - containerd container runtime. Apr 13 22:59:18.558780 systemd-logind[1465]: New session 1 of user core. Apr 13 22:59:19.099423 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 13 22:59:19.372993 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 13 22:59:19.601582 (systemd)[1554]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 13 22:59:21.860092 systemd[1554]: Queued start job for default target default.target. Apr 13 22:59:21.958686 systemd[1554]: Created slice app.slice - User Application Slice. Apr 13 22:59:21.958737 systemd[1554]: Reached target paths.target - Paths. Apr 13 22:59:21.958755 systemd[1554]: Reached target timers.target - Timers. Apr 13 22:59:22.074887 systemd[1554]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 13 22:59:22.269630 systemd[1554]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 13 22:59:22.270870 systemd[1554]: Reached target sockets.target - Sockets. Apr 13 22:59:22.270892 systemd[1554]: Reached target basic.target - Basic System. 
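A note on the `failed to load cni during init` error containerd logged just before booting: `no network config found in /etc/cni/net.d` is expected on a freshly provisioned node, because no CNI plugin has installed a network configuration yet. The CRI plugin keeps retrying via the "cni network conf syncer" started above, so the error clears once a conf file appears. As an illustration only (the filename, network name, and subnet below are placeholders, not values from this host), a minimal conf using the standard `bridge` and `host-local` CNI plugins dropped into `/etc/cni/net.d/` would satisfy the syncer:

```json
{
  "cniVersion": "0.4.0",
  "name": "example-bridge",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/24"
  }
}
```

In practice this file is written by whatever network add-on (Flannel, Calico, etc.) the cluster deploys, which is why the error is transient rather than fatal.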
Apr 13 22:59:22.270951 systemd[1554]: Reached target default.target - Main User Target. Apr 13 22:59:22.270985 systemd[1554]: Startup finished in 2.349s. Apr 13 22:59:22.273323 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 13 22:59:22.675927 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 13 22:59:23.609498 tar[1478]: linux-amd64/README.md Apr 13 22:59:23.615454 systemd[1]: Started sshd@1-10.0.0.131:22-10.0.0.1:45514.service - OpenSSH per-connection server daemon (10.0.0.1:45514). Apr 13 22:59:24.111642 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 13 22:59:24.180486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 22:59:24.324632 (kubelet)[1576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 22:59:24.348342 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 13 22:59:24.353089 systemd[1]: Startup finished in 7.500s (kernel) + 1min 36.086s (initrd) + 4min 42.811s (userspace) = 6min 26.398s. Apr 13 22:59:25.385839 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 45514 ssh2: RSA SHA256:bOz6LmPSBJV0R+gY2r5G2pYVoFmOMJji6gPwPENABkI Apr 13 22:59:25.672051 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 22:59:25.910110 systemd-logind[1465]: New session 2 of user core. Apr 13 22:59:25.978349 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 13 22:59:26.572629 sshd[1569]: pam_unix(sshd:session): session closed for user core Apr 13 22:59:26.719718 systemd[1]: Started sshd@2-10.0.0.131:22-10.0.0.1:34334.service - OpenSSH per-connection server daemon (10.0.0.1:34334). Apr 13 22:59:26.723445 systemd[1]: sshd@1-10.0.0.131:22-10.0.0.1:45514.service: Deactivated successfully. Apr 13 22:59:26.731721 systemd[1]: session-2.scope: Deactivated successfully. 
Apr 13 22:59:26.849882 systemd-logind[1465]: Session 2 logged out. Waiting for processes to exit. Apr 13 22:59:26.924862 systemd-logind[1465]: Removed session 2. Apr 13 22:59:27.177099 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 34334 ssh2: RSA SHA256:bOz6LmPSBJV0R+gY2r5G2pYVoFmOMJji6gPwPENABkI Apr 13 22:59:27.189027 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 22:59:27.531156 systemd-logind[1465]: New session 3 of user core. Apr 13 22:59:27.568750 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 13 22:59:27.839712 sshd[1588]: pam_unix(sshd:session): session closed for user core Apr 13 22:59:27.931572 systemd[1]: sshd@2-10.0.0.131:22-10.0.0.1:34334.service: Deactivated successfully. Apr 13 22:59:27.983422 systemd[1]: session-3.scope: Deactivated successfully. Apr 13 22:59:27.987849 systemd-logind[1465]: Session 3 logged out. Waiting for processes to exit. Apr 13 22:59:27.998592 systemd-logind[1465]: Removed session 3. Apr 13 22:59:28.057040 systemd[1]: Started sshd@3-10.0.0.131:22-10.0.0.1:34336.service - OpenSSH per-connection server daemon (10.0.0.1:34336). Apr 13 22:59:28.467316 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 34336 ssh2: RSA SHA256:bOz6LmPSBJV0R+gY2r5G2pYVoFmOMJji6gPwPENABkI Apr 13 22:59:28.469704 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 22:59:28.653642 systemd-logind[1465]: New session 4 of user core. Apr 13 22:59:28.667789 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 13 22:59:29.163260 sshd[1597]: pam_unix(sshd:session): session closed for user core Apr 13 22:59:29.510365 systemd[1]: sshd@3-10.0.0.131:22-10.0.0.1:34336.service: Deactivated successfully. Apr 13 22:59:29.524668 systemd[1]: session-4.scope: Deactivated successfully. Apr 13 22:59:29.593535 systemd-logind[1465]: Session 4 logged out. Waiting for processes to exit. 
Apr 13 22:59:29.741494 systemd[1]: Started sshd@4-10.0.0.131:22-10.0.0.1:34338.service - OpenSSH per-connection server daemon (10.0.0.1:34338). Apr 13 22:59:29.769064 systemd-logind[1465]: Removed session 4. Apr 13 22:59:29.973303 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 34338 ssh2: RSA SHA256:bOz6LmPSBJV0R+gY2r5G2pYVoFmOMJji6gPwPENABkI Apr 13 22:59:29.975823 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 22:59:30.097885 systemd-logind[1465]: New session 5 of user core. Apr 13 22:59:30.107560 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 13 22:59:30.519366 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 13 22:59:30.522439 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 22:59:30.650216 sudo[1609]: pam_unix(sudo:session): session closed for user root Apr 13 22:59:30.774364 sshd[1605]: pam_unix(sshd:session): session closed for user core Apr 13 22:59:30.905856 kubelet[1576]: E0413 22:59:30.903425 1576 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 22:59:30.923635 systemd[1]: sshd@4-10.0.0.131:22-10.0.0.1:34338.service: Deactivated successfully. Apr 13 22:59:31.123208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 22:59:31.123730 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 22:59:31.126798 systemd[1]: kubelet.service: Consumed 4.640s CPU time. Apr 13 22:59:31.139522 systemd[1]: session-5.scope: Deactivated successfully. Apr 13 22:59:31.273543 systemd-logind[1465]: Session 5 logged out. Waiting for processes to exit. 
Apr 13 22:59:31.467197 systemd[1]: Started sshd@5-10.0.0.131:22-10.0.0.1:34352.service - OpenSSH per-connection server daemon (10.0.0.1:34352). Apr 13 22:59:31.494833 systemd-logind[1465]: Removed session 5. Apr 13 22:59:32.501734 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 34352 ssh2: RSA SHA256:bOz6LmPSBJV0R+gY2r5G2pYVoFmOMJji6gPwPENABkI Apr 13 22:59:32.695520 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 22:59:32.867244 systemd-logind[1465]: New session 6 of user core. Apr 13 22:59:32.936673 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 13 22:59:33.818242 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 13 22:59:33.877333 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 22:59:34.138509 sudo[1619]: pam_unix(sudo:session): session closed for user root Apr 13 22:59:34.280571 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 13 22:59:34.280944 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 22:59:36.684178 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 13 22:59:37.002092 auditctl[1622]: No rules Apr 13 22:59:36.986080 systemd[1]: audit-rules.service: Deactivated successfully. Apr 13 22:59:36.986416 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 13 22:59:37.116499 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 22:59:37.689701 augenrules[1640]: No rules Apr 13 22:59:37.797025 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Apr 13 22:59:37.858549 sudo[1618]: pam_unix(sudo:session): session closed for user root Apr 13 22:59:37.866869 sshd[1615]: pam_unix(sshd:session): session closed for user core Apr 13 22:59:38.222665 systemd[1]: sshd@5-10.0.0.131:22-10.0.0.1:34352.service: Deactivated successfully. Apr 13 22:59:38.288424 systemd[1]: session-6.scope: Deactivated successfully. Apr 13 22:59:38.312933 systemd-logind[1465]: Session 6 logged out. Waiting for processes to exit. Apr 13 22:59:38.397376 systemd[1]: Started sshd@6-10.0.0.131:22-10.0.0.1:58238.service - OpenSSH per-connection server daemon (10.0.0.1:58238). Apr 13 22:59:38.435726 systemd-logind[1465]: Removed session 6. Apr 13 22:59:39.057459 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 58238 ssh2: RSA SHA256:bOz6LmPSBJV0R+gY2r5G2pYVoFmOMJji6gPwPENABkI Apr 13 22:59:39.108920 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 22:59:39.439826 systemd-logind[1465]: New session 7 of user core. Apr 13 22:59:39.476784 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 13 22:59:39.904976 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 13 22:59:39.917246 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 22:59:41.244355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 13 22:59:43.049247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 22:59:50.965305 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 13 22:59:50.969982 (dockerd)[1675]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 13 22:59:52.347833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 22:59:52.482482 (kubelet)[1680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 22:59:57.993997 update_engine[1469]: I20260413 22:59:57.983471 1469 update_attempter.cc:509] Updating boot flags... Apr 13 22:59:58.527697 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1701) Apr 13 22:59:59.458906 kubelet[1680]: E0413 22:59:59.456196 1680 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 22:59:59.516959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 22:59:59.517392 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 22:59:59.517740 systemd[1]: kubelet.service: Consumed 3.930s CPU time. Apr 13 23:00:00.008075 dockerd[1675]: time="2026-04-13T22:59:59.986704116Z" level=info msg="Starting up" Apr 13 23:00:00.044036 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1701) Apr 13 23:00:06.180703 dockerd[1675]: time="2026-04-13T23:00:06.123902515Z" level=info msg="Loading containers: start." Apr 13 23:00:09.910947 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 13 23:00:10.081489 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:00:13.693751 kernel: Initializing XFRM netlink socket Apr 13 23:00:14.440360 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 23:00:14.496010 (kubelet)[1784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:00:18.442901 kubelet[1784]: E0413 23:00:18.436773 1784 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:00:18.511805 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:00:18.516998 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:00:18.536623 systemd[1]: kubelet.service: Consumed 1.975s CPU time. Apr 13 23:00:20.016108 systemd-networkd[1389]: docker0: Link UP Apr 13 23:00:20.936734 dockerd[1675]: time="2026-04-13T23:00:20.936381854Z" level=info msg="Loading containers: done." Apr 13 23:00:22.928662 dockerd[1675]: time="2026-04-13T23:00:22.927061541Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 13 23:00:22.928662 dockerd[1675]: time="2026-04-13T23:00:22.928106372Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 13 23:00:22.964556 dockerd[1675]: time="2026-04-13T23:00:22.941551241Z" level=info msg="Daemon has completed initialization" Apr 13 23:00:23.285016 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3090436683-merged.mount: Deactivated successfully. Apr 13 23:00:26.105256 dockerd[1675]: time="2026-04-13T23:00:26.040998640Z" level=info msg="API listen on /run/docker.sock" Apr 13 23:00:26.207533 systemd[1]: Started docker.service - Docker Application Container Engine. 
Apr 13 23:00:29.206072 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 13 23:00:29.901638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:00:36.548723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:00:36.713398 (kubelet)[1878]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:00:39.714695 kubelet[1878]: E0413 23:00:39.712790 1878 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:00:39.729601 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:00:39.730059 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:00:39.739648 systemd[1]: kubelet.service: Consumed 2.587s CPU time. Apr 13 23:00:40.208427 containerd[1482]: time="2026-04-13T23:00:40.208296550Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\"" Apr 13 23:00:50.293009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 13 23:00:54.949997 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:00:58.837001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount836136496.mount: Deactivated successfully. Apr 13 23:01:08.321431 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 23:01:08.552620 (kubelet)[1913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:01:13.123351 kubelet[1913]: E0413 23:01:13.122779 1913 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:01:13.158615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:01:13.158851 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:01:13.159328 systemd[1]: kubelet.service: Consumed 6.928s CPU time. Apr 13 23:01:25.987902 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 13 23:01:26.779109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:01:52.307978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:01:52.592294 (kubelet)[1970]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:02:34.174760 kubelet[1970]: E0413 23:02:34.148941 1970 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:02:34.203990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:02:34.214748 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:02:34.218636 systemd[1]: kubelet.service: Consumed 36.831s CPU time. 
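The repeated kubelet exits above (`open /var/lib/kubelet/config.yaml: no such file or directory`, followed by `status=1/FAILURE` and a scheduled restart) are the usual pre-bootstrap crash loop: that config file is generated by `kubeadm init` or `kubeadm join`, so until one of those runs, kubelet fails on startup and systemd restarts it on a timer — hence the climbing restart counters in this log. One way to keep the unit from looping, sketched as a hypothetical systemd drop-in (the path and its use here are an assumption for illustration, not part of this host's configuration), is a start condition on the file:

```ini
# /etc/systemd/system/kubelet.service.d/20-wait-for-config.conf (hypothetical)
[Unit]
# Skip starting kubelet until kubeadm has generated its config file;
# a failed condition marks the start as skipped, not failed.
ConditionPathExists=/var/lib/kubelet/config.yaml
```

Stock kubeadm deployments instead simply tolerate the loop, since `kubeadm init`/`join` creates the file and the next scheduled restart then succeeds.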
Apr 13 23:02:34.824396 containerd[1482]: time="2026-04-13T23:02:34.816987214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:02:34.842675 containerd[1482]: time="2026-04-13T23:02:34.834678885Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=29988857" Apr 13 23:02:34.860094 containerd[1482]: time="2026-04-13T23:02:34.859058285Z" level=info msg="ImageCreate event name:\"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:02:36.225497 containerd[1482]: time="2026-04-13T23:02:36.219091003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:02:36.247690 containerd[1482]: time="2026-04-13T23:02:36.246381692Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"29986018\" in 1m56.037908985s" Apr 13 23:02:36.247690 containerd[1482]: time="2026-04-13T23:02:36.247244975Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\"" Apr 13 23:02:36.341682 containerd[1482]: time="2026-04-13T23:02:36.252596263Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\"" Apr 13 23:02:46.943941 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Apr 13 23:02:47.971059 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:02:55.577934 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:02:56.079728 (kubelet)[1994]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:02:58.158093 kubelet[1994]: E0413 23:02:58.155853 1994 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:02:58.242252 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:02:58.242884 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:02:58.247412 systemd[1]: kubelet.service: Consumed 2.945s CPU time. 
Apr 13 23:03:02.729797 containerd[1482]: time="2026-04-13T23:03:02.728469354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:03:02.741447 containerd[1482]: time="2026-04-13T23:03:02.736478745Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=26021841" Apr 13 23:03:02.741447 containerd[1482]: time="2026-04-13T23:03:02.739060996Z" level=info msg="ImageCreate event name:\"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:03:02.808922 containerd[1482]: time="2026-04-13T23:03:02.804830717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:03:02.814383 containerd[1482]: time="2026-04-13T23:03:02.814270358Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"27552094\" in 26.56162836s" Apr 13 23:03:02.814628 containerd[1482]: time="2026-04-13T23:03:02.814395509Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\"" Apr 13 23:03:02.903602 containerd[1482]: time="2026-04-13T23:03:02.897518671Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\"" Apr 13 23:03:08.513545 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. 
Apr 13 23:03:08.547276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:03:09.794794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:03:09.850294 (kubelet)[2015]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:03:10.059791 containerd[1482]: time="2026-04-13T23:03:10.050916039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:03:10.059791 containerd[1482]: time="2026-04-13T23:03:10.059179635Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=20162685" Apr 13 23:03:10.080049 containerd[1482]: time="2026-04-13T23:03:10.079923072Z" level=info msg="ImageCreate event name:\"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:03:10.109546 containerd[1482]: time="2026-04-13T23:03:10.107919519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:03:10.109546 containerd[1482]: time="2026-04-13T23:03:10.116981677Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"21692956\" in 7.218415274s" Apr 13 23:03:10.109546 containerd[1482]: time="2026-04-13T23:03:10.117290338Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference 
\"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\"" Apr 13 23:03:10.126799 containerd[1482]: time="2026-04-13T23:03:10.126692044Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\"" Apr 13 23:03:10.164013 kubelet[2015]: E0413 23:03:10.163410 2015 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:03:10.202761 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:03:10.205038 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:03:21.523546 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 13 23:03:21.568606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:03:28.317214 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:03:28.369648 (kubelet)[2036]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:03:31.065352 kubelet[2036]: E0413 23:03:31.034748 2036 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:03:31.097960 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:03:31.098575 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:03:31.161093 systemd[1]: kubelet.service: Consumed 4.936s CPU time. 
Apr 13 23:03:41.181110 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 13 23:03:41.209012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:03:45.006011 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:03:45.113385 (kubelet)[2051]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:03:51.588086 kubelet[2051]: E0413 23:03:51.581043 2051 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:03:51.796459 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:03:51.796919 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:03:51.821584 systemd[1]: kubelet.service: Consumed 6.248s CPU time. Apr 13 23:04:02.524397 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Apr 13 23:04:03.132822 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:04:18.334766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:04:19.170099 (kubelet)[2072]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:04:20.618933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3676553915.mount: Deactivated successfully. 
Apr 13 23:04:24.044668 kubelet[2072]: E0413 23:04:24.041159 2072 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:04:24.193955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:04:24.215634 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:04:24.252840 systemd[1]: kubelet.service: Consumed 9.536s CPU time. Apr 13 23:04:34.287177 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Apr 13 23:04:34.333595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:04:37.887173 containerd[1482]: time="2026-04-13T23:04:37.836465401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:04:37.887173 containerd[1482]: time="2026-04-13T23:04:37.870818307Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=31828657" Apr 13 23:04:37.946655 containerd[1482]: time="2026-04-13T23:04:37.945764828Z" level=info msg="ImageCreate event name:\"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:04:38.448522 containerd[1482]: time="2026-04-13T23:04:38.447580559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:04:38.507499 containerd[1482]: time="2026-04-13T23:04:38.504873898Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id 
\"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\", repo tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"31827782\" in 1m28.378097084s" Apr 13 23:04:38.531499 containerd[1482]: time="2026-04-13T23:04:38.507275584Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\"" Apr 13 23:04:38.531499 containerd[1482]: time="2026-04-13T23:04:38.530950069Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 13 23:04:39.973841 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:04:40.010750 (kubelet)[2092]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:04:47.409867 kubelet[2092]: E0413 23:04:47.408196 2092 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:04:47.527726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:04:47.528425 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:04:47.536815 systemd[1]: kubelet.service: Consumed 6.830s CPU time. Apr 13 23:04:47.970959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1230601734.mount: Deactivated successfully. Apr 13 23:04:58.295540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Apr 13 23:04:58.961057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 13 23:05:13.390558 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:05:13.634668 (kubelet)[2121]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:05:28.646549 kubelet[2121]: E0413 23:05:28.645927 2121 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:05:28.676581 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:05:28.776782 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:05:28.860706 systemd[1]: kubelet.service: Consumed 16.714s CPU time. Apr 13 23:05:39.556485 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Apr 13 23:05:40.481926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:05:58.706037 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:05:58.797499 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:06:18.893338 kubelet[2138]: E0413 23:06:18.768663 2138 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:06:18.914812 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:06:18.923722 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 13 23:06:18.929551 systemd[1]: kubelet.service: Consumed 21.659s CPU time. Apr 13 23:06:25.050063 update_engine[1469]: I20260413 23:06:25.039536 1469 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 13 23:06:25.320226 update_engine[1469]: I20260413 23:06:25.055039 1469 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 13 23:06:25.320226 update_engine[1469]: I20260413 23:06:25.185062 1469 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 13 23:06:25.364739 update_engine[1469]: I20260413 23:06:25.320912 1469 omaha_request_params.cc:62] Current group set to lts Apr 13 23:06:25.403489 update_engine[1469]: I20260413 23:06:25.387168 1469 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 13 23:06:25.403489 update_engine[1469]: I20260413 23:06:25.397693 1469 update_attempter.cc:643] Scheduling an action processor start. Apr 13 23:06:25.419879 locksmithd[1506]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 13 23:06:25.543090 update_engine[1469]: I20260413 23:06:25.421720 1469 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 13 23:06:25.543090 update_engine[1469]: I20260413 23:06:25.502912 1469 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 13 23:06:25.543090 update_engine[1469]: I20260413 23:06:25.527001 1469 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 13 23:06:25.543090 update_engine[1469]: I20260413 23:06:25.528793 1469 omaha_request_action.cc:272] Request: Apr 13 23:06:25.543090 update_engine[1469]: Apr 13 23:06:25.543090 update_engine[1469]: Apr 13 23:06:25.543090 update_engine[1469]: Apr 13 23:06:25.543090 update_engine[1469]: Apr 13 23:06:25.543090 update_engine[1469]: Apr 13 23:06:25.543090 update_engine[1469]: Apr 13 23:06:25.543090 update_engine[1469]: Apr 13 23:06:25.543090 
update_engine[1469]: Apr 13 23:06:25.543090 update_engine[1469]: I20260413 23:06:25.529325 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 23:06:26.138943 update_engine[1469]: I20260413 23:06:26.112672 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 23:06:26.482331 update_engine[1469]: I20260413 23:06:26.466640 1469 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 13 23:06:26.707843 update_engine[1469]: E20260413 23:06:26.674144 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 23:06:26.892966 update_engine[1469]: I20260413 23:06:26.851377 1469 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 13 23:06:29.366803 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Apr 13 23:06:30.288540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:06:36.985572 update_engine[1469]: I20260413 23:06:36.969014 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 23:06:37.290048 update_engine[1469]: I20260413 23:06:37.167927 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 23:06:37.290048 update_engine[1469]: I20260413 23:06:37.218575 1469 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 13 23:06:37.290048 update_engine[1469]: E20260413 23:06:37.249847 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 23:06:37.290048 update_engine[1469]: I20260413 23:06:37.263051 1469 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 13 23:06:43.000029 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1216868073 wd_nsec: 1216868586 Apr 13 23:06:45.440616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 23:06:45.873908 (kubelet)[2195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:06:47.995755 update_engine[1469]: I20260413 23:06:47.976650 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 23:06:48.170677 update_engine[1469]: I20260413 23:06:48.017241 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 23:06:48.170677 update_engine[1469]: I20260413 23:06:48.034989 1469 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 13 23:06:48.170677 update_engine[1469]: E20260413 23:06:48.075959 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 23:06:48.170677 update_engine[1469]: I20260413 23:06:48.092815 1469 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 13 23:06:58.001478 update_engine[1469]: I20260413 23:06:57.975739 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 23:06:58.117768 update_engine[1469]: I20260413 23:06:58.111060 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 23:06:58.155598 update_engine[1469]: I20260413 23:06:58.131388 1469 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 13 23:06:58.172750 update_engine[1469]: E20260413 23:06:58.161249 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 23:06:58.172750 update_engine[1469]: I20260413 23:06:58.161653 1469 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 13 23:06:58.172750 update_engine[1469]: I20260413 23:06:58.163481 1469 omaha_request_action.cc:617] Omaha request response: Apr 13 23:06:58.172750 update_engine[1469]: E20260413 23:06:58.164805 1469 omaha_request_action.cc:636] Omaha request network transfer failed. 
Apr 13 23:06:58.352382 update_engine[1469]: I20260413 23:06:58.178625 1469 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 13 23:06:58.352382 update_engine[1469]: I20260413 23:06:58.203996 1469 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 13 23:06:58.352382 update_engine[1469]: I20260413 23:06:58.208640 1469 update_attempter.cc:306] Processing Done. Apr 13 23:06:58.352382 update_engine[1469]: E20260413 23:06:58.209105 1469 update_attempter.cc:619] Update failed. Apr 13 23:06:58.352382 update_engine[1469]: I20260413 23:06:58.215581 1469 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 13 23:06:58.352382 update_engine[1469]: I20260413 23:06:58.219836 1469 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 13 23:06:58.352382 update_engine[1469]: I20260413 23:06:58.230248 1469 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 13 23:06:58.352382 update_engine[1469]: I20260413 23:06:58.285975 1469 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 13 23:06:58.352382 update_engine[1469]: I20260413 23:06:58.320564 1469 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 13 23:06:58.352382 update_engine[1469]: I20260413 23:06:58.320768 1469 omaha_request_action.cc:272] Request: Apr 13 23:06:58.352382 update_engine[1469]: Apr 13 23:06:58.352382 update_engine[1469]: Apr 13 23:06:58.352382 update_engine[1469]: Apr 13 23:06:58.352382 update_engine[1469]: Apr 13 23:06:58.352382 update_engine[1469]: Apr 13 23:06:58.352382 update_engine[1469]: Apr 13 23:06:58.352382 update_engine[1469]: I20260413 23:06:58.320780 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 23:06:59.394597 update_engine[1469]: I20260413 23:06:58.453092 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 23:06:59.394597 update_engine[1469]: I20260413 23:06:58.522562 1469 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 13 23:06:59.394597 update_engine[1469]: E20260413 23:06:58.597781 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 23:06:59.394597 update_engine[1469]: I20260413 23:06:58.609520 1469 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 13 23:06:59.394597 update_engine[1469]: I20260413 23:06:58.611354 1469 omaha_request_action.cc:617] Omaha request response: Apr 13 23:06:59.394597 update_engine[1469]: I20260413 23:06:58.611506 1469 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 13 23:06:59.394597 update_engine[1469]: I20260413 23:06:58.611515 1469 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 13 23:06:59.394597 update_engine[1469]: I20260413 23:06:58.611521 1469 update_attempter.cc:306] Processing Done. Apr 13 23:06:59.394597 update_engine[1469]: I20260413 23:06:58.611558 1469 update_attempter.cc:310] Error event sent. 
Apr 13 23:06:59.394597 update_engine[1469]: I20260413 23:06:58.611702 1469 update_check_scheduler.cc:74] Next update check in 47m34s Apr 13 23:06:59.418649 locksmithd[1506]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 13 23:06:59.418649 locksmithd[1506]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 13 23:07:13.971337 containerd[1482]: time="2026-04-13T23:07:13.958602385Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 13 23:07:14.115596 containerd[1482]: time="2026-04-13T23:07:13.958541500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:07:14.814792 containerd[1482]: time="2026-04-13T23:07:14.807892211Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:07:17.690093 containerd[1482]: time="2026-04-13T23:07:17.681107466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:07:18.443458 containerd[1482]: time="2026-04-13T23:07:18.411935970Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2m39.878208184s" Apr 13 23:07:18.443458 containerd[1482]: time="2026-04-13T23:07:18.415811288Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image 
reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 13 23:07:18.609758 containerd[1482]: time="2026-04-13T23:07:18.579292759Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 13 23:07:30.404611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1332050751.mount: Deactivated successfully. Apr 13 23:07:30.432208 kubelet[2195]: E0413 23:07:30.411953 2195 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:07:30.452536 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:07:30.478504 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:07:30.498229 systemd[1]: kubelet.service: Consumed 37.093s CPU time. 
Apr 13 23:07:30.647596 containerd[1482]: time="2026-04-13T23:07:30.646273566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:07:30.664384 containerd[1482]: time="2026-04-13T23:07:30.664262177Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 13 23:07:30.742866 containerd[1482]: time="2026-04-13T23:07:30.739796270Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:07:31.116105 containerd[1482]: time="2026-04-13T23:07:31.104718986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:07:31.133398 containerd[1482]: time="2026-04-13T23:07:31.129259063Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 12.525689454s" Apr 13 23:07:31.133398 containerd[1482]: time="2026-04-13T23:07:31.129414893Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 13 23:07:31.142665 containerd[1482]: time="2026-04-13T23:07:31.142588334Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 13 23:07:35.792373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1318135245.mount: Deactivated successfully. Apr 13 23:07:40.702413 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15. 
Apr 13 23:07:40.861927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:07:43.673853 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:07:43.831546 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:07:45.143954 kubelet[2232]: E0413 23:07:45.143373 2232 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:07:45.226293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:07:45.243693 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:07:45.278390 systemd[1]: kubelet.service: Consumed 2.462s CPU time. Apr 13 23:07:56.515717 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16. Apr 13 23:07:57.076066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:07:59.272468 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Apr 13 23:08:01.562627 systemd-tmpfiles[2242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 13 23:08:01.571606 systemd-tmpfiles[2242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 13 23:08:01.572365 systemd-tmpfiles[2242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 13 23:08:01.572542 systemd-tmpfiles[2242]: ACLs are not supported, ignoring. Apr 13 23:08:01.572592 systemd-tmpfiles[2242]: ACLs are not supported, ignoring. 
Apr 13 23:08:01.643355 systemd-tmpfiles[2242]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 23:08:01.643377 systemd-tmpfiles[2242]: Skipping /boot Apr 13 23:08:02.383288 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Apr 13 23:08:02.394709 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Apr 13 23:08:02.452748 systemd[1]: systemd-tmpfiles-clean.service: Consumed 1.700s CPU time. Apr 13 23:08:17.046616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:08:17.285634 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:08:36.959082 kubelet[2252]: E0413 23:08:36.957837 2252 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:08:37.010509 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:08:37.021103 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:08:37.037888 systemd[1]: kubelet.service: Consumed 20.340s CPU time. Apr 13 23:08:47.414407 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17. Apr 13 23:08:47.821250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:08:54.391277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 23:08:54.534008 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:08:56.825233 containerd[1482]: time="2026-04-13T23:08:56.813264177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:08:56.894577 containerd[1482]: time="2026-04-13T23:08:56.864371138Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718278" Apr 13 23:08:57.097709 containerd[1482]: time="2026-04-13T23:08:57.087559474Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:08:57.292859 containerd[1482]: time="2026-04-13T23:08:57.291710857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:08:57.295752 containerd[1482]: time="2026-04-13T23:08:57.295702515Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1m26.153060639s" Apr 13 23:08:57.302674 containerd[1482]: time="2026-04-13T23:08:57.295865583Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 13 23:08:58.781211 kubelet[2316]: E0413 23:08:58.706742 2316 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file 
/var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:08:58.794420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:08:58.794635 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:08:58.822459 systemd[1]: kubelet.service: Consumed 5.405s CPU time. Apr 13 23:09:10.046708 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18. Apr 13 23:09:11.504064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:09:24.426141 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:09:24.464556 (kubelet)[2361]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:09:30.296937 kubelet[2361]: E0413 23:09:30.288748 2361 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:09:30.420360 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:09:30.487096 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:09:30.552049 systemd[1]: kubelet.service: Consumed 7.938s CPU time, 105.0M memory peak, 0B memory swap peak. Apr 13 23:09:42.263987 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 19. Apr 13 23:09:42.412220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:10:24.169737 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 23:10:27.698331 (kubelet)[2380]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:11:12.850806 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:11:13.183547 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 23:11:13.202725 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:11:13.311884 systemd[1]: kubelet.service: Consumed 46.763s CPU time, 112.9M memory peak, 0B memory swap peak. Apr 13 23:11:19.388863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:11:30.354495 systemd[1]: Reloading requested from client PID 2398 ('systemctl') (unit session-7.scope)... Apr 13 23:11:30.362032 systemd[1]: Reloading... Apr 13 23:11:39.450276 zram_generator::config[2437]: No configuration found. Apr 13 23:12:01.096567 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 23:13:14.932943 systemd[1]: Reloading finished in 104558 ms. Apr 13 23:13:15.701818 (kubelet)[2479]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 23:13:15.703857 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:13:15.704312 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 23:13:15.710525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:13:15.711452 systemd[1]: kubelet.service: Consumed 8.578s CPU time, 22.0M memory peak, 0B memory swap peak. Apr 13 23:13:15.730406 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:13:25.361540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 23:13:25.513929 (kubelet)[2491]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 23:13:29.885257 kubelet[2491]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 23:13:29.885257 kubelet[2491]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 23:13:29.885257 kubelet[2491]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 23:13:29.909887 kubelet[2491]: I0413 23:13:29.885761 2491 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 23:13:38.294934 kubelet[2491]: I0413 23:13:38.291797 2491 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 23:13:38.294934 kubelet[2491]: I0413 23:13:38.292936 2491 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 23:13:38.294934 kubelet[2491]: I0413 23:13:38.294241 2491 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 23:13:39.577672 kubelet[2491]: E0413 23:13:39.577240 2491 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:13:39.670046 kubelet[2491]: I0413 23:13:39.669428 2491 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 23:13:40.495853 kubelet[2491]: E0413 23:13:40.456785 2491 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 23:13:40.524434 kubelet[2491]: I0413 23:13:40.502647 2491 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 13 23:13:41.397629 kubelet[2491]: I0413 23:13:41.394264 2491 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 13 23:13:41.414515 kubelet[2491]: I0413 23:13:41.414422 2491 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 23:13:41.415468 kubelet[2491]: I0413 23:13:41.414513 2491 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 23:13:41.415745 kubelet[2491]: I0413 23:13:41.415485 2491 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 23:13:41.415745 kubelet[2491]: I0413 23:13:41.415502 2491 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 23:13:41.535540 kubelet[2491]: I0413 23:13:41.509293 2491 state_mem.go:36] "Initialized new in-memory state store" Apr 13 23:13:41.660956 kubelet[2491]: I0413 23:13:41.660560 2491 kubelet.go:480] "Attempting to sync node with API 
server" Apr 13 23:13:41.661713 kubelet[2491]: I0413 23:13:41.661188 2491 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 23:13:41.661713 kubelet[2491]: I0413 23:13:41.661428 2491 kubelet.go:386] "Adding apiserver pod source" Apr 13 23:13:41.661713 kubelet[2491]: I0413 23:13:41.661497 2491 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 23:13:41.674517 kubelet[2491]: E0413 23:13:41.670601 2491 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:13:41.680115 kubelet[2491]: I0413 23:13:41.677620 2491 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 23:13:41.680115 kubelet[2491]: E0413 23:13:41.679087 2491 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:13:41.681712 kubelet[2491]: I0413 23:13:41.681633 2491 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 23:13:41.725635 kubelet[2491]: W0413 23:13:41.721666 2491 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 13 23:13:41.866084 kubelet[2491]: E0413 23:13:41.865230 2491 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:13:42.015824 kubelet[2491]: I0413 23:13:42.002358 2491 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 23:13:42.015824 kubelet[2491]: I0413 23:13:42.013894 2491 server.go:1289] "Started kubelet" Apr 13 23:13:42.035826 kubelet[2491]: I0413 23:13:42.017162 2491 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 23:13:42.047560 kubelet[2491]: I0413 23:13:42.032807 2491 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 23:13:42.075229 kubelet[2491]: E0413 23:13:42.059811 2491 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.131:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.131:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a60d91977f41ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:13:42.007357868 +0000 UTC m=+16.427256146,LastTimestamp:2026-04-13 23:13:42.007357868 +0000 UTC m=+16.427256146,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:13:42.108669 kubelet[2491]: I0413 23:13:42.103979 2491 server.go:255] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 23:13:42.145065 kubelet[2491]: I0413 23:13:42.143776 2491 server.go:317] "Adding debug handlers to kubelet server" Apr 13 23:13:42.145065 kubelet[2491]: I0413 23:13:42.144638 2491 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 23:13:42.145065 kubelet[2491]: I0413 23:13:42.145013 2491 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 23:13:42.146678 kubelet[2491]: E0413 23:13:42.145773 2491 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 23:13:42.146678 kubelet[2491]: E0413 23:13:42.145842 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:13:42.146678 kubelet[2491]: I0413 23:13:42.145867 2491 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 23:13:42.150095 kubelet[2491]: E0413 23:13:42.146830 2491 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:13:42.153336 kubelet[2491]: I0413 23:13:42.153190 2491 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 23:13:42.154001 kubelet[2491]: I0413 23:13:42.153876 2491 reconciler.go:26] "Reconciler: start to sync state" Apr 13 23:13:42.154743 kubelet[2491]: E0413 23:13:42.154692 2491 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" 
interval="200ms" Apr 13 23:13:42.184285 kubelet[2491]: I0413 23:13:42.183683 2491 factory.go:223] Registration of the systemd container factory successfully Apr 13 23:13:42.185888 kubelet[2491]: I0413 23:13:42.184801 2491 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 23:13:42.224669 kubelet[2491]: I0413 23:13:42.220565 2491 factory.go:223] Registration of the containerd container factory successfully Apr 13 23:13:42.253171 kubelet[2491]: E0413 23:13:42.247597 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:13:42.400604 kubelet[2491]: E0413 23:13:42.388889 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:13:42.442477 kubelet[2491]: E0413 23:13:42.401954 2491 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="400ms" Apr 13 23:13:42.494846 kubelet[2491]: E0413 23:13:42.493955 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:13:42.611195 kubelet[2491]: I0413 23:13:42.604403 2491 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 23:13:42.611195 kubelet[2491]: I0413 23:13:42.604563 2491 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 23:13:42.611195 kubelet[2491]: E0413 23:13:42.604631 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:13:42.611195 kubelet[2491]: I0413 23:13:42.609595 2491 state_mem.go:36] "Initialized new in-memory state store" Apr 13 23:13:42.676742 kubelet[2491]: I0413 23:13:42.647779 2491 
policy_none.go:49] "None policy: Start" Apr 13 23:13:42.676742 kubelet[2491]: I0413 23:13:42.648498 2491 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 23:13:42.676742 kubelet[2491]: I0413 23:13:42.653952 2491 state_mem.go:35] "Initializing new in-memory state store" Apr 13 23:13:42.743626 kubelet[2491]: E0413 23:13:42.705784 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:13:42.812765 kubelet[2491]: E0413 23:13:42.807840 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:13:42.847702 kubelet[2491]: E0413 23:13:42.845302 2491 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:13:42.847702 kubelet[2491]: E0413 23:13:42.845322 2491 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="800ms" Apr 13 23:13:42.869083 kubelet[2491]: I0413 23:13:42.868299 2491 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 13 23:13:42.890487 kubelet[2491]: I0413 23:13:42.883845 2491 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 13 23:13:42.890487 kubelet[2491]: I0413 23:13:42.884687 2491 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 23:13:42.890487 kubelet[2491]: I0413 23:13:42.884881 2491 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 13 23:13:42.890487 kubelet[2491]: I0413 23:13:42.884896 2491 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 23:13:42.913270 kubelet[2491]: E0413 23:13:42.910793 2491 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 23:13:42.918732 kubelet[2491]: E0413 23:13:42.913400 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:13:42.966001 kubelet[2491]: E0413 23:13:42.942749 2491 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:13:43.033385 kubelet[2491]: E0413 23:13:43.032825 2491 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 13 23:13:43.041077 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Apr 13 23:13:43.048678 kubelet[2491]: E0413 23:13:43.032782 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:13:43.172004 kubelet[2491]: E0413 23:13:43.162428 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:13:43.174505 kubelet[2491]: E0413 23:13:43.171605 2491 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:13:43.307142 kubelet[2491]: E0413 23:13:43.289621 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:13:43.318469 kubelet[2491]: E0413 23:13:43.290155 2491 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 13 23:13:43.320666 kubelet[2491]: E0413 23:13:43.316808 2491 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:13:43.430986 kubelet[2491]: E0413 23:13:43.426844 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:13:43.544627 kubelet[2491]: E0413 23:13:43.544107 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:13:43.597160 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 13 23:13:43.692861 kubelet[2491]: E0413 23:13:43.692297 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:13:43.721688 kubelet[2491]: E0413 23:13:43.701104 2491 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="1.6s" Apr 13 23:13:43.733300 kubelet[2491]: E0413 23:13:43.721489 2491 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 13 23:13:43.805496 kubelet[2491]: E0413 23:13:43.804802 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:13:43.923896 kubelet[2491]: E0413 23:13:43.922856 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:13:44.035584 kubelet[2491]: E0413 23:13:44.035004 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:13:44.046913 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 13 23:13:44.070429 kubelet[2491]: E0413 23:13:44.070097 2491 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:13:44.090763 kubelet[2491]: E0413 23:13:44.090596 2491 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 23:13:44.117271 kubelet[2491]: I0413 23:13:44.116447 2491 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 23:13:44.117271 kubelet[2491]: I0413 23:13:44.117228 2491 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 23:13:44.119991 kubelet[2491]: I0413 23:13:44.119816 2491 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 23:13:44.142705 kubelet[2491]: E0413 23:13:44.142335 2491 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 13 23:13:44.142705 kubelet[2491]: E0413 23:13:44.142745 2491 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:13:44.303529 kubelet[2491]: I0413 23:13:44.288481 2491 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:13:44.313422 kubelet[2491]: E0413 23:13:44.313297 2491 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" Apr 13 23:13:44.567562 kubelet[2491]: I0413 23:13:44.565802 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64770316b05f1e39d4310400b358c3ab-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"64770316b05f1e39d4310400b358c3ab\") " pod="kube-system/kube-apiserver-localhost" Apr 13 23:13:44.567562 kubelet[2491]: I0413 23:13:44.566024 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64770316b05f1e39d4310400b358c3ab-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"64770316b05f1e39d4310400b358c3ab\") " pod="kube-system/kube-apiserver-localhost" Apr 13 23:13:44.569096 kubelet[2491]: I0413 23:13:44.568014 2491 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:13:44.569096 kubelet[2491]: E0413 23:13:44.568802 2491 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" Apr 13 23:13:44.671366 kubelet[2491]: I0413 23:13:44.670342 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/64770316b05f1e39d4310400b358c3ab-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"64770316b05f1e39d4310400b358c3ab\") " pod="kube-system/kube-apiserver-localhost" Apr 13 23:13:44.671366 kubelet[2491]: I0413 23:13:44.670555 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:13:44.671366 kubelet[2491]: I0413 23:13:44.670581 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:13:44.671366 kubelet[2491]: I0413 23:13:44.670607 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:13:44.671366 kubelet[2491]: I0413 23:13:44.670629 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:13:44.696939 kubelet[2491]: I0413 23:13:44.670652 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:13:44.735678 systemd[1]: Created slice kubepods-burstable-pod64770316b05f1e39d4310400b358c3ab.slice - libcontainer container kubepods-burstable-pod64770316b05f1e39d4310400b358c3ab.slice. Apr 13 23:13:44.787630 kubelet[2491]: I0413 23:13:44.784761 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost" Apr 13 23:13:45.021435 kubelet[2491]: E0413 23:13:45.021235 2491 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:13:45.023035 kubelet[2491]: E0413 23:13:45.022562 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:13:45.042839 containerd[1482]: time="2026-04-13T23:13:45.031073446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:64770316b05f1e39d4310400b358c3ab,Namespace:kube-system,Attempt:0,}" Apr 13 23:13:45.061375 kubelet[2491]: I0413 23:13:45.046314 2491 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:13:45.061375 kubelet[2491]: E0413 23:13:45.047766 2491 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" Apr 13 23:13:45.314369 kubelet[2491]: E0413 23:13:45.282410 
2491 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:13:45.314182 systemd[1]: Created slice kubepods-burstable-podebf8e820819e4b80bc03d078b9ba80f5.slice - libcontainer container kubepods-burstable-podebf8e820819e4b80bc03d078b9ba80f5.slice. Apr 13 23:13:45.324205 kubelet[2491]: E0413 23:13:45.316595 2491 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="3.2s" Apr 13 23:13:45.409813 kubelet[2491]: E0413 23:13:45.409521 2491 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:13:45.411721 kubelet[2491]: E0413 23:13:45.411647 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:13:45.438622 containerd[1482]: time="2026-04-13T23:13:45.423584827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,}" Apr 13 23:13:45.742615 systemd[1]: Created slice kubepods-burstable-pod39798d73a6894e44ae801eb773bf9a39.slice - libcontainer container kubepods-burstable-pod39798d73a6894e44ae801eb773bf9a39.slice. 
Apr 13 23:13:45.853210 kubelet[2491]: E0413 23:13:45.851039 2491 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:13:45.887546 kubelet[2491]: E0413 23:13:45.887208 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:13:45.925280 kubelet[2491]: E0413 23:13:45.924556 2491 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:13:45.929320 kubelet[2491]: I0413 23:13:45.927299 2491 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:13:45.929320 kubelet[2491]: E0413 23:13:45.929491 2491 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" Apr 13 23:13:45.930074 containerd[1482]: time="2026-04-13T23:13:45.924165826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,}" Apr 13 23:13:46.077039 kubelet[2491]: E0413 23:13:46.075703 2491 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:13:46.344630 kubelet[2491]: E0413 23:13:46.343406 2491 reflector.go:200] "Failed to 
watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:13:46.466270 kubelet[2491]: E0413 23:13:46.465778 2491 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:13:47.801491 kubelet[2491]: I0413 23:13:47.797853 2491 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:13:47.801491 kubelet[2491]: E0413 23:13:47.801207 2491 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" Apr 13 23:13:48.604346 kubelet[2491]: E0413 23:13:48.602266 2491 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="6.4s" Apr 13 23:13:49.080373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount92102501.mount: Deactivated successfully. 
Apr 13 23:13:49.404299 containerd[1482]: time="2026-04-13T23:13:49.381936961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 23:13:49.404299 containerd[1482]: time="2026-04-13T23:13:49.404081687Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 23:13:49.430886 containerd[1482]: time="2026-04-13T23:13:49.407799560Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 13 23:13:49.436828 containerd[1482]: time="2026-04-13T23:13:49.432669431Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 23:13:49.436828 containerd[1482]: time="2026-04-13T23:13:49.432797825Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 23:13:49.462301 containerd[1482]: time="2026-04-13T23:13:49.457378774Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 23:13:49.477985 containerd[1482]: time="2026-04-13T23:13:49.476742338Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 23:13:49.835619 containerd[1482]: time="2026-04-13T23:13:49.833364959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 23:13:49.835619 
containerd[1482]: time="2026-04-13T23:13:49.836405421Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 4.788908021s" Apr 13 23:13:49.840713 containerd[1482]: time="2026-04-13T23:13:49.840625061Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.913959371s" Apr 13 23:13:49.842019 containerd[1482]: time="2026-04-13T23:13:49.841957815Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 4.418015765s" Apr 13 23:13:51.189093 kubelet[2491]: E0413 23:13:51.188460 2491 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:13:51.287956 kubelet[2491]: E0413 23:13:51.283172 2491 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.131:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.131:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a60d91977f41ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:13:42.007357868 +0000 UTC m=+16.427256146,LastTimestamp:2026-04-13 23:13:42.007357868 +0000 UTC m=+16.427256146,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:13:51.363539 kubelet[2491]: I0413 23:13:51.362773 2491 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:13:51.380177 kubelet[2491]: E0413 23:13:51.379182 2491 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" Apr 13 23:13:51.380465 containerd[1482]: time="2026-04-13T23:13:51.377734849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:13:51.380465 containerd[1482]: time="2026-04-13T23:13:51.378553726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:13:51.380465 containerd[1482]: time="2026-04-13T23:13:51.378573037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:13:51.380465 containerd[1482]: time="2026-04-13T23:13:51.378760796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:13:51.380465 containerd[1482]: time="2026-04-13T23:13:51.375433090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:13:51.380465 containerd[1482]: time="2026-04-13T23:13:51.376430340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:13:51.380465 containerd[1482]: time="2026-04-13T23:13:51.376449446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:13:51.380465 containerd[1482]: time="2026-04-13T23:13:51.376703188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:13:51.519258 containerd[1482]: time="2026-04-13T23:13:51.505217362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:13:51.519258 containerd[1482]: time="2026-04-13T23:13:51.514411916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:13:51.519258 containerd[1482]: time="2026-04-13T23:13:51.514435776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:13:51.519258 containerd[1482]: time="2026-04-13T23:13:51.514620318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:13:51.903360 kubelet[2491]: E0413 23:13:51.902003 2491 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:13:52.019684 kubelet[2491]: E0413 23:13:52.019359 2491 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:13:52.217245 kubelet[2491]: E0413 23:13:52.214399 2491 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:13:52.540368 systemd[1]: Started cri-containerd-7d659bd65c52533341be84376c301cf22be03c5f5e0ba46b63f0aa3976844087.scope - libcontainer container 7d659bd65c52533341be84376c301cf22be03c5f5e0ba46b63f0aa3976844087. Apr 13 23:13:52.619892 systemd[1]: Started cri-containerd-c8ce4f7d3f56fbb2b5c672922e1dc5b6ef9c1fadf352080f5d60cbf13532d802.scope - libcontainer container c8ce4f7d3f56fbb2b5c672922e1dc5b6ef9c1fadf352080f5d60cbf13532d802. Apr 13 23:13:52.818649 systemd[1]: Started cri-containerd-e35b2d58488c31d5de4720aa5cd27ec58dd5f051fc623c76d38bf28816044d31.scope - libcontainer container e35b2d58488c31d5de4720aa5cd27ec58dd5f051fc623c76d38bf28816044d31. 
Apr 13 23:13:53.938784 containerd[1482]: time="2026-04-13T23:13:53.937518386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:64770316b05f1e39d4310400b358c3ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8ce4f7d3f56fbb2b5c672922e1dc5b6ef9c1fadf352080f5d60cbf13532d802\"" Apr 13 23:13:53.958163 containerd[1482]: time="2026-04-13T23:13:53.957801451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d659bd65c52533341be84376c301cf22be03c5f5e0ba46b63f0aa3976844087\"" Apr 13 23:13:53.987392 kubelet[2491]: E0413 23:13:53.986796 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:13:54.041567 kubelet[2491]: E0413 23:13:54.040756 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:13:54.092730 containerd[1482]: time="2026-04-13T23:13:54.088830038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,} returns sandbox id \"e35b2d58488c31d5de4720aa5cd27ec58dd5f051fc623c76d38bf28816044d31\"" Apr 13 23:13:54.167238 kubelet[2491]: E0413 23:13:54.159990 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:13:54.167238 kubelet[2491]: E0413 23:13:54.166333 2491 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:13:54.189854 kubelet[2491]: E0413 23:13:54.188470 2491 certificate_manager.go:596] "Failed while 
requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:13:54.215042 containerd[1482]: time="2026-04-13T23:13:54.213012744Z" level=info msg="CreateContainer within sandbox \"c8ce4f7d3f56fbb2b5c672922e1dc5b6ef9c1fadf352080f5d60cbf13532d802\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 23:13:54.231364 containerd[1482]: time="2026-04-13T23:13:54.227042921Z" level=info msg="CreateContainer within sandbox \"7d659bd65c52533341be84376c301cf22be03c5f5e0ba46b63f0aa3976844087\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 23:13:54.330178 containerd[1482]: time="2026-04-13T23:13:54.322550409Z" level=info msg="CreateContainer within sandbox \"e35b2d58488c31d5de4720aa5cd27ec58dd5f051fc623c76d38bf28816044d31\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 23:13:54.571322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3548468163.mount: Deactivated successfully. 
Apr 13 23:13:54.709253 containerd[1482]: time="2026-04-13T23:13:54.706968348Z" level=info msg="CreateContainer within sandbox \"c8ce4f7d3f56fbb2b5c672922e1dc5b6ef9c1fadf352080f5d60cbf13532d802\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e327ddf1f53b4fffe554bf932d46621cfe253dfd866061662ee454f24860d885\"" Apr 13 23:13:54.772545 containerd[1482]: time="2026-04-13T23:13:54.772329158Z" level=info msg="StartContainer for \"e327ddf1f53b4fffe554bf932d46621cfe253dfd866061662ee454f24860d885\"" Apr 13 23:13:54.905649 containerd[1482]: time="2026-04-13T23:13:54.842434493Z" level=info msg="CreateContainer within sandbox \"7d659bd65c52533341be84376c301cf22be03c5f5e0ba46b63f0aa3976844087\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b9973f6a264a388509325687ca9264692ebbda66c99510cef355e66a9bcfaf63\"" Apr 13 23:13:54.950929 containerd[1482]: time="2026-04-13T23:13:54.950596428Z" level=info msg="StartContainer for \"b9973f6a264a388509325687ca9264692ebbda66c99510cef355e66a9bcfaf63\"" Apr 13 23:13:55.000789 containerd[1482]: time="2026-04-13T23:13:54.999806489Z" level=info msg="CreateContainer within sandbox \"e35b2d58488c31d5de4720aa5cd27ec58dd5f051fc623c76d38bf28816044d31\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3f8141515a8028801e8fd7dfb9f2ac96d368c798478f8518fb5be6787fb800ff\"" Apr 13 23:13:55.080523 containerd[1482]: time="2026-04-13T23:13:55.076728705Z" level=info msg="StartContainer for \"3f8141515a8028801e8fd7dfb9f2ac96d368c798478f8518fb5be6787fb800ff\"" Apr 13 23:13:55.092613 kubelet[2491]: E0413 23:13:55.090928 2491 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="7s" Apr 13 23:13:56.136410 systemd[1]: Started 
cri-containerd-3f8141515a8028801e8fd7dfb9f2ac96d368c798478f8518fb5be6787fb800ff.scope - libcontainer container 3f8141515a8028801e8fd7dfb9f2ac96d368c798478f8518fb5be6787fb800ff. Apr 13 23:13:56.364158 systemd[1]: Started cri-containerd-b9973f6a264a388509325687ca9264692ebbda66c99510cef355e66a9bcfaf63.scope - libcontainer container b9973f6a264a388509325687ca9264692ebbda66c99510cef355e66a9bcfaf63. Apr 13 23:13:56.542725 systemd[1]: Started cri-containerd-e327ddf1f53b4fffe554bf932d46621cfe253dfd866061662ee454f24860d885.scope - libcontainer container e327ddf1f53b4fffe554bf932d46621cfe253dfd866061662ee454f24860d885. Apr 13 23:13:57.636222 containerd[1482]: time="2026-04-13T23:13:57.634372903Z" level=info msg="StartContainer for \"3f8141515a8028801e8fd7dfb9f2ac96d368c798478f8518fb5be6787fb800ff\" returns successfully" Apr 13 23:13:57.672540 containerd[1482]: time="2026-04-13T23:13:57.634484819Z" level=info msg="StartContainer for \"b9973f6a264a388509325687ca9264692ebbda66c99510cef355e66a9bcfaf63\" returns successfully" Apr 13 23:13:58.187536 kubelet[2491]: I0413 23:13:58.185303 2491 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:13:58.187536 kubelet[2491]: E0413 23:13:58.188295 2491 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" Apr 13 23:13:58.327057 containerd[1482]: time="2026-04-13T23:13:58.186869719Z" level=info msg="StartContainer for \"e327ddf1f53b4fffe554bf932d46621cfe253dfd866061662ee454f24860d885\" returns successfully" Apr 13 23:13:58.696895 kubelet[2491]: E0413 23:13:58.685723 2491 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:13:58.702983 kubelet[2491]: E0413 23:13:58.698113 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:13:58.816256 kubelet[2491]: E0413 23:13:58.815644 2491 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:13:58.816256 kubelet[2491]: E0413 23:13:58.816294 2491 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:13:58.868731 kubelet[2491]: E0413 23:13:58.816986 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:14:00.854491 kubelet[2491]: E0413 23:14:00.848292 2491 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:14:01.024387 kubelet[2491]: E0413 23:14:01.020411 2491 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:14:01.024387 kubelet[2491]: E0413 23:14:01.022779 2491 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:14:01.050882 kubelet[2491]: E0413 23:14:01.050721 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:14:01.052547 kubelet[2491]: E0413 23:14:01.051308 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:14:01.052638 kubelet[2491]: E0413 23:14:01.051505 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:14:01.341248 kubelet[2491]: E0413 23:14:01.340759 2491 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:14:01.343270 kubelet[2491]: E0413 23:14:01.342872 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:14:01.343270 kubelet[2491]: E0413 23:14:01.341392 2491 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:14:01.343461 kubelet[2491]: E0413 23:14:01.343314 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:14:02.546032 kubelet[2491]: E0413 23:14:02.531535 2491 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:14:02.582105 kubelet[2491]: E0413 23:14:02.564817 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:14:03.045050 kubelet[2491]: E0413 23:14:03.024860 2491 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:14:03.115735 kubelet[2491]: E0413 23:14:03.109672 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:14:04.246764 kubelet[2491]: E0413 23:14:04.245306 2491 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:14:05.646049 kubelet[2491]: I0413 23:14:05.641447 2491 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:14:08.956865 kubelet[2491]: E0413 23:14:08.951671 2491 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:14:08.975959 kubelet[2491]: E0413 23:14:08.970413 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:14:10.196628 kubelet[2491]: E0413 23:14:10.194752 2491 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:14:11.338370 kubelet[2491]: E0413 23:14:11.316665 2491 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.131:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a60d91977f41ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:13:42.007357868 +0000 UTC m=+16.427256146,LastTimestamp:2026-04-13 23:13:42.007357868 +0000 UTC 
m=+16.427256146,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:14:12.297494 kubelet[2491]: E0413 23:14:12.284981 2491 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 13 23:14:12.470224 kubelet[2491]: E0413 23:14:12.461736 2491 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:14:12.945296 kubelet[2491]: E0413 23:14:12.944567 2491 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:14:12.984105 kubelet[2491]: E0413 23:14:12.976546 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:14:14.257663 kubelet[2491]: E0413 23:14:14.256683 2491 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:14:14.290811 kubelet[2491]: E0413 23:14:14.290413 2491 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:14:15.721766 kubelet[2491]: E0413 23:14:15.719843 2491 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 13 23:14:21.342479 kubelet[2491]: E0413 23:14:21.329088 2491 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:14:21.356878 kubelet[2491]: E0413 23:14:21.353987 2491 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:14:23.046441 kubelet[2491]: I0413 23:14:23.044331 2491 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:14:23.334491 kubelet[2491]: E0413 23:14:23.318492 2491 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:14:23.334491 kubelet[2491]: E0413 23:14:23.328859 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:14:23.992802 kubelet[2491]: E0413 23:14:23.991944 2491 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 13 23:14:24.100726 kubelet[2491]: I0413 23:14:24.097061 2491 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 13 23:14:24.100726 kubelet[2491]: E0413 23:14:24.101405 2491 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 13 23:14:24.116721 
kubelet[2491]: E0413 23:14:24.108015 2491 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a60d91977f41ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:13:42.007357868 +0000 UTC m=+16.427256146,LastTimestamp:2026-04-13 23:13:42.007357868 +0000 UTC m=+16.427256146,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:14:24.362010 kubelet[2491]: E0413 23:14:24.360873 2491 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:14:24.363100 kubelet[2491]: E0413 23:14:24.362993 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:24.479478 kubelet[2491]: E0413 23:14:24.476287 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:24.608416 kubelet[2491]: E0413 23:14:24.588734 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:24.723777 kubelet[2491]: E0413 23:14:24.719890 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:24.831041 kubelet[2491]: E0413 23:14:24.830081 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:24.934863 kubelet[2491]: E0413 23:14:24.934048 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" 
Apr 13 23:14:25.073902 kubelet[2491]: E0413 23:14:25.042354 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:25.225735 kubelet[2491]: E0413 23:14:25.189761 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:25.295689 kubelet[2491]: E0413 23:14:25.294921 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:25.426023 kubelet[2491]: E0413 23:14:25.397556 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:25.552434 kubelet[2491]: E0413 23:14:25.537544 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:25.722559 kubelet[2491]: E0413 23:14:25.708358 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:25.857296 kubelet[2491]: E0413 23:14:25.832847 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:26.006436 kubelet[2491]: E0413 23:14:25.976408 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:26.097373 kubelet[2491]: E0413 23:14:26.095911 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:26.226833 kubelet[2491]: E0413 23:14:26.221250 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:26.396038 kubelet[2491]: E0413 23:14:26.332906 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:26.526808 kubelet[2491]: E0413 23:14:26.521519 2491 kubelet_node_status.go:466] 
"Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:26.695096 kubelet[2491]: E0413 23:14:26.659571 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:26.798428 kubelet[2491]: E0413 23:14:26.797469 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:26.905582 kubelet[2491]: E0413 23:14:26.904860 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:27.147858 kubelet[2491]: E0413 23:14:27.060467 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:27.219029 kubelet[2491]: E0413 23:14:27.203210 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:27.350510 kubelet[2491]: E0413 23:14:27.344760 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:27.473423 kubelet[2491]: E0413 23:14:27.472203 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:27.581920 kubelet[2491]: E0413 23:14:27.580901 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:27.714910 kubelet[2491]: E0413 23:14:27.700976 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:27.880082 kubelet[2491]: E0413 23:14:27.864533 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:27.991728 kubelet[2491]: E0413 23:14:27.986993 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 
23:14:28.107556 kubelet[2491]: E0413 23:14:28.097043 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:28.241643 kubelet[2491]: E0413 23:14:28.216362 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:28.422980 kubelet[2491]: E0413 23:14:28.422033 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:28.546277 kubelet[2491]: E0413 23:14:28.526694 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:28.726479 kubelet[2491]: E0413 23:14:28.699184 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:28.859818 kubelet[2491]: E0413 23:14:28.850903 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:28.973066 kubelet[2491]: E0413 23:14:28.969499 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:29.249779 kubelet[2491]: E0413 23:14:29.179035 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:29.430199 kubelet[2491]: E0413 23:14:29.411514 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:29.805313 kubelet[2491]: E0413 23:14:29.561755 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:29.932956 kubelet[2491]: E0413 23:14:29.932575 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 13 23:14:30.277549 kubelet[2491]: I0413 23:14:30.271098 2491 kubelet.go:3309] "Creating a mirror 
pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 13 23:14:31.694023 kubelet[2491]: I0413 23:14:31.683284 2491 apiserver.go:52] "Watching apiserver" Apr 13 23:14:32.421920 kubelet[2491]: I0413 23:14:32.417714 2491 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 23:14:33.558576 kubelet[2491]: I0413 23:14:33.550800 2491 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 13 23:14:34.225406 kubelet[2491]: I0413 23:14:34.220609 2491 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 13 23:14:34.839700 kubelet[2491]: E0413 23:14:34.839246 2491 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.729s" Apr 13 23:14:34.982504 kubelet[2491]: E0413 23:14:34.982208 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:14:34.982504 kubelet[2491]: E0413 23:14:34.982603 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:14:35.036107 kubelet[2491]: E0413 23:14:35.036030 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:14:37.234584 kubelet[2491]: E0413 23:14:37.222552 2491 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.215s" Apr 13 23:14:48.187947 kubelet[2491]: E0413 23:14:48.181875 2491 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.95s" Apr 13 23:14:48.866959 kubelet[2491]: I0413 23:14:48.862916 2491 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=15.862838628 podStartE2EDuration="15.862838628s" podCreationTimestamp="2026-04-13 23:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 23:14:48.862390788 +0000 UTC m=+83.282288990" watchObservedRunningTime="2026-04-13 23:14:48.862838628 +0000 UTC m=+83.282736834" Apr 13 23:14:50.774634 kubelet[2491]: I0413 23:14:50.769656 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=16.769540351 podStartE2EDuration="16.769540351s" podCreationTimestamp="2026-04-13 23:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 23:14:49.734655078 +0000 UTC m=+84.154553273" watchObservedRunningTime="2026-04-13 23:14:50.769540351 +0000 UTC m=+85.189438577" Apr 13 23:15:00.305334 kubelet[2491]: E0413 23:15:00.285964 2491 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.381s" Apr 13 23:15:04.204373 kubelet[2491]: E0413 23:15:04.120075 2491 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.221s" Apr 13 23:15:09.157021 systemd[1]: cri-containerd-b9973f6a264a388509325687ca9264692ebbda66c99510cef355e66a9bcfaf63.scope: Deactivated successfully. Apr 13 23:15:09.161082 systemd[1]: cri-containerd-b9973f6a264a388509325687ca9264692ebbda66c99510cef355e66a9bcfaf63.scope: Consumed 4.579s CPU time. Apr 13 23:15:11.411274 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9973f6a264a388509325687ca9264692ebbda66c99510cef355e66a9bcfaf63-rootfs.mount: Deactivated successfully. 
Apr 13 23:15:11.468688 containerd[1482]: time="2026-04-13T23:15:11.460842236Z" level=info msg="shim disconnected" id=b9973f6a264a388509325687ca9264692ebbda66c99510cef355e66a9bcfaf63 namespace=k8s.io Apr 13 23:15:11.531969 containerd[1482]: time="2026-04-13T23:15:11.503308213Z" level=warning msg="cleaning up after shim disconnected" id=b9973f6a264a388509325687ca9264692ebbda66c99510cef355e66a9bcfaf63 namespace=k8s.io Apr 13 23:15:11.531969 containerd[1482]: time="2026-04-13T23:15:11.527306246Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 23:15:12.706457 containerd[1482]: time="2026-04-13T23:15:12.696362492Z" level=warning msg="cleanup warnings time=\"2026-04-13T23:15:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 13 23:15:14.586779 kubelet[2491]: I0413 23:15:14.581414 2491 scope.go:117] "RemoveContainer" containerID="b9973f6a264a388509325687ca9264692ebbda66c99510cef355e66a9bcfaf63" Apr 13 23:15:14.586779 kubelet[2491]: E0413 23:15:14.586084 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:15:14.904358 kubelet[2491]: I0413 23:15:14.902484 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=42.892879944 podStartE2EDuration="42.892879944s" podCreationTimestamp="2026-04-13 23:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 23:14:50.767687473 +0000 UTC m=+85.187585681" watchObservedRunningTime="2026-04-13 23:15:14.892879944 +0000 UTC m=+109.312778158" Apr 13 23:15:15.037937 containerd[1482]: time="2026-04-13T23:15:15.036860216Z" level=info msg="CreateContainer within sandbox 
\"7d659bd65c52533341be84376c301cf22be03c5f5e0ba46b63f0aa3976844087\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 13 23:15:16.479959 containerd[1482]: time="2026-04-13T23:15:16.470985725Z" level=info msg="CreateContainer within sandbox \"7d659bd65c52533341be84376c301cf22be03c5f5e0ba46b63f0aa3976844087\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03\"" Apr 13 23:15:16.987934 containerd[1482]: time="2026-04-13T23:15:16.987515357Z" level=info msg="StartContainer for \"c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03\"" Apr 13 23:15:19.028453 systemd[1]: run-containerd-runc-k8s.io-c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03-runc.tmphJb.mount: Deactivated successfully. Apr 13 23:15:19.409432 systemd[1]: Started cri-containerd-c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03.scope - libcontainer container c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03. Apr 13 23:15:20.838186 containerd[1482]: time="2026-04-13T23:15:20.822870473Z" level=info msg="StartContainer for \"c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03\" returns successfully" Apr 13 23:15:21.860470 kubelet[2491]: E0413 23:15:21.859894 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:15:23.131661 kubelet[2491]: E0413 23:15:23.122970 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:15:23.726047 systemd[1]: Reloading requested from client PID 2869 ('systemctl') (unit session-7.scope)... Apr 13 23:15:23.726102 systemd[1]: Reloading... 
Apr 13 23:15:24.241165 kubelet[2491]: E0413 23:15:24.235717 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:15:24.426659 kubelet[2491]: E0413 23:15:24.421379 2491 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.379s" Apr 13 23:15:24.864531 kubelet[2491]: E0413 23:15:24.857411 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:15:24.864531 kubelet[2491]: E0413 23:15:24.857655 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:15:27.487729 zram_generator::config[2908]: No configuration found. Apr 13 23:15:30.070853 kubelet[2491]: E0413 23:15:30.059826 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:15:33.099685 kubelet[2491]: E0413 23:15:33.092781 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:15:35.560540 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 23:15:40.236099 systemd[1]: Reloading finished in 16477 ms. 
Apr 13 23:15:42.549340 kubelet[2491]: E0413 23:15:42.547784 2491 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.626s" Apr 13 23:15:42.785746 kubelet[2491]: E0413 23:15:42.782044 2491 kubelet_node_status.go:460] "Node not becoming ready in time after startup" Apr 13 23:15:43.156813 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:15:43.554707 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 23:15:43.657844 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:15:43.688799 systemd[1]: kubelet.service: Consumed 1min 16.113s CPU time, 145.9M memory peak, 0B memory swap peak. Apr 13 23:15:44.306305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:15:53.307777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:15:53.405306 (kubelet)[2954]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 23:15:56.263804 kubelet[2954]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 23:15:56.263804 kubelet[2954]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 23:15:56.263804 kubelet[2954]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 13 23:15:56.322271 kubelet[2954]: I0413 23:15:56.260709 2954 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 23:15:56.848626 kubelet[2954]: I0413 23:15:56.845229 2954 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 23:15:56.848626 kubelet[2954]: I0413 23:15:56.846718 2954 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 23:15:56.874493 kubelet[2954]: I0413 23:15:56.849109 2954 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 23:15:57.081569 kubelet[2954]: I0413 23:15:57.077649 2954 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 23:15:57.516048 kubelet[2954]: I0413 23:15:57.450715 2954 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 23:15:58.561154 kubelet[2954]: E0413 23:15:58.545044 2954 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 23:15:58.630245 kubelet[2954]: I0413 23:15:58.565780 2954 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 13 23:15:59.241603 kubelet[2954]: I0413 23:15:59.240703 2954 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 13 23:15:59.243028 kubelet[2954]: I0413 23:15:59.242746 2954 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 23:15:59.243236 kubelet[2954]: I0413 23:15:59.242801 2954 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 23:15:59.243236 kubelet[2954]: I0413 23:15:59.243225 2954 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 23:15:59.243236 
kubelet[2954]: I0413 23:15:59.243238 2954 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 23:15:59.243620 kubelet[2954]: I0413 23:15:59.243585 2954 state_mem.go:36] "Initialized new in-memory state store" Apr 13 23:15:59.262987 kubelet[2954]: I0413 23:15:59.261187 2954 kubelet.go:480] "Attempting to sync node with API server" Apr 13 23:15:59.262987 kubelet[2954]: I0413 23:15:59.261931 2954 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 23:15:59.262987 kubelet[2954]: I0413 23:15:59.262353 2954 kubelet.go:386] "Adding apiserver pod source" Apr 13 23:15:59.262987 kubelet[2954]: I0413 23:15:59.262529 2954 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 23:15:59.433968 kubelet[2954]: I0413 23:15:59.429884 2954 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 23:15:59.510179 kubelet[2954]: I0413 23:15:59.507955 2954 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 23:15:59.975486 kubelet[2954]: I0413 23:15:59.971947 2954 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 23:15:59.975486 kubelet[2954]: I0413 23:15:59.974022 2954 server.go:1289] "Started kubelet" Apr 13 23:16:00.328710 kubelet[2954]: I0413 23:15:59.986889 2954 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 23:16:00.328710 kubelet[2954]: I0413 23:16:00.002852 2954 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 23:16:00.328710 kubelet[2954]: I0413 23:16:00.087598 2954 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 23:16:00.328710 kubelet[2954]: I0413 23:16:00.116301 2954 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 23:16:00.328710 
kubelet[2954]: I0413 23:16:00.261745 2954 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 23:16:00.328710 kubelet[2954]: I0413 23:16:00.274620 2954 apiserver.go:52] "Watching apiserver" Apr 13 23:16:00.398012 kubelet[2954]: I0413 23:16:00.392879 2954 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 23:16:00.427908 kubelet[2954]: I0413 23:16:00.426720 2954 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 23:16:00.438631 kubelet[2954]: I0413 23:16:00.429377 2954 server.go:317] "Adding debug handlers to kubelet server" Apr 13 23:16:00.493333 kubelet[2954]: I0413 23:16:00.492957 2954 reconciler.go:26] "Reconciler: start to sync state" Apr 13 23:16:00.514534 kubelet[2954]: I0413 23:16:00.464034 2954 factory.go:223] Registration of the systemd container factory successfully Apr 13 23:16:00.514534 kubelet[2954]: I0413 23:16:00.507543 2954 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 23:16:00.570054 kubelet[2954]: E0413 23:16:00.569699 2954 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 23:16:00.598941 kubelet[2954]: I0413 23:16:00.579014 2954 factory.go:223] Registration of the containerd container factory successfully Apr 13 23:16:01.157664 kubelet[2954]: I0413 23:16:01.100932 2954 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 13 23:16:01.386061 kubelet[2954]: I0413 23:16:01.376038 2954 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 13 23:16:01.448580 kubelet[2954]: I0413 23:16:01.434948 2954 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 23:16:01.448580 kubelet[2954]: I0413 23:16:01.437921 2954 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 23:16:01.448580 kubelet[2954]: I0413 23:16:01.437968 2954 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 23:16:01.601896 kubelet[2954]: E0413 23:16:01.584914 2954 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 23:16:01.726791 kubelet[2954]: E0413 23:16:01.710903 2954 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 23:16:01.933089 kubelet[2954]: E0413 23:16:01.930892 2954 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 13 23:16:02.368753 kubelet[2954]: E0413 23:16:02.367411 2954 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 13 23:16:03.185487 kubelet[2954]: E0413 23:16:03.183425 2954 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 13 23:16:04.906718 kubelet[2954]: E0413 23:16:04.902141 2954 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 13 23:16:07.520288 kubelet[2954]: I0413 23:16:07.519946 2954 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 23:16:07.602534 kubelet[2954]: I0413 23:16:07.554254 2954 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 23:16:07.602534 kubelet[2954]: I0413 23:16:07.586804 2954 state_mem.go:36] "Initialized new in-memory state store" Apr 13 
23:16:07.615470 kubelet[2954]: I0413 23:16:07.615289 2954 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 13 23:16:07.615689 kubelet[2954]: I0413 23:16:07.615507 2954 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 13 23:16:07.615689 kubelet[2954]: I0413 23:16:07.615648 2954 policy_none.go:49] "None policy: Start" Apr 13 23:16:07.615883 kubelet[2954]: I0413 23:16:07.615756 2954 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 23:16:07.615883 kubelet[2954]: I0413 23:16:07.615777 2954 state_mem.go:35] "Initializing new in-memory state store" Apr 13 23:16:07.616544 kubelet[2954]: I0413 23:16:07.616372 2954 state_mem.go:75] "Updated machine memory state" Apr 13 23:16:08.133567 kubelet[2954]: E0413 23:16:08.126668 2954 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 13 23:16:09.181161 kubelet[2954]: E0413 23:16:09.180110 2954 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 23:16:09.192220 kubelet[2954]: I0413 23:16:09.192190 2954 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 23:16:09.192660 kubelet[2954]: I0413 23:16:09.192475 2954 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 23:16:09.193295 kubelet[2954]: I0413 23:16:09.193278 2954 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 23:16:09.206827 kubelet[2954]: E0413 23:16:09.206720 2954 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 13 23:16:09.648156 kubelet[2954]: I0413 23:16:09.635162 2954 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:16:10.599612 kubelet[2954]: I0413 23:16:10.533917 2954 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 13 23:16:10.740742 kubelet[2954]: I0413 23:16:10.736978 2954 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 13 23:16:13.381780 kubelet[2954]: I0413 23:16:13.379209 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64770316b05f1e39d4310400b358c3ab-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"64770316b05f1e39d4310400b358c3ab\") " pod="kube-system/kube-apiserver-localhost" Apr 13 23:16:13.381780 kubelet[2954]: I0413 23:16:13.380663 2954 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 13 23:16:13.482619 kubelet[2954]: I0413 23:16:13.482151 2954 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 13 23:16:13.516017 kubelet[2954]: I0413 23:16:13.483922 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64770316b05f1e39d4310400b358c3ab-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"64770316b05f1e39d4310400b358c3ab\") " pod="kube-system/kube-apiserver-localhost" Apr 13 23:16:13.516017 kubelet[2954]: I0413 23:16:13.484109 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64770316b05f1e39d4310400b358c3ab-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"64770316b05f1e39d4310400b358c3ab\") " pod="kube-system/kube-apiserver-localhost" Apr 13 
23:16:13.516017 kubelet[2954]: I0413 23:16:13.515620 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost" Apr 13 23:16:13.568720 kubelet[2954]: I0413 23:16:13.516276 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:16:13.568720 kubelet[2954]: I0413 23:16:13.516639 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:16:13.568720 kubelet[2954]: I0413 23:16:13.516727 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:16:13.568720 kubelet[2954]: I0413 23:16:13.516754 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 
23:16:13.568720 kubelet[2954]: I0413 23:16:13.516857 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 13 23:16:13.898760 kubelet[2954]: I0413 23:16:13.875922 2954 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 13 23:16:14.154620 kubelet[2954]: E0413 23:16:14.153184 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:16:14.469542 kubelet[2954]: E0413 23:16:14.462714 2954 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 13 23:16:14.813625 kubelet[2954]: E0413 23:16:14.533014 2954 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 13 23:16:14.813625 kubelet[2954]: E0413 23:16:14.783309 2954 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.214s"
Apr 13 23:16:14.813625 kubelet[2954]: E0413 23:16:14.784201 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:16:14.813625 kubelet[2954]: E0413 23:16:14.784564 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:16:15.381425 kubelet[2954]: E0413 23:16:15.381030 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:16:16.559475 kubelet[2954]: I0413 23:16:16.533767 2954 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 13 23:16:16.700617 containerd[1482]: time="2026-04-13T23:16:16.544765454Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 13 23:16:16.718664 kubelet[2954]: I0413 23:16:16.648405 2954 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 13 23:16:16.817648 kubelet[2954]: E0413 23:16:16.815977 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:16:16.869903 kubelet[2954]: E0413 23:16:16.818677 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:16:16.869903 kubelet[2954]: E0413 23:16:16.818109 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:16:18.177502 kubelet[2954]: I0413 23:16:18.166047 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ccf59ff7-17ca-4368-897c-43da8a754313-kube-proxy\") pod \"kube-proxy-bf2rf\" (UID: \"ccf59ff7-17ca-4368-897c-43da8a754313\") " pod="kube-system/kube-proxy-bf2rf"
Apr 13 23:16:18.218607 kubelet[2954]: I0413 23:16:18.218432 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkpvq\" (UniqueName: \"kubernetes.io/projected/ccf59ff7-17ca-4368-897c-43da8a754313-kube-api-access-fkpvq\") pod \"kube-proxy-bf2rf\" (UID: \"ccf59ff7-17ca-4368-897c-43da8a754313\") " pod="kube-system/kube-proxy-bf2rf"
Apr 13 23:16:18.284650 kubelet[2954]: I0413 23:16:18.218736 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ccf59ff7-17ca-4368-897c-43da8a754313-xtables-lock\") pod \"kube-proxy-bf2rf\" (UID: \"ccf59ff7-17ca-4368-897c-43da8a754313\") " pod="kube-system/kube-proxy-bf2rf"
Apr 13 23:16:18.284650 kubelet[2954]: I0413 23:16:18.218765 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ccf59ff7-17ca-4368-897c-43da8a754313-lib-modules\") pod \"kube-proxy-bf2rf\" (UID: \"ccf59ff7-17ca-4368-897c-43da8a754313\") " pod="kube-system/kube-proxy-bf2rf"
Apr 13 23:16:18.284650 kubelet[2954]: E0413 23:16:18.188014 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:16:18.311307 kubelet[2954]: E0413 23:16:18.188741 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:16:19.209892 systemd[1]: Created slice kubepods-besteffort-podccf59ff7_17ca_4368_897c_43da8a754313.slice - libcontainer container kubepods-besteffort-podccf59ff7_17ca_4368_897c_43da8a754313.slice.
Apr 13 23:16:20.685903 kubelet[2954]: E0413 23:16:20.682387 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:16:21.158068 containerd[1482]: time="2026-04-13T23:16:21.139311592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bf2rf,Uid:ccf59ff7-17ca-4368-897c-43da8a754313,Namespace:kube-system,Attempt:0,}"
Apr 13 23:16:21.518086 kubelet[2954]: E0413 23:16:21.516094 2954 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.867s"
Apr 13 23:16:23.258678 containerd[1482]: time="2026-04-13T23:16:23.253208748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 23:16:23.258678 containerd[1482]: time="2026-04-13T23:16:23.254056383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 23:16:23.258678 containerd[1482]: time="2026-04-13T23:16:23.254074319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 23:16:23.258678 containerd[1482]: time="2026-04-13T23:16:23.255507221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 23:16:25.232457 kubelet[2954]: E0413 23:16:25.228565 2954 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.735s"
Apr 13 23:16:26.055379 kubelet[2954]: E0413 23:16:26.052807 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:16:26.497347 systemd[1]: Started cri-containerd-73d075100e0b987947fb1961dad3b9c4bc306cfe93ee28bd2518fb1f956167a3.scope - libcontainer container 73d075100e0b987947fb1961dad3b9c4bc306cfe93ee28bd2518fb1f956167a3.
Apr 13 23:16:44.320720 systemd[1]: cri-containerd-3f8141515a8028801e8fd7dfb9f2ac96d368c798478f8518fb5be6787fb800ff.scope: Deactivated successfully.
Apr 13 23:16:44.605516 systemd[1]: cri-containerd-3f8141515a8028801e8fd7dfb9f2ac96d368c798478f8518fb5be6787fb800ff.scope: Consumed 17.965s CPU time, 18.7M memory peak, 0B memory swap peak.
Apr 13 23:16:45.666248 systemd[1]: cri-containerd-c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03.scope: Deactivated successfully.
Apr 13 23:16:45.869486 systemd[1]: cri-containerd-c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03.scope: Consumed 19.532s CPU time, 16.0M memory peak, 0B memory swap peak.
Apr 13 23:17:09.966893 containerd[1482]: time="2026-04-13T23:17:09.961681170Z" level=error msg="ttrpc: received message on inactive stream" stream=37
Apr 13 23:17:12.294536 containerd[1482]: time="2026-04-13T23:17:11.977686000Z" level=error msg="ttrpc: received message on inactive stream" stream=31
Apr 13 23:17:13.394567 containerd[1482]: time="2026-04-13T23:17:12.304989103Z" level=error msg="failed to handle container TaskExit event container_id:\"3f8141515a8028801e8fd7dfb9f2ac96d368c798478f8518fb5be6787fb800ff\" id:\"3f8141515a8028801e8fd7dfb9f2ac96d368c798478f8518fb5be6787fb800ff\" pid:2722 exit_status:1 exited_at:{seconds:1776122216 nanos:445916339}" error="failed to stop container: context deadline exceeded: unknown"
Apr 13 23:17:13.722183 containerd[1482]: time="2026-04-13T23:17:13.181956349Z" level=error msg="failed to handle container TaskExit event container_id:\"c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03\" id:\"c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03\" pid:2848 exit_status:1 exited_at:{seconds:1776122217 nanos:758805237}" error="failed to stop container: context deadline exceeded: unknown"
Apr 13 23:17:14.922653 containerd[1482]: time="2026-04-13T23:17:14.813248555Z" level=info msg="TaskExit event container_id:\"3f8141515a8028801e8fd7dfb9f2ac96d368c798478f8518fb5be6787fb800ff\" id:\"3f8141515a8028801e8fd7dfb9f2ac96d368c798478f8518fb5be6787fb800ff\" pid:2722 exit_status:1 exited_at:{seconds:1776122216 nanos:445916339}"
Apr 13 23:17:27.190682 containerd[1482]: time="2026-04-13T23:17:27.177375447Z" level=error msg="Failed to handle backOff event container_id:\"3f8141515a8028801e8fd7dfb9f2ac96d368c798478f8518fb5be6787fb800ff\" id:\"3f8141515a8028801e8fd7dfb9f2ac96d368c798478f8518fb5be6787fb800ff\" pid:2722 exit_status:1 exited_at:{seconds:1776122216 nanos:445916339} for 3f8141515a8028801e8fd7dfb9f2ac96d368c798478f8518fb5be6787fb800ff" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 13 23:17:30.975511 containerd[1482]: time="2026-04-13T23:17:27.298989313Z" level=info msg="TaskExit event container_id:\"c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03\" id:\"c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03\" pid:2848 exit_status:1 exited_at:{seconds:1776122217 nanos:758805237}"
Apr 13 23:17:32.273538 containerd[1482]: time="2026-04-13T23:17:30.960759648Z" level=error msg="ttrpc: received message on inactive stream" stream=45
Apr 13 23:17:32.273538 containerd[1482]: time="2026-04-13T23:17:31.998414326Z" level=error msg="ttrpc: received message on inactive stream" stream=41
Apr 13 23:17:35.910146 kubelet[2954]: I0413 23:17:35.899288 2954 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
Apr 13 23:17:38.582572 containerd[1482]: time="2026-04-13T23:17:38.576889467Z" level=error msg="Failed to handle backOff event container_id:\"c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03\" id:\"c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03\" pid:2848 exit_status:1 exited_at:{seconds:1776122217 nanos:758805237} for c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 13 23:17:39.018428 containerd[1482]: time="2026-04-13T23:17:38.589633150Z" level=info msg="TaskExit event container_id:\"3f8141515a8028801e8fd7dfb9f2ac96d368c798478f8518fb5be6787fb800ff\" id:\"3f8141515a8028801e8fd7dfb9f2ac96d368c798478f8518fb5be6787fb800ff\" pid:2722 exit_status:1 exited_at:{seconds:1776122216 nanos:445916339}"
Apr 13 23:17:41.357843 containerd[1482]: time="2026-04-13T23:17:41.338972380Z" level=error msg="ttrpc: received message on inactive stream" stream=37
Apr 13 23:17:41.357843 containerd[1482]: time="2026-04-13T23:17:41.341610813Z" level=error msg="ttrpc: received message on inactive stream" stream=39
Apr 13 23:17:43.590194 kubelet[2954]: I0413 23:17:41.282841 2954 reflector.go:556] "Warning: watch ended with error" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
Apr 13 23:17:44.987851 kubelet[2954]: I0413 23:17:43.576674 2954 reflector.go:556] "Warning: watch ended with error" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
Apr 13 23:17:44.987851 kubelet[2954]: I0413 23:17:43.801494 2954 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
Apr 13 23:17:46.662547 kubelet[2954]: I0413 23:17:42.796866 2954 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
Apr 13 23:17:46.856175 kubelet[2954]: I0413 23:17:46.670299 2954 reflector.go:556] "Warning: watch ended with error" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
Apr 13 23:17:47.001290 kubelet[2954]: I0413 23:17:45.786103 2954 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
Apr 13 23:17:47.029704 kubelet[2954]: E0413 23:17:46.714792 2954 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 13 23:17:47.091177 containerd[1482]: time="2026-04-13T23:17:47.083805586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bf2rf,Uid:ccf59ff7-17ca-4368-897c-43da8a754313,Namespace:kube-system,Attempt:0,} returns sandbox id \"73d075100e0b987947fb1961dad3b9c4bc306cfe93ee28bd2518fb1f956167a3\""
Apr 13 23:17:47.510305 kubelet[2954]: E0413 23:17:47.509847 2954 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m22.027s"
Apr 13 23:17:47.713225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f8141515a8028801e8fd7dfb9f2ac96d368c798478f8518fb5be6787fb800ff-rootfs.mount: Deactivated successfully.
Apr 13 23:17:47.736388 kubelet[2954]: E0413 23:17:47.734393 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:17:47.740008 containerd[1482]: time="2026-04-13T23:17:47.739556927Z" level=info msg="shim disconnected" id=3f8141515a8028801e8fd7dfb9f2ac96d368c798478f8518fb5be6787fb800ff namespace=k8s.io
Apr 13 23:17:47.740008 containerd[1482]: time="2026-04-13T23:17:47.739604700Z" level=warning msg="cleaning up after shim disconnected" id=3f8141515a8028801e8fd7dfb9f2ac96d368c798478f8518fb5be6787fb800ff namespace=k8s.io
Apr 13 23:17:47.740008 containerd[1482]: time="2026-04-13T23:17:47.739615281Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 23:17:48.143459 kubelet[2954]: E0413 23:17:48.142590 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:17:48.165328 kubelet[2954]: E0413 23:17:48.160835 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:17:48.376938 containerd[1482]: time="2026-04-13T23:17:48.376570557Z" level=info msg="CreateContainer within sandbox \"73d075100e0b987947fb1961dad3b9c4bc306cfe93ee28bd2518fb1f956167a3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 13 23:17:48.381150 kubelet[2954]: E0413 23:17:48.381092 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:17:48.785657 containerd[1482]: time="2026-04-13T23:17:48.764862165Z" level=info msg="TaskExit event container_id:\"c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03\" id:\"c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03\" pid:2848 exit_status:1 exited_at:{seconds:1776122217 nanos:758805237}"
Apr 13 23:17:48.827402 containerd[1482]: time="2026-04-13T23:17:48.825984090Z" level=info msg="CreateContainer within sandbox \"73d075100e0b987947fb1961dad3b9c4bc306cfe93ee28bd2518fb1f956167a3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9bd8da70fd51528aadc4c75914660ea2125e0399733252d0a014f8314a086b30\""
Apr 13 23:17:48.871901 containerd[1482]: time="2026-04-13T23:17:48.867408044Z" level=info msg="StartContainer for \"9bd8da70fd51528aadc4c75914660ea2125e0399733252d0a014f8314a086b30\""
Apr 13 23:17:52.294197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03-rootfs.mount: Deactivated successfully.
Apr 13 23:17:54.543195 containerd[1482]: time="2026-04-13T23:17:54.534050407Z" level=info msg="shim disconnected" id=c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03 namespace=k8s.io
Apr 13 23:17:54.986317 containerd[1482]: time="2026-04-13T23:17:54.878616903Z" level=warning msg="cleaning up after shim disconnected" id=c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03 namespace=k8s.io
Apr 13 23:17:55.661168 containerd[1482]: time="2026-04-13T23:17:55.356401893Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 23:18:02.664444 containerd[1482]: time="2026-04-13T23:18:01.204113349Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03 -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03 delete" error="signal: killed" namespace=k8s.io
Apr 13 23:18:03.255472 containerd[1482]: time="2026-04-13T23:18:02.706348468Z" level=warning msg="failed to clean up after shim disconnected" error=": signal: killed" id=c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03 namespace=k8s.io
Apr 13 23:18:05.673822 containerd[1482]: time="2026-04-13T23:18:05.157926928Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03
Apr 13 23:18:20.695042 systemd[1]: run-containerd-runc-k8s.io-9bd8da70fd51528aadc4c75914660ea2125e0399733252d0a014f8314a086b30-runc.JBpFUE.mount: Deactivated successfully.
Apr 13 23:18:28.762255 systemd[1]: Started cri-containerd-9bd8da70fd51528aadc4c75914660ea2125e0399733252d0a014f8314a086b30.scope - libcontainer container 9bd8da70fd51528aadc4c75914660ea2125e0399733252d0a014f8314a086b30.
Apr 13 23:18:46.689335 kubelet[2954]: E0413 23:18:46.679717 2954 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="57.015s"
Apr 13 23:18:47.449724 containerd[1482]: time="2026-04-13T23:18:47.154479290Z" level=error msg="get state for 9bd8da70fd51528aadc4c75914660ea2125e0399733252d0a014f8314a086b30" error="context deadline exceeded: unknown"
Apr 13 23:18:47.698768 containerd[1482]: time="2026-04-13T23:18:47.687926432Z" level=warning msg="unknown status" status=0
Apr 13 23:18:58.197928 kubelet[2954]: I0413 23:18:58.055895 2954 scope.go:117] "RemoveContainer" containerID="b9973f6a264a388509325687ca9264692ebbda66c99510cef355e66a9bcfaf63"
Apr 13 23:18:59.917793 kubelet[2954]: E0413 23:18:58.266750 2954 kubelet_node_status.go:460] "Node not becoming ready in time after startup"
Apr 13 23:19:02.052594 containerd[1482]: time="2026-04-13T23:18:58.901970812Z" level=error msg="get state for 9bd8da70fd51528aadc4c75914660ea2125e0399733252d0a014f8314a086b30" error="context deadline exceeded: unknown"
Apr 13 23:19:02.052594 containerd[1482]: time="2026-04-13T23:18:59.741075478Z" level=warning msg="unknown status" status=0
Apr 13 23:19:06.432041 containerd[1482]: time="2026-04-13T23:19:05.723568671Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 13 23:19:08.296876 containerd[1482]: time="2026-04-13T23:19:06.561038291Z" level=error msg="ttrpc: received message on inactive stream" stream=5
Apr 13 23:19:15.285435 kubelet[2954]: E0413 23:19:15.264712 2954 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 13 23:19:18.328637 containerd[1482]: time="2026-04-13T23:19:18.327851351Z" level=info msg="StartContainer for \"9bd8da70fd51528aadc4c75914660ea2125e0399733252d0a014f8314a086b30\" returns successfully"
Apr 13 23:19:18.501797 kubelet[2954]: I0413 23:19:18.500175 2954 scope.go:117] "RemoveContainer" containerID="3f8141515a8028801e8fd7dfb9f2ac96d368c798478f8518fb5be6787fb800ff"
Apr 13 23:19:18.501797 kubelet[2954]: E0413 23:19:18.500387 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:19:18.531353 kubelet[2954]: E0413 23:19:18.529739 2954 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get rootFs stats: failed to get rootFs info: cannot find filesystem info for device \"/dev/vda9\""
Apr 13 23:19:18.629494 containerd[1482]: time="2026-04-13T23:19:18.628830548Z" level=info msg="CreateContainer within sandbox \"e35b2d58488c31d5de4720aa5cd27ec58dd5f051fc623c76d38bf28816044d31\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 13 23:19:18.711851 kubelet[2954]: E0413 23:19:18.711454 2954 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="31.992s"
Apr 13 23:19:19.005720 containerd[1482]: time="2026-04-13T23:19:19.002991554Z" level=info msg="RemoveContainer for \"b9973f6a264a388509325687ca9264692ebbda66c99510cef355e66a9bcfaf63\""
Apr 13 23:19:19.817016 containerd[1482]: time="2026-04-13T23:19:19.816710347Z" level=info msg="RemoveContainer for \"b9973f6a264a388509325687ca9264692ebbda66c99510cef355e66a9bcfaf63\" returns successfully"
Apr 13 23:19:19.982935 containerd[1482]: time="2026-04-13T23:19:19.818468127Z" level=error msg="ContainerStatus for \"b9973f6a264a388509325687ca9264692ebbda66c99510cef355e66a9bcfaf63\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9973f6a264a388509325687ca9264692ebbda66c99510cef355e66a9bcfaf63\": not found"
Apr 13 23:19:19.984034 kubelet[2954]: E0413 23:19:19.819497 2954 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9973f6a264a388509325687ca9264692ebbda66c99510cef355e66a9bcfaf63\": not found" containerID="b9973f6a264a388509325687ca9264692ebbda66c99510cef355e66a9bcfaf63"
Apr 13 23:19:20.004906 kubelet[2954]: E0413 23:19:19.985757 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:19:20.141811 kubelet[2954]: E0413 23:19:20.139888 2954 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.402s"
Apr 13 23:19:20.176186 kubelet[2954]: I0413 23:19:20.171931 2954 scope.go:117] "RemoveContainer" containerID="c0b3414fb92d5fb36481e067db82eff043d22e0bddfdad68c1187f275a6c7b03"
Apr 13 23:19:20.212366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount562068892.mount: Deactivated successfully.
Apr 13 23:19:20.377441 kubelet[2954]: E0413 23:19:20.344250 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:19:20.447432 containerd[1482]: time="2026-04-13T23:19:20.444646557Z" level=info msg="CreateContainer within sandbox \"e35b2d58488c31d5de4720aa5cd27ec58dd5f051fc623c76d38bf28816044d31\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ce5d61e61bb3ced5a6f111e14ac511d671134ebf12441f8c05445bc4b80a07fd\""
Apr 13 23:19:20.554901 containerd[1482]: time="2026-04-13T23:19:20.552973729Z" level=info msg="StartContainer for \"ce5d61e61bb3ced5a6f111e14ac511d671134ebf12441f8c05445bc4b80a07fd\""
Apr 13 23:19:20.595033 kubelet[2954]: E0413 23:19:20.559453 2954 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:19:20.769525 containerd[1482]: time="2026-04-13T23:19:20.768709048Z" level=info msg="CreateContainer within sandbox \"7d659bd65c52533341be84376c301cf22be03c5f5e0ba46b63f0aa3976844087\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}"
Apr 13 23:19:21.417586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1058335395.mount: Deactivated successfully.
Apr 13 23:19:21.831736 systemd[1]: Started cri-containerd-ce5d61e61bb3ced5a6f111e14ac511d671134ebf12441f8c05445bc4b80a07fd.scope - libcontainer container ce5d61e61bb3ced5a6f111e14ac511d671134ebf12441f8c05445bc4b80a07fd.
Apr 13 23:19:21.996319 containerd[1482]: time="2026-04-13T23:19:21.948314045Z" level=info msg="CreateContainer within sandbox \"7d659bd65c52533341be84376c301cf22be03c5f5e0ba46b63f0aa3976844087\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"e90ed1f1654e73990ff40b8569341b7ffe1b227f7c181261b0a8b24f1d18b051\""
Apr 13 23:19:22.160473 containerd[1482]: time="2026-04-13T23:19:22.148067306Z" level=info msg="StartContainer for \"e90ed1f1654e73990ff40b8569341b7ffe1b227f7c181261b0a8b24f1d18b051\""
Apr 13 23:19:22.499709 kubelet[2954]: E0413 23:19:22.498623 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:19:22.545220 kubelet[2954]: E0413 23:19:22.545165 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:19:23.318219 systemd[1]: Started cri-containerd-e90ed1f1654e73990ff40b8569341b7ffe1b227f7c181261b0a8b24f1d18b051.scope - libcontainer container e90ed1f1654e73990ff40b8569341b7ffe1b227f7c181261b0a8b24f1d18b051.
Apr 13 23:19:23.527479 kubelet[2954]: E0413 23:19:23.527086 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:19:23.752511 containerd[1482]: time="2026-04-13T23:19:23.748600806Z" level=info msg="StartContainer for \"ce5d61e61bb3ced5a6f111e14ac511d671134ebf12441f8c05445bc4b80a07fd\" returns successfully"
Apr 13 23:19:25.064484 kubelet[2954]: E0413 23:19:25.064088 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:19:25.142583 containerd[1482]: time="2026-04-13T23:19:25.126055910Z" level=info msg="StartContainer for \"e90ed1f1654e73990ff40b8569341b7ffe1b227f7c181261b0a8b24f1d18b051\" returns successfully"
Apr 13 23:19:25.587745 kubelet[2954]: E0413 23:19:25.585573 2954 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:19:25.601243 kubelet[2954]: I0413 23:19:25.599388 2954 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bf2rf" podStartSLOduration=189.599318716 podStartE2EDuration="3m9.599318716s" podCreationTimestamp="2026-04-13 23:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 23:19:22.781111427 +0000 UTC m=+209.176235258" watchObservedRunningTime="2026-04-13 23:19:25.599318716 +0000 UTC m=+211.994442547"
Apr 13 23:19:27.211670 kubelet[2954]: E0413 23:19:27.209581 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:19:27.259932 kubelet[2954]: E0413 23:19:27.234958 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:19:36.781762 kubelet[2954]: E0413 23:19:36.763909 2954 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:19:37.915594 kubelet[2954]: E0413 23:19:37.913440 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:19:38.449060 kubelet[2954]: E0413 23:19:38.448477 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:19:38.982227 kubelet[2954]: E0413 23:19:38.979741 2954 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.577s"
Apr 13 23:19:39.697219 kubelet[2954]: E0413 23:19:39.696062 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:19:39.697219 kubelet[2954]: E0413 23:19:39.696653 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:19:40.906577 kubelet[2954]: E0413 23:19:40.905017 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:19:45.240980 kubelet[2954]: E0413 23:19:44.322006 2954 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:19:49.991668 kubelet[2954]: E0413 23:19:49.955803 2954 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.661s"
Apr 13 23:19:52.334735 kubelet[2954]: E0413 23:19:52.300203 2954 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:19:52.735817 kubelet[2954]: E0413 23:19:52.696251 2954 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.203s"
Apr 13 23:19:53.197290 kubelet[2954]: E0413 23:19:53.186518 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:19:54.560748 kubelet[2954]: E0413 23:19:54.557749 2954 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.037s"
Apr 13 23:19:56.381843 kubelet[2954]: E0413 23:19:56.364083 2954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:19:57.558321 kubelet[2954]: E0413 23:19:57.557818 2954 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:20:00.768628 kubelet[2954]: E0413 23:20:00.754411 2954 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.254s"
Apr 13 23:20:02.684977 kubelet[2954]: E0413 23:20:02.664306 2954 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:20:07.869632 kubelet[2954]: E0413 23:20:07.855638 2954 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:20:13.540942 kubelet[2954]: E0413 23:20:13.504518 2954 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:20:16.535097 kubelet[2954]: E0413 23:20:16.525589 2954 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.013s"
Apr 13 23:20:33.074020 systemd[1]: cri-containerd-e90ed1f1654e73990ff40b8569341b7ffe1b227f7c181261b0a8b24f1d18b051.scope: Deactivated successfully.
Apr 13 23:20:33.176605 systemd[1]: cri-containerd-e90ed1f1654e73990ff40b8569341b7ffe1b227f7c181261b0a8b24f1d18b051.scope: Consumed 20.607s CPU time.
Apr 13 23:20:35.305080 kubelet[2954]: E0413 23:20:35.276552 2954 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:20:36.546395 systemd[1]: cri-containerd-ce5d61e61bb3ced5a6f111e14ac511d671134ebf12441f8c05445bc4b80a07fd.scope: Deactivated successfully.
Apr 13 23:20:36.690885 systemd[1]: cri-containerd-ce5d61e61bb3ced5a6f111e14ac511d671134ebf12441f8c05445bc4b80a07fd.scope: Consumed 12.776s CPU time.
Apr 13 23:20:45.120781 sudo[1651]: pam_unix(sudo:session): session closed for user root
Apr 13 23:20:45.463477 sshd[1648]: pam_unix(sshd:session): session closed for user core