Apr 16 04:16:20.152639 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Apr 15 22:45:03 -00 2026
Apr 16 04:16:20.152680 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c
Apr 16 04:16:20.152694 kernel: BIOS-provided physical RAM map:
Apr 16 04:16:20.152701 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 16 04:16:20.152708 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 16 04:16:20.152715 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 16 04:16:20.152723 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 16 04:16:20.152729 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 16 04:16:20.152737 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 16 04:16:20.152745 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 16 04:16:20.152752 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 16 04:16:20.152759 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 16 04:16:20.152785 kernel: NX (Execute Disable) protection: active
Apr 16 04:16:20.152793 kernel: APIC: Static calls initialized
Apr 16 04:16:20.152802 kernel: SMBIOS 2.8 present.
Apr 16 04:16:20.152826 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 16 04:16:20.152834 kernel: Hypervisor detected: KVM
Apr 16 04:16:20.152843 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 16 04:16:20.152851 kernel: kvm-clock: using sched offset of 11498621736 cycles
Apr 16 04:16:20.152859 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 16 04:16:20.152867 kernel: tsc: Detected 2793.438 MHz processor
Apr 16 04:16:20.152875 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 16 04:16:20.152885 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 16 04:16:20.152893 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 16 04:16:20.152904 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 16 04:16:20.152912 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 16 04:16:20.152920 kernel: Using GB pages for direct mapping
Apr 16 04:16:20.152928 kernel: ACPI: Early table checksum verification disabled
Apr 16 04:16:20.152936 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 16 04:16:20.153601 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:16:20.153661 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:16:20.153669 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:16:20.153678 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 16 04:16:20.153727 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:16:20.153736 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:16:20.153743 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:16:20.153751 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:16:20.153759 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 16 04:16:20.153768 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 16 04:16:20.153776 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 16 04:16:20.153789 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 16 04:16:20.153800 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 16 04:16:20.153809 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 16 04:16:20.153818 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 16 04:16:20.153827 kernel: No NUMA configuration found
Apr 16 04:16:20.153836 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 16 04:16:20.153845 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 16 04:16:20.153855 kernel: Zone ranges:
Apr 16 04:16:20.153863 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 16 04:16:20.153871 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 16 04:16:20.153879 kernel: Normal empty
Apr 16 04:16:20.153887 kernel: Movable zone start for each node
Apr 16 04:16:20.153895 kernel: Early memory node ranges
Apr 16 04:16:20.153903 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 16 04:16:20.153911 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 16 04:16:20.153919 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 16 04:16:20.153930 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 16 04:16:20.153938 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 16 04:16:20.153989 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 16 04:16:20.153997 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 16 04:16:20.154005 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 16 04:16:20.154013 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 16 04:16:20.154021 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 16 04:16:20.154029 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 16 04:16:20.154037 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 16 04:16:20.154048 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 16 04:16:20.154057 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 16 04:16:20.154065 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 16 04:16:20.154073 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 16 04:16:20.154081 kernel: TSC deadline timer available
Apr 16 04:16:20.154089 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 16 04:16:20.154097 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 16 04:16:20.154105 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 16 04:16:20.154113 kernel: kvm-guest: setup PV sched yield
Apr 16 04:16:20.155008 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 16 04:16:20.155396 kernel: Booting paravirtualized kernel on KVM
Apr 16 04:16:20.155406 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 16 04:16:20.155416 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 16 04:16:20.155425 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 16 04:16:20.155434 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 16 04:16:20.155443 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 16 04:16:20.155452 kernel: kvm-guest: PV spinlocks enabled
Apr 16 04:16:20.155461 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 16 04:16:20.155472 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c
Apr 16 04:16:20.155485 kernel: random: crng init done
Apr 16 04:16:20.155493 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 16 04:16:20.155502 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 16 04:16:20.155510 kernel: Fallback order for Node 0: 0
Apr 16 04:16:20.155519 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 16 04:16:20.155528 kernel: Policy zone: DMA32
Apr 16 04:16:20.155536 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 16 04:16:20.155545 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 137896K reserved, 0K cma-reserved)
Apr 16 04:16:20.155556 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 16 04:16:20.155565 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 16 04:16:20.155573 kernel: ftrace: allocated 149 pages with 4 groups
Apr 16 04:16:20.155582 kernel: Dynamic Preempt: voluntary
Apr 16 04:16:20.155590 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 16 04:16:20.155600 kernel: rcu: RCU event tracing is enabled.
Apr 16 04:16:20.155608 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 16 04:16:20.155617 kernel: Trampoline variant of Tasks RCU enabled.
Apr 16 04:16:20.155625 kernel: Rude variant of Tasks RCU enabled.
Apr 16 04:16:20.155637 kernel: Tracing variant of Tasks RCU enabled.
Apr 16 04:16:20.155646 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 16 04:16:20.155654 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 16 04:16:20.155662 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 16 04:16:20.155689 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 16 04:16:20.155697 kernel: Console: colour VGA+ 80x25
Apr 16 04:16:20.155705 kernel: printk: console [ttyS0] enabled
Apr 16 04:16:20.155714 kernel: ACPI: Core revision 20230628
Apr 16 04:16:20.155723 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 16 04:16:20.155733 kernel: APIC: Switch to symmetric I/O mode setup
Apr 16 04:16:20.155741 kernel: x2apic enabled
Apr 16 04:16:20.155749 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 16 04:16:20.155757 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 16 04:16:20.155765 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 16 04:16:20.155772 kernel: kvm-guest: setup PV IPIs
Apr 16 04:16:20.155780 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 16 04:16:20.155789 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 16 04:16:20.155808 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 16 04:16:20.155817 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 16 04:16:20.155827 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 16 04:16:20.155838 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 16 04:16:20.155847 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 16 04:16:20.155856 kernel: Spectre V2 : Mitigation: Retpolines
Apr 16 04:16:20.155865 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 16 04:16:20.155875 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 16 04:16:20.155886 kernel: RETBleed: Vulnerable
Apr 16 04:16:20.155895 kernel: Speculative Store Bypass: Vulnerable
Apr 16 04:16:20.155905 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 16 04:16:20.156733 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 16 04:16:20.156796 kernel: active return thunk: its_return_thunk
Apr 16 04:16:20.156806 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 16 04:16:20.156816 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 16 04:16:20.156860 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 16 04:16:20.156870 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 16 04:16:20.156886 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 16 04:16:20.156896 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 16 04:16:20.156905 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 16 04:16:20.156915 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 16 04:16:20.156924 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 16 04:16:20.156934 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 16 04:16:20.157480 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 16 04:16:20.157533 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 16 04:16:20.157542 kernel: Freeing SMP alternatives memory: 32K
Apr 16 04:16:20.157563 kernel: pid_max: default: 32768 minimum: 301
Apr 16 04:16:20.157572 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 16 04:16:20.157581 kernel: landlock: Up and running.
Apr 16 04:16:20.157611 kernel: SELinux: Initializing.
Apr 16 04:16:20.157621 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 04:16:20.157630 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 04:16:20.157640 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 16 04:16:20.157678 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 04:16:20.157739 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 04:16:20.157766 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 04:16:20.157776 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 16 04:16:20.157786 kernel: signal: max sigframe size: 3632
Apr 16 04:16:20.157795 kernel: rcu: Hierarchical SRCU implementation.
Apr 16 04:16:20.157831 kernel: rcu: Max phase no-delay instances is 400.
Apr 16 04:16:20.157854 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 16 04:16:20.157876 kernel: smp: Bringing up secondary CPUs ...
Apr 16 04:16:20.157898 kernel: smpboot: x86: Booting SMP configuration:
Apr 16 04:16:20.157909 kernel: .... node #0, CPUs: #1 #2 #3
Apr 16 04:16:20.157921 kernel: smp: Brought up 1 node, 4 CPUs
Apr 16 04:16:20.157929 kernel: smpboot: Max logical packages: 1
Apr 16 04:16:20.157937 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 16 04:16:20.157967 kernel: devtmpfs: initialized
Apr 16 04:16:20.157975 kernel: x86/mm: Memory block size: 128MB
Apr 16 04:16:20.157984 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 16 04:16:20.157993 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 16 04:16:20.158001 kernel: pinctrl core: initialized pinctrl subsystem
Apr 16 04:16:20.158010 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 16 04:16:20.158023 kernel: audit: initializing netlink subsys (disabled)
Apr 16 04:16:20.158032 kernel: audit: type=2000 audit(1776312971.168:1): state=initialized audit_enabled=0 res=1
Apr 16 04:16:20.158042 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 16 04:16:20.158051 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 16 04:16:20.158061 kernel: cpuidle: using governor menu
Apr 16 04:16:20.158070 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 16 04:16:20.158080 kernel: dca service started, version 1.12.1
Apr 16 04:16:20.158089 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 16 04:16:20.158099 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 16 04:16:20.158111 kernel: PCI: Using configuration type 1 for base access
Apr 16 04:16:20.158120 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 16 04:16:20.158130 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 16 04:16:20.158139 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 16 04:16:20.158148 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 16 04:16:20.158158 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 16 04:16:20.158168 kernel: ACPI: Added _OSI(Module Device)
Apr 16 04:16:20.158178 kernel: ACPI: Added _OSI(Processor Device)
Apr 16 04:16:20.158187 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 16 04:16:20.158703 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 16 04:16:20.158758 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 16 04:16:20.158767 kernel: ACPI: Interpreter enabled
Apr 16 04:16:20.158777 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 16 04:16:20.158787 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 16 04:16:20.158797 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 16 04:16:20.158807 kernel: PCI: Using E820 reservations for host bridge windows
Apr 16 04:16:20.158816 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 16 04:16:20.158826 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 16 04:16:20.164893 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 16 04:16:20.165831 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 16 04:16:20.165935 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 16 04:16:20.167350 kernel: PCI host bridge to bus 0000:00
Apr 16 04:16:20.169319 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 16 04:16:20.169425 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 16 04:16:20.169514 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 16 04:16:20.169593 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 16 04:16:20.172689 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 16 04:16:20.172789 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 16 04:16:20.172870 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 16 04:16:20.175305 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 16 04:16:20.176629 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 16 04:16:20.176741 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 16 04:16:20.176828 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 16 04:16:20.176914 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 16 04:16:20.178040 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 16 04:16:20.178251 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 16 04:16:20.178350 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 16 04:16:20.178451 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 16 04:16:20.178564 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 16 04:16:20.178737 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 16 04:16:20.178831 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 16 04:16:20.178916 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 16 04:16:20.180420 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 16 04:16:20.181429 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 16 04:16:20.181553 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 16 04:16:20.181646 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 16 04:16:20.181735 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 16 04:16:20.181829 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 16 04:16:20.182805 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 16 04:16:20.182916 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 16 04:16:20.183119 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 16 04:16:20.183257 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 16 04:16:20.183349 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 16 04:16:20.183495 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 16 04:16:20.183585 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 16 04:16:20.183597 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 16 04:16:20.183607 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 16 04:16:20.183616 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 16 04:16:20.183631 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 16 04:16:20.183639 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 16 04:16:20.183648 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 16 04:16:20.183657 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 16 04:16:20.183666 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 16 04:16:20.183675 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 16 04:16:20.183684 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 16 04:16:20.183693 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 16 04:16:20.183703 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 16 04:16:20.183714 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 16 04:16:20.183726 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 16 04:16:20.183737 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 16 04:16:20.183746 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 16 04:16:20.183756 kernel: iommu: Default domain type: Translated
Apr 16 04:16:20.183765 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 16 04:16:20.183775 kernel: PCI: Using ACPI for IRQ routing
Apr 16 04:16:20.183785 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 16 04:16:20.183794 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 16 04:16:20.183806 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 16 04:16:20.183900 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 16 04:16:20.184880 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 16 04:16:20.186319 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 16 04:16:20.186340 kernel: vgaarb: loaded
Apr 16 04:16:20.186350 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 16 04:16:20.186360 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 16 04:16:20.186370 kernel: clocksource: Switched to clocksource kvm-clock
Apr 16 04:16:20.186380 kernel: VFS: Disk quotas dquot_6.6.0
Apr 16 04:16:20.186394 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 16 04:16:20.186404 kernel: pnp: PnP ACPI init
Apr 16 04:16:20.187938 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 16 04:16:20.187994 kernel: pnp: PnP ACPI: found 6 devices
Apr 16 04:16:20.188005 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 16 04:16:20.188014 kernel: NET: Registered PF_INET protocol family
Apr 16 04:16:20.188023 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 16 04:16:20.188032 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 16 04:16:20.188054 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 16 04:16:20.188064 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 16 04:16:20.188073 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 16 04:16:20.188082 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 16 04:16:20.188091 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 04:16:20.188100 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 04:16:20.188109 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 16 04:16:20.188118 kernel: NET: Registered PF_XDP protocol family
Apr 16 04:16:20.188349 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 16 04:16:20.188494 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 16 04:16:20.188573 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 16 04:16:20.188645 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 16 04:16:20.188718 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 16 04:16:20.188797 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 16 04:16:20.188809 kernel: PCI: CLS 0 bytes, default 64
Apr 16 04:16:20.188819 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 16 04:16:20.188828 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 16 04:16:20.188842 kernel: Initialise system trusted keyrings
Apr 16 04:16:20.188851 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 16 04:16:20.188860 kernel: Key type asymmetric registered
Apr 16 04:16:20.188869 kernel: Asymmetric key parser 'x509' registered
Apr 16 04:16:20.188877 kernel: hrtimer: interrupt took 11614199 ns
Apr 16 04:16:20.188886 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 16 04:16:20.188895 kernel: io scheduler mq-deadline registered
Apr 16 04:16:20.188904 kernel: io scheduler kyber registered
Apr 16 04:16:20.188913 kernel: io scheduler bfq registered
Apr 16 04:16:20.188925 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 16 04:16:20.188934 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 16 04:16:20.188978 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 16 04:16:20.188988 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 16 04:16:20.188997 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 16 04:16:20.189006 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 16 04:16:20.189015 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 16 04:16:20.189024 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 16 04:16:20.189033 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 16 04:16:20.196748 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 16 04:16:20.196920 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 16 04:16:20.205916 kernel: rtc_cmos 00:04: registered as rtc0
Apr 16 04:16:20.209570 kernel: rtc_cmos 00:04: setting system clock to 2026-04-16T04:16:16 UTC (1776312976)
Apr 16 04:16:20.209677 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 16 04:16:20.209689 kernel: intel_pstate: CPU model not supported
Apr 16 04:16:20.209699 kernel: NET: Registered PF_INET6 protocol family
Apr 16 04:16:20.209709 kernel: Segment Routing with IPv6
Apr 16 04:16:20.209747 kernel: In-situ OAM (IOAM) with IPv6
Apr 16 04:16:20.209756 kernel: NET: Registered PF_PACKET protocol family
Apr 16 04:16:20.209765 kernel: Key type dns_resolver registered
Apr 16 04:16:20.209775 kernel: IPI shorthand broadcast: enabled
Apr 16 04:16:20.209784 kernel: sched_clock: Marking stable (6072132419, 794412842)->(7602701142, -736155881)
Apr 16 04:16:20.209793 kernel: registered taskstats version 1
Apr 16 04:16:20.209802 kernel: Loading compiled-in X.509 certificates
Apr 16 04:16:20.209812 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6e6d886174c86dc730e1b14e46a1dab518d9b090'
Apr 16 04:16:20.209820 kernel: Key type .fscrypt registered
Apr 16 04:16:20.209831 kernel: Key type fscrypt-provisioning registered
Apr 16 04:16:20.209840 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 16 04:16:20.209849 kernel: ima: Allocated hash algorithm: sha1
Apr 16 04:16:20.209857 kernel: ima: No architecture policies found
Apr 16 04:16:20.209866 kernel: clk: Disabling unused clocks
Apr 16 04:16:20.209875 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 16 04:16:20.209883 kernel: Write protecting the kernel read-only data: 36864k
Apr 16 04:16:20.209892 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 16 04:16:20.209900 kernel: Run /init as init process
Apr 16 04:16:20.209912 kernel: with arguments:
Apr 16 04:16:20.209921 kernel: /init
Apr 16 04:16:20.209929 kernel: with environment:
Apr 16 04:16:20.209938 kernel: HOME=/
Apr 16 04:16:20.209983 kernel: TERM=linux
Apr 16 04:16:20.209997 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 16 04:16:20.210010 systemd[1]: Detected virtualization kvm.
Apr 16 04:16:20.210019 systemd[1]: Detected architecture x86-64.
Apr 16 04:16:20.210032 systemd[1]: Running in initrd.
Apr 16 04:16:20.210041 systemd[1]: No hostname configured, using default hostname.
Apr 16 04:16:20.210049 systemd[1]: Hostname set to .
Apr 16 04:16:20.210059 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 04:16:20.210068 systemd[1]: Queued start job for default target initrd.target.
Apr 16 04:16:20.210077 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 04:16:20.210086 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 04:16:20.210096 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 16 04:16:20.210109 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 04:16:20.210118 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 16 04:16:20.213903 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 16 04:16:20.215072 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 16 04:16:20.218424 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 16 04:16:20.222397 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 04:16:20.224848 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 04:16:20.228323 systemd[1]: Reached target paths.target - Path Units.
Apr 16 04:16:20.228357 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 04:16:20.228368 systemd[1]: Reached target swap.target - Swaps.
Apr 16 04:16:20.228392 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 04:16:20.228403 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 04:16:20.228415 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 04:16:20.228690 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 16 04:16:20.228700 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 16 04:16:20.228709 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 04:16:20.228719 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 04:16:20.228728 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 04:16:20.228738 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 04:16:20.228749 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 16 04:16:20.228806 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 04:16:20.228816 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 16 04:16:20.228870 systemd[1]: Starting systemd-fsck-usr.service...
Apr 16 04:16:20.228890 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 04:16:20.228919 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 04:16:20.228960 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 04:16:20.228970 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 16 04:16:20.228991 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 04:16:20.229023 systemd[1]: Finished systemd-fsck-usr.service.
Apr 16 04:16:20.229057 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 04:16:20.232936 systemd-journald[195]: Collecting audit messages is disabled.
Apr 16 04:16:20.237324 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 04:16:20.237340 systemd-journald[195]: Journal started
Apr 16 04:16:20.237370 systemd-journald[195]: Runtime Journal (/run/log/journal/8c33969400ba4971b3e0e42f01ea6ca7) is 6.0M, max 48.4M, 42.3M free.
Apr 16 04:16:20.241288 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 16 04:16:20.164528 systemd-modules-load[196]: Inserted module 'overlay'
Apr 16 04:16:20.784638 kernel: Bridge firewalling registered
Apr 16 04:16:20.249360 systemd-modules-load[196]: Inserted module 'br_netfilter'
Apr 16 04:16:20.802008 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 04:16:20.802676 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 04:16:20.819056 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 04:16:21.083990 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 04:16:21.127616 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 04:16:21.162098 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 04:16:21.182393 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 04:16:21.269981 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 04:16:21.291692 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 16 04:16:21.292056 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 04:16:21.309668 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 04:16:21.413844 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 04:16:21.521642 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 04:16:21.632880 dracut-cmdline[227]: dracut-dracut-053
Apr 16 04:16:21.703898 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c
Apr 16 04:16:21.936024 systemd-resolved[237]: Positive Trust Anchors:
Apr 16 04:16:21.936043 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 04:16:21.936068 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 04:16:21.967041 systemd-resolved[237]: Defaulting to hostname 'linux'.
Apr 16 04:16:21.978871 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 04:16:21.988102 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 04:16:23.606538 kernel: SCSI subsystem initialized
Apr 16 04:16:23.949337 kernel: Loading iSCSI transport class v2.0-870.
Apr 16 04:16:24.196365 kernel: iscsi: registered transport (tcp)
Apr 16 04:16:24.471590 kernel: iscsi: registered transport (qla4xxx)
Apr 16 04:16:24.472081 kernel: QLogic iSCSI HBA Driver
Apr 16 04:16:24.923915 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 16 04:16:24.971881 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 16 04:16:25.666915 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 16 04:16:25.668617 kernel: device-mapper: uevent: version 1.0.3
Apr 16 04:16:25.668641 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 16 04:16:27.307169 kernel: raid6: avx512x4 gen() 7738 MB/s
Apr 16 04:16:27.340164 kernel: raid6: avx512x2 gen() 9055 MB/s
Apr 16 04:16:27.353956 kernel: raid6: avx512x1 gen() 5518 MB/s
Apr 16 04:16:27.386361 kernel: raid6: avx2x4 gen() 8333 MB/s
Apr 16 04:16:27.415950 kernel: raid6: avx2x2 gen() 12023 MB/s
Apr 16 04:16:27.469022 kernel: raid6: avx2x1 gen() 3915 MB/s
Apr 16 04:16:27.469937 kernel: raid6: using algorithm avx2x2 gen() 12023 MB/s
Apr 16 04:16:27.501705 kernel: raid6: .... xor() 3745 MB/s, rmw enabled
Apr 16 04:16:27.503941 kernel: raid6: using avx512x2 recovery algorithm
Apr 16 04:16:28.181006 kernel: xor: automatically using best checksumming function avx
Apr 16 04:16:31.166435 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 16 04:16:31.202609 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 04:16:31.267535 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 04:16:31.411379 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Apr 16 04:16:31.441591 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 04:16:31.479444 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 16 04:16:31.533025 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation
Apr 16 04:16:31.731658 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 04:16:31.753831 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 04:16:31.992962 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 04:16:32.049352 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 16 04:16:32.099192 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 16 04:16:32.117706 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 04:16:32.186437 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 04:16:32.189857 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 04:16:32.238640 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 16 04:16:32.295185 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 04:16:32.437284 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 04:16:32.437714 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 04:16:32.455834 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 04:16:32.465172 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 04:16:32.511485 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 16 04:16:32.512899 kernel: cryptd: max_cpu_qlen set to 1000
Apr 16 04:16:32.465845 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 04:16:32.579863 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 16 04:16:32.477941 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 04:16:32.597677 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 04:16:32.630848 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 16 04:16:32.630905 kernel: GPT:9289727 != 19775487
Apr 16 04:16:32.630916 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 16 04:16:32.630927 kernel: GPT:9289727 != 19775487
Apr 16 04:16:32.630937 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 16 04:16:32.630948 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 04:16:32.854609 kernel: libata version 3.00 loaded.
Apr 16 04:16:33.075915 kernel: ahci 0000:00:1f.2: version 3.0
Apr 16 04:16:33.081448 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 16 04:16:33.081472 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 16 04:16:33.081643 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 16 04:16:33.113546 kernel: scsi host0: ahci
Apr 16 04:16:33.155802 kernel: scsi host1: ahci
Apr 16 04:16:33.165619 kernel: scsi host2: ahci
Apr 16 04:16:33.165789 kernel: scsi host3: ahci
Apr 16 04:16:33.165927 kernel: scsi host4: ahci
Apr 16 04:16:33.180019 kernel: scsi host5: ahci
Apr 16 04:16:33.180838 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 31
Apr 16 04:16:33.180855 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 31
Apr 16 04:16:33.180866 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 31
Apr 16 04:16:33.180877 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 31
Apr 16 04:16:33.180888 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 31
Apr 16 04:16:33.180898 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 31
Apr 16 04:16:33.281027 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 16 04:16:33.281254 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (460)
Apr 16 04:16:33.282249 kernel: BTRFS: device fsid 936fcbd8-a8ab-4e87-b115-d77c7a08e984 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (466)
Apr 16 04:16:33.312818 kernel: AES CTR mode by8 optimization enabled
Apr 16 04:16:33.417579 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 16 04:16:33.585498 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 16 04:16:33.585535 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 16 04:16:33.585548 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 16 04:16:33.585569 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 16 04:16:33.514370 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 04:16:33.606168 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 16 04:16:33.606232 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 16 04:16:33.606247 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 16 04:16:33.606259 kernel: ata3.00: applying bridge limits
Apr 16 04:16:33.618515 kernel: ata3.00: configured for UDMA/100
Apr 16 04:16:33.620935 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 16 04:16:33.635837 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 16 04:16:33.669249 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 16 04:16:33.691072 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 16 04:16:33.700734 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 16 04:16:33.724053 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 16 04:16:33.741109 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 04:16:33.766616 disk-uuid[558]: Primary Header is updated.
Apr 16 04:16:33.766616 disk-uuid[558]: Secondary Entries is updated.
Apr 16 04:16:33.766616 disk-uuid[558]: Secondary Header is updated.
Apr 16 04:16:33.802194 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 04:16:33.819741 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 04:16:33.886334 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 04:16:33.908307 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 16 04:16:33.908590 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 16 04:16:33.946299 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 16 04:16:34.956337 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 04:16:34.969313 disk-uuid[559]: The operation has completed successfully.
Apr 16 04:16:35.076248 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 16 04:16:35.076395 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 16 04:16:35.142933 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 16 04:16:35.250602 sh[596]: Success
Apr 16 04:16:35.397113 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 16 04:16:35.932709 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 16 04:16:36.018545 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 16 04:16:36.114142 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 16 04:16:36.231911 kernel: BTRFS info (device dm-0): first mount of filesystem 936fcbd8-a8ab-4e87-b115-d77c7a08e984
Apr 16 04:16:36.232439 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 16 04:16:36.232457 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 16 04:16:36.237103 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 16 04:16:36.241921 kernel: BTRFS info (device dm-0): using free space tree
Apr 16 04:16:36.298771 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 16 04:16:36.308359 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 16 04:16:36.384119 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 16 04:16:36.424615 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 16 04:16:36.482540 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 04:16:36.482792 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 04:16:36.482812 kernel: BTRFS info (device vda6): using free space tree
Apr 16 04:16:36.516855 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 16 04:16:36.591637 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 16 04:16:36.602818 kernel: BTRFS info (device vda6): last unmount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 04:16:36.663045 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 16 04:16:36.683799 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 16 04:16:37.982961 ignition[692]: Ignition 2.19.0
Apr 16 04:16:37.983414 ignition[692]: Stage: fetch-offline
Apr 16 04:16:37.983687 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Apr 16 04:16:37.983706 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 04:16:37.984035 ignition[692]: parsed url from cmdline: ""
Apr 16 04:16:37.984042 ignition[692]: no config URL provided
Apr 16 04:16:37.984050 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 04:16:37.984062 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Apr 16 04:16:37.984161 ignition[692]: op(1): [started] loading QEMU firmware config module
Apr 16 04:16:37.984167 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 16 04:16:38.052700 ignition[692]: op(1): [finished] loading QEMU firmware config module
Apr 16 04:16:38.555517 ignition[692]: parsing config with SHA512: 5f8bfc4817841c12725bf94c78c386362b380ad0f4a11e9252079c8d5d75a79a2b17e1bb2ceb8bd2afddc9359af301acf7badcde5f94d65f33ed6b3947132aa7
Apr 16 04:16:38.604866 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 04:16:38.670286 unknown[692]: fetched base config from "system"
Apr 16 04:16:38.675515 ignition[692]: fetch-offline: fetch-offline passed
Apr 16 04:16:38.670299 unknown[692]: fetched user config from "qemu"
Apr 16 04:16:38.676145 ignition[692]: Ignition finished successfully
Apr 16 04:16:38.678500 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 04:16:38.694493 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 04:16:38.968979 systemd-networkd[784]: lo: Link UP
Apr 16 04:16:38.973390 systemd-networkd[784]: lo: Gained carrier
Apr 16 04:16:38.986102 systemd-networkd[784]: Enumeration completed
Apr 16 04:16:38.986865 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 04:16:38.986869 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 04:16:38.989936 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 04:16:38.992873 systemd-networkd[784]: eth0: Link UP
Apr 16 04:16:38.992877 systemd-networkd[784]: eth0: Gained carrier
Apr 16 04:16:38.992888 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 04:16:39.025643 systemd[1]: Reached target network.target - Network.
Apr 16 04:16:39.071046 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 16 04:16:39.108977 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 16 04:16:39.109885 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 16 04:16:40.002352 ignition[788]: Ignition 2.19.0
Apr 16 04:16:40.002408 ignition[788]: Stage: kargs
Apr 16 04:16:40.026844 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Apr 16 04:16:40.026883 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 04:16:40.086687 ignition[788]: kargs: kargs passed
Apr 16 04:16:40.086855 ignition[788]: Ignition finished successfully
Apr 16 04:16:40.120956 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 16 04:16:40.195322 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 16 04:16:40.481563 ignition[797]: Ignition 2.19.0
Apr 16 04:16:40.490243 ignition[797]: Stage: disks
Apr 16 04:16:40.491461 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Apr 16 04:16:40.491480 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 04:16:40.494549 ignition[797]: disks: disks passed
Apr 16 04:16:40.494659 ignition[797]: Ignition finished successfully
Apr 16 04:16:40.514674 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 16 04:16:40.536072 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 16 04:16:40.536607 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 16 04:16:40.557763 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 04:16:40.569588 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 04:16:40.583642 systemd[1]: Reached target basic.target - Basic System.
Apr 16 04:16:40.611949 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 16 04:16:40.879129 systemd-networkd[784]: eth0: Gained IPv6LL
Apr 16 04:16:40.966251 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 16 04:16:41.005596 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 16 04:16:41.103948 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 16 04:16:41.912026 kernel: EXT4-fs (vda9): mounted filesystem 9ac74074-8829-477f-a4c4-5563740ec49b r/w with ordered data mode. Quota mode: none.
Apr 16 04:16:41.921861 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 16 04:16:41.948862 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 16 04:16:41.996794 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 04:16:42.077976 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 16 04:16:42.088132 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 16 04:16:42.088550 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 16 04:16:42.151108 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816)
Apr 16 04:16:42.088591 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 04:16:42.167892 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 04:16:42.169591 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 04:16:42.169613 kernel: BTRFS info (device vda6): using free space tree
Apr 16 04:16:42.162730 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 16 04:16:42.199988 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 16 04:16:42.225895 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 04:16:42.301299 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 16 04:16:43.079665 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Apr 16 04:16:43.149798 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Apr 16 04:16:43.265887 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Apr 16 04:16:43.316298 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 16 04:16:45.055694 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 16 04:16:45.110444 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 16 04:16:45.119852 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 16 04:16:45.299270 kernel: BTRFS info (device vda6): last unmount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 04:16:45.301956 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 16 04:16:45.411903 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 16 04:16:45.607143 ignition[931]: INFO : Ignition 2.19.0
Apr 16 04:16:45.610552 ignition[931]: INFO : Stage: mount
Apr 16 04:16:45.610552 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 04:16:45.610552 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 04:16:45.610552 ignition[931]: INFO : mount: mount passed
Apr 16 04:16:45.610552 ignition[931]: INFO : Ignition finished successfully
Apr 16 04:16:45.643489 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 16 04:16:45.705091 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 16 04:16:46.330437 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 04:16:46.609438 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944)
Apr 16 04:16:46.619474 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 04:16:46.632145 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 04:16:46.632722 kernel: BTRFS info (device vda6): using free space tree
Apr 16 04:16:46.653500 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 16 04:16:46.693818 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 04:16:47.439926 ignition[960]: INFO : Ignition 2.19.0
Apr 16 04:16:47.475869 ignition[960]: INFO : Stage: files
Apr 16 04:16:47.475869 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 04:16:47.475869 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 04:16:47.475869 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Apr 16 04:16:47.517583 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 16 04:16:47.517583 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 16 04:16:47.541508 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 16 04:16:47.565817 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 16 04:16:47.580165 unknown[960]: wrote ssh authorized keys file for user: core
Apr 16 04:16:47.597597 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 16 04:16:47.597597 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 16 04:16:47.597597 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 16 04:16:47.597597 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 16 04:16:47.597597 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 16 04:16:47.953000 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 16 04:16:48.310830 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 16 04:16:48.310830 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 16 04:16:48.335838 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 16 04:16:48.335838 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 04:16:48.360494 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 04:16:48.360494 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 04:16:48.360494 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 04:16:48.360494 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 04:16:48.360494 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 04:16:48.360494 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 04:16:48.360494 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 04:16:48.360494 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 16 04:16:48.360494 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 16 04:16:48.360494 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 16 04:16:48.360494 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 16 04:16:49.056289 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 16 04:16:57.261846 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 16 04:16:57.276679 ignition[960]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 16 04:16:57.306408 ignition[960]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 16 04:16:57.318126 ignition[960]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 16 04:16:57.318126 ignition[960]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 16 04:16:57.318126 ignition[960]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 16 04:16:57.318126 ignition[960]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 04:16:57.318126 ignition[960]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 04:16:57.318126 ignition[960]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 16 04:16:57.318126 ignition[960]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Apr 16 04:16:57.318126 ignition[960]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 16 04:16:57.318126 ignition[960]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 16 04:16:57.318126 ignition[960]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Apr 16 04:16:57.318126 ignition[960]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Apr 16 04:16:58.138553 ignition[960]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 16 04:16:58.391050 ignition[960]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 16 04:16:58.407689 ignition[960]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 16 04:16:58.407689 ignition[960]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Apr 16 04:16:58.407689 ignition[960]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Apr 16 04:16:58.436694 ignition[960]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 04:16:58.436694 ignition[960]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 04:16:58.436694 ignition[960]: INFO : files: files passed
Apr 16 04:16:58.436694 ignition[960]: INFO : Ignition finished successfully
Apr 16 04:16:58.451180 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 16 04:16:58.486576 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 16 04:16:58.500814 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 16 04:16:58.507389 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 16 04:16:58.507533 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 16 04:16:58.576937 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 16 04:16:58.595229 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 04:16:58.595229 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 04:16:58.622663 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 04:16:58.647156 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 04:16:58.660904 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 16 04:16:58.688964 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 16 04:16:59.400057 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 16 04:16:59.409980 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 16 04:16:59.442953 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 16 04:16:59.454858 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 16 04:16:59.465152 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 16 04:16:59.492194 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 16 04:16:59.854668 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 04:16:59.996007 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 16 04:17:00.372648 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 16 04:17:00.390270 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 04:17:00.417137 systemd[1]: Stopped target timers.target - Timer Units.
Apr 16 04:17:00.420381 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 16 04:17:00.420975 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 04:17:00.464879 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 16 04:17:00.465676 systemd[1]: Stopped target basic.target - Basic System.
Apr 16 04:17:00.513908 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 16 04:17:00.539806 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 04:17:00.550847 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 16 04:17:00.557730 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 16 04:17:00.579036 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 04:17:00.587997 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 16 04:17:00.656713 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 16 04:17:00.688839 systemd[1]: Stopped target swap.target - Swaps.
Apr 16 04:17:00.713959 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 16 04:17:00.721161 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 04:17:00.753995 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 16 04:17:00.759523 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 04:17:00.797826 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 16 04:17:00.807555 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 04:17:00.815501 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 16 04:17:00.816343 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 16 04:17:00.857635 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 16 04:17:00.858898 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 04:17:00.883074 systemd[1]: Stopped target paths.target - Path Units.
Apr 16 04:17:00.897731 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 16 04:17:00.904840 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 04:17:00.940398 systemd[1]: Stopped target slices.target - Slice Units.
Apr 16 04:17:00.963455 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 16 04:17:00.981095 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 16 04:17:00.982503 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 04:17:01.011590 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 16 04:17:01.021431 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 04:17:01.094846 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 16 04:17:01.100167 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 04:17:01.148704 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 16 04:17:01.152809 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 16 04:17:01.202854 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 16 04:17:01.213187 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 16 04:17:01.217704 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 04:17:01.253946 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 16 04:17:01.260672 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 16 04:17:01.268546 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 04:17:01.279522 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 16 04:17:01.279851 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 04:17:01.317436 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 16 04:17:01.317586 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 16 04:17:01.429445 ignition[1015]: INFO : Ignition 2.19.0
Apr 16 04:17:01.436573 ignition[1015]: INFO : Stage: umount
Apr 16 04:17:01.440068 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 04:17:01.440068 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 04:17:01.453627 ignition[1015]: INFO : umount: umount passed
Apr 16 04:17:01.453627 ignition[1015]: INFO : Ignition finished successfully
Apr 16 04:17:01.447237 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 16 04:17:01.457864 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 16 04:17:01.458100 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 16 04:17:01.469799 systemd[1]: Stopped target network.target - Network.
Apr 16 04:17:01.478015 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 16 04:17:01.480919 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 16 04:17:01.491093 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 16 04:17:01.491382 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 16 04:17:01.507096 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 16 04:17:01.509627 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 16 04:17:01.536363 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 16 04:17:01.537142 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 16 04:17:01.551552 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 16 04:17:01.559349 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 16 04:17:01.568601 systemd-networkd[784]: eth0: DHCPv6 lease lost
Apr 16 04:17:01.569029 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 16 04:17:01.593083 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 16 04:17:01.594853 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 16 04:17:01.595057 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 16 04:17:01.639413 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 16 04:17:01.639693 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 16 04:17:01.719933 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 16 04:17:01.723541 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 04:17:01.819893 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 16 04:17:01.825096 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 16 04:17:01.893684 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 16 04:17:01.917000 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 16 04:17:01.921893 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 04:17:01.933916 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 16 04:17:01.949282 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 16 04:17:01.967045 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 16 04:17:01.969267 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 16 04:17:01.980394 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 16 04:17:01.995819 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 04:17:02.011001 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 04:17:02.105601 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 16 04:17:02.111654 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 04:17:02.149883 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 16 04:17:02.150012 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 16 04:17:02.167988 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 16 04:17:02.169191 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 04:17:02.240860 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 16 04:17:02.254433 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 04:17:02.328866 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 16 04:17:02.330599 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 16 04:17:02.344473 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 04:17:02.344730 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 04:17:02.428723 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 16 04:17:02.436439 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 16 04:17:02.439257 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 04:17:02.454318 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 04:17:02.454607 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 04:17:02.462964 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 16 04:17:02.463172 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 16 04:17:02.746983 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 16 04:17:02.752630 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 16 04:17:02.765069 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 16 04:17:02.873744 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 16 04:17:03.053054 systemd[1]: Switching root.
Apr 16 04:17:03.206081 systemd-journald[195]: Journal stopped
Apr 16 04:17:20.086158 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Apr 16 04:17:20.086297 kernel: SELinux: policy capability network_peer_controls=1
Apr 16 04:17:20.086323 kernel: SELinux: policy capability open_perms=1
Apr 16 04:17:20.086338 kernel: SELinux: policy capability extended_socket_class=1
Apr 16 04:17:20.086358 kernel: SELinux: policy capability always_check_network=0
Apr 16 04:17:20.086372 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 16 04:17:20.086391 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 16 04:17:20.086406 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 16 04:17:20.086419 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 16 04:17:20.086441 kernel: audit: type=1403 audit(1776313024.691:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 16 04:17:20.086457 systemd[1]: Successfully loaded SELinux policy in 271.731ms.
Apr 16 04:17:20.086481 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 311.732ms.
Apr 16 04:17:20.086497 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 16 04:17:20.086512 systemd[1]: Detected virtualization kvm.
Apr 16 04:17:20.086527 systemd[1]: Detected architecture x86-64.
Apr 16 04:17:20.086542 systemd[1]: Detected first boot.
Apr 16 04:17:20.086557 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 04:17:20.086571 zram_generator::config[1075]: No configuration found.
Apr 16 04:17:20.086588 systemd[1]: Populated /etc with preset unit settings.
Apr 16 04:17:20.086600 systemd[1]: Queued start job for default target multi-user.target.
Apr 16 04:17:20.086612 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 16 04:17:20.086628 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 16 04:17:20.086642 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 16 04:17:20.086654 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 16 04:17:20.086667 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 16 04:17:20.086681 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 16 04:17:20.086698 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 16 04:17:20.086711 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 16 04:17:20.086723 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 16 04:17:20.086736 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 04:17:20.086748 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 04:17:20.086760 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 16 04:17:20.086773 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 16 04:17:20.086785 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 16 04:17:20.086799 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 04:17:20.086813 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 16 04:17:20.086825 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 04:17:20.086837 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 16 04:17:20.086849 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 04:17:20.086861 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 04:17:20.086874 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 04:17:20.086886 systemd[1]: Reached target swap.target - Swaps.
Apr 16 04:17:20.086901 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 16 04:17:20.086915 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 16 04:17:20.086928 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 16 04:17:20.086941 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 16 04:17:20.086955 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 04:17:20.086967 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 04:17:20.086979 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 04:17:20.086991 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 16 04:17:20.087003 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 16 04:17:20.087016 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 16 04:17:20.087032 systemd[1]: Mounting media.mount - External Media Directory...
Apr 16 04:17:20.087046 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 04:17:20.087059 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 16 04:17:20.087071 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 16 04:17:20.087084 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 16 04:17:20.087096 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 16 04:17:20.087107 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 04:17:20.087119 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 04:17:20.087131 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 16 04:17:20.087150 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 04:17:20.087162 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 04:17:20.089978 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 04:17:20.090580 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 16 04:17:20.090607 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 04:17:20.090626 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 16 04:17:20.090643 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 16 04:17:20.090660 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 16 04:17:20.090731 kernel: ACPI: bus type drm_connector registered
Apr 16 04:17:20.090747 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 04:17:20.090761 kernel: fuse: init (API version 7.39)
Apr 16 04:17:20.090776 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 04:17:20.090792 kernel: loop: module loaded
Apr 16 04:17:20.090806 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 04:17:20.090821 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 16 04:17:20.090835 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 04:17:20.090850 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 04:17:20.090880 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 16 04:17:20.090893 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 16 04:17:20.090949 systemd-journald[1175]: Collecting audit messages is disabled.
Apr 16 04:17:20.090982 systemd[1]: Mounted media.mount - External Media Directory.
Apr 16 04:17:20.090998 systemd-journald[1175]: Journal started
Apr 16 04:17:20.091091 systemd-journald[1175]: Runtime Journal (/run/log/journal/8c33969400ba4971b3e0e42f01ea6ca7) is 6.0M, max 48.4M, 42.3M free.
Apr 16 04:17:20.101304 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 04:17:20.103283 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 16 04:17:20.111880 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 16 04:17:20.122719 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 16 04:17:20.131949 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 16 04:17:20.141911 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 04:17:20.154552 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 16 04:17:20.154901 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 16 04:17:20.158873 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 04:17:20.159146 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 04:17:20.177960 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 04:17:20.178415 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 04:17:20.181058 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 04:17:20.181386 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 04:17:20.201393 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 16 04:17:20.201828 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 16 04:17:20.207044 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 04:17:20.221552 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 04:17:20.226924 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 04:17:20.235658 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 04:17:20.250679 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 16 04:17:20.400763 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 04:17:20.452160 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 16 04:17:20.458909 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 16 04:17:20.467582 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 16 04:17:20.472580 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 16 04:17:20.504531 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 16 04:17:20.518090 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 04:17:20.544089 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 16 04:17:20.554356 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 04:17:20.565585 systemd-journald[1175]: Time spent on flushing to /var/log/journal/8c33969400ba4971b3e0e42f01ea6ca7 is 142.878ms for 938 entries.
Apr 16 04:17:20.565585 systemd-journald[1175]: System Journal (/var/log/journal/8c33969400ba4971b3e0e42f01ea6ca7) is 8.0M, max 195.6M, 187.6M free.
Apr 16 04:17:20.817345 systemd-journald[1175]: Received client request to flush runtime journal.
Apr 16 04:17:20.565455 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 04:17:20.696050 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 04:17:20.722262 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 04:17:20.726628 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 16 04:17:20.739580 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 16 04:17:20.753899 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 16 04:17:20.784377 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 16 04:17:20.828555 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 16 04:17:20.832021 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 16 04:17:20.876617 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 04:17:20.896134 udevadm[1221]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 16 04:17:20.919427 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Apr 16 04:17:20.919492 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Apr 16 04:17:20.946025 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 04:17:20.970685 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 16 04:17:21.388774 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 16 04:17:21.417923 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 04:17:21.587559 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Apr 16 04:17:21.587620 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Apr 16 04:17:21.608639 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 04:17:28.230153 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 16 04:17:28.309064 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 04:17:28.951718 systemd-udevd[1241]: Using default interface naming scheme 'v255'.
Apr 16 04:17:29.601734 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 04:17:29.664675 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 04:17:29.736081 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 16 04:17:29.856176 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 16 04:17:29.882474 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1260)
Apr 16 04:17:30.568557 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 16 04:17:30.889839 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 16 04:17:30.950269 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 16 04:17:30.970895 kernel: ACPI: button: Power Button [PWRF]
Apr 16 04:17:31.017408 systemd-networkd[1252]: lo: Link UP
Apr 16 04:17:31.018282 systemd-networkd[1252]: lo: Gained carrier
Apr 16 04:17:31.033878 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 16 04:17:31.034193 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 16 04:17:31.046862 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 16 04:17:31.020846 systemd-networkd[1252]: Enumeration completed
Apr 16 04:17:31.025861 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 04:17:31.048991 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 16 04:17:31.054999 systemd-networkd[1252]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 04:17:31.055042 systemd-networkd[1252]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 04:17:31.066974 systemd-networkd[1252]: eth0: Link UP
Apr 16 04:17:31.066993 systemd-networkd[1252]: eth0: Gained carrier
Apr 16 04:17:31.067074 systemd-networkd[1252]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 04:17:31.183822 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 16 04:17:31.284812 systemd-networkd[1252]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 16 04:17:31.621317 kernel: mousedev: PS/2 mouse device common for all mice
Apr 16 04:17:31.797355 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 04:17:32.968778 systemd-networkd[1252]: eth0: Gained IPv6LL
Apr 16 04:17:33.021997 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 16 04:17:33.281590 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 04:17:34.025169 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 16 04:17:34.097029 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 16 04:17:34.558803 lvm[1290]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 16 04:17:35.219189 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 16 04:17:35.239852 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 04:17:35.417010 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 16 04:17:35.588323 lvm[1293]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 16 04:17:35.834791 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 16 04:17:35.858122 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 16 04:17:35.873877 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 16 04:17:35.874034 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 04:17:35.876140 systemd[1]: Reached target machines.target - Containers.
Apr 16 04:17:35.907763 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 16 04:17:35.980415 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 16 04:17:36.035887 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 16 04:17:36.047133 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 04:17:36.060334 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 16 04:17:36.120377 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 16 04:17:36.169296 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 16 04:17:36.181551 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 16 04:17:36.190845 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 16 04:17:36.203403 kernel: loop0: detected capacity change from 0 to 228704
Apr 16 04:17:36.337068 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 16 04:17:36.338714 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 16 04:17:36.339864 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 16 04:17:36.417859 kernel: loop1: detected capacity change from 0 to 142488 Apr 16 04:17:36.722795 kernel: loop2: detected capacity change from 0 to 140768 Apr 16 04:17:37.043502 kernel: loop3: detected capacity change from 0 to 228704 Apr 16 04:17:37.219398 kernel: loop4: detected capacity change from 0 to 142488 Apr 16 04:17:37.407370 kernel: loop5: detected capacity change from 0 to 140768 Apr 16 04:17:37.897562 (sd-merge)[1313]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 16 04:17:37.905985 (sd-merge)[1313]: Merged extensions into '/usr'. Apr 16 04:17:37.975897 systemd[1]: Reloading requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)... Apr 16 04:17:37.975989 systemd[1]: Reloading... Apr 16 04:17:38.658442 zram_generator::config[1340]: No configuration found. Apr 16 04:17:41.012699 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 16 04:17:42.336792 systemd[1]: Reloading finished in 4359 ms. Apr 16 04:17:42.511164 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 16 04:17:42.687942 systemd[1]: Starting ensure-sysext.service... Apr 16 04:17:42.712546 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 16 04:17:42.731711 systemd[1]: Reloading requested from client PID 1382 ('systemctl') (unit ensure-sysext.service)... Apr 16 04:17:42.731726 systemd[1]: Reloading... Apr 16 04:17:42.881057 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 16 04:17:42.882619 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Apr 16 04:17:42.883641 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 16 04:17:42.883962 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Apr 16 04:17:42.884048 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Apr 16 04:17:42.892255 ldconfig[1297]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 16 04:17:42.906852 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 04:17:42.911439 systemd-tmpfiles[1383]: Skipping /boot Apr 16 04:17:43.171841 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 04:17:43.171856 systemd-tmpfiles[1383]: Skipping /boot Apr 16 04:17:43.371390 zram_generator::config[1408]: No configuration found. Apr 16 04:17:48.493826 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 16 04:17:50.390099 systemd[1]: Reloading finished in 7657 ms. Apr 16 04:17:50.544974 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 16 04:17:50.653424 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 04:17:50.970909 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 16 04:17:51.040861 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 16 04:17:51.147237 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 16 04:17:51.201396 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 16 04:17:51.264278 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Apr 16 04:17:51.294492 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 04:17:51.294744 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 04:17:51.308676 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 04:17:51.406724 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 16 04:17:51.450881 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 04:17:51.505160 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 04:17:51.515523 augenrules[1486]: No rules Apr 16 04:17:51.523642 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 04:17:51.532974 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 04:17:51.535485 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 16 04:17:51.561031 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 16 04:17:51.584180 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 04:17:51.584515 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 04:17:51.597146 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 16 04:17:51.604077 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 16 04:17:51.616669 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 04:17:51.617850 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 04:17:51.669718 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Apr 16 04:17:51.674924 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 04:17:51.700642 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 16 04:17:51.967591 systemd[1]: Finished ensure-sysext.service. Apr 16 04:17:51.978498 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 16 04:17:52.010752 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 04:17:52.010883 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 16 04:17:52.016683 systemd-resolved[1468]: Positive Trust Anchors: Apr 16 04:17:52.023707 systemd-resolved[1468]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 16 04:17:52.023984 systemd-resolved[1468]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 16 04:17:52.117811 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 16 04:17:52.148607 systemd-resolved[1468]: Defaulting to hostname 'linux'. Apr 16 04:17:52.171156 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Apr 16 04:17:52.173666 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 16 04:17:52.174076 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 04:17:52.281629 systemd[1]: Reached target network.target - Network. Apr 16 04:17:52.308613 systemd[1]: Reached target network-online.target - Network is Online. Apr 16 04:17:52.340975 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 04:17:52.359460 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 16 04:17:53.198131 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 16 04:17:54.288532 systemd-resolved[1468]: Clock change detected. Flushing caches. Apr 16 04:17:54.295752 systemd-timesyncd[1507]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 16 04:17:54.308069 systemd-timesyncd[1507]: Initial clock synchronization to Thu 2026-04-16 04:17:54.287555 UTC. Apr 16 04:17:54.326608 systemd[1]: Reached target sysinit.target - System Initialization. Apr 16 04:17:54.341832 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 16 04:17:54.357426 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 16 04:17:54.521176 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 16 04:17:54.542497 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 16 04:17:54.543152 systemd[1]: Reached target paths.target - Path Units. Apr 16 04:17:54.568913 systemd[1]: Reached target time-set.target - System Time Set. Apr 16 04:17:54.609057 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Apr 16 04:17:54.647581 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 16 04:17:54.674111 systemd[1]: Reached target timers.target - Timer Units. Apr 16 04:17:54.852185 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 16 04:17:54.897294 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 16 04:17:54.994539 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 16 04:17:55.026129 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 16 04:17:55.070956 systemd[1]: Reached target sockets.target - Socket Units. Apr 16 04:17:55.091204 systemd[1]: Reached target basic.target - Basic System. Apr 16 04:17:55.113003 systemd[1]: System is tainted: cgroupsv1 Apr 16 04:17:55.126635 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 16 04:17:55.127351 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 16 04:17:55.187990 systemd[1]: Starting containerd.service - containerd container runtime... Apr 16 04:17:55.297118 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 16 04:17:55.320069 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 16 04:17:55.345817 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 16 04:17:55.378427 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 16 04:17:55.390549 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 16 04:17:55.396365 jq[1518]: false Apr 16 04:17:55.509317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:17:55.529411 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Apr 16 04:17:55.577432 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 16 04:17:55.602970 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 16 04:17:55.630929 extend-filesystems[1520]: Found loop3 Apr 16 04:17:55.630929 extend-filesystems[1520]: Found loop4 Apr 16 04:17:55.630929 extend-filesystems[1520]: Found loop5 Apr 16 04:17:55.630929 extend-filesystems[1520]: Found sr0 Apr 16 04:17:55.630929 extend-filesystems[1520]: Found vda Apr 16 04:17:55.630929 extend-filesystems[1520]: Found vda1 Apr 16 04:17:55.630929 extend-filesystems[1520]: Found vda2 Apr 16 04:17:55.630929 extend-filesystems[1520]: Found vda3 Apr 16 04:17:55.630929 extend-filesystems[1520]: Found usr Apr 16 04:17:55.630929 extend-filesystems[1520]: Found vda4 Apr 16 04:17:55.630929 extend-filesystems[1520]: Found vda6 Apr 16 04:17:55.630929 extend-filesystems[1520]: Found vda7 Apr 16 04:17:55.630929 extend-filesystems[1520]: Found vda9 Apr 16 04:17:55.630929 extend-filesystems[1520]: Checking size of /dev/vda9 Apr 16 04:17:55.651009 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 16 04:17:55.651222 dbus-daemon[1517]: [system] SELinux support is enabled Apr 16 04:17:55.723526 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 16 04:17:55.755194 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 16 04:17:55.762958 extend-filesystems[1520]: Resized partition /dev/vda9 Apr 16 04:17:55.807968 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 16 04:17:55.762201 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 16 04:17:55.826421 extend-filesystems[1540]: resize2fs 1.47.1 (20-May-2024) Apr 16 04:17:55.854409 systemd[1]: Starting update-engine.service - Update Engine... 
Apr 16 04:17:55.866496 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 16 04:17:55.897893 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 16 04:17:55.976405 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1553) Apr 16 04:17:55.938814 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 16 04:17:55.995188 extend-filesystems[1540]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 16 04:17:55.995188 extend-filesystems[1540]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 16 04:17:55.995188 extend-filesystems[1540]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 16 04:17:56.030281 jq[1551]: true Apr 16 04:17:56.030729 extend-filesystems[1520]: Resized filesystem in /dev/vda9 Apr 16 04:17:56.040460 update_engine[1547]: I20260416 04:17:56.040263 1547 main.cc:92] Flatcar Update Engine starting Apr 16 04:17:56.063888 update_engine[1547]: I20260416 04:17:56.041834 1547 update_check_scheduler.cc:74] Next update check in 2m30s Apr 16 04:17:56.049066 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 16 04:17:56.049427 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 16 04:17:56.063886 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 16 04:17:56.065058 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 16 04:17:56.079050 systemd[1]: motdgen.service: Deactivated successfully. Apr 16 04:17:56.079360 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 16 04:17:56.097249 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 16 04:17:56.118013 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 16 04:17:56.118286 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 16 04:17:56.119512 systemd-logind[1539]: Watching system buttons on /dev/input/event1 (Power Button) Apr 16 04:17:56.119566 systemd-logind[1539]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 16 04:17:56.123329 systemd-logind[1539]: New seat seat0. Apr 16 04:17:56.141361 systemd[1]: Started systemd-logind.service - User Login Management. Apr 16 04:17:56.197352 (ntainerd)[1577]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 16 04:17:56.238039 jq[1574]: true Apr 16 04:17:56.543866 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 16 04:17:56.547195 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 16 04:17:56.686979 tar[1570]: linux-amd64/LICENSE Apr 16 04:17:56.699137 tar[1570]: linux-amd64/helm Apr 16 04:17:56.711018 dbus-daemon[1517]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 16 04:17:56.866810 systemd[1]: Started update-engine.service - Update Engine. Apr 16 04:17:56.952323 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 16 04:17:56.959912 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 16 04:17:56.977062 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 16 04:17:57.017598 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 16 04:17:57.018294 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Apr 16 04:17:57.037318 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 16 04:17:57.060018 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 16 04:17:57.351636 bash[1610]: Updated "/home/core/.ssh/authorized_keys" Apr 16 04:17:57.316075 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 16 04:17:57.549880 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 16 04:17:57.685132 sshd_keygen[1564]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 16 04:17:58.168928 locksmithd[1611]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 16 04:17:58.627427 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 16 04:17:58.927059 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 16 04:17:58.963566 systemd[1]: issuegen.service: Deactivated successfully. Apr 16 04:17:58.971224 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 16 04:17:59.187780 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 16 04:17:59.680109 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 16 04:17:59.759722 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 16 04:18:00.057186 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 16 04:18:00.075716 systemd[1]: Reached target getty.target - Login Prompts. Apr 16 04:18:00.571730 containerd[1577]: time="2026-04-16T04:18:00.486510873Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 16 04:18:01.282518 containerd[1577]: time="2026-04-16T04:18:01.279636628Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Apr 16 04:18:01.320569 containerd[1577]: time="2026-04-16T04:18:01.318765433Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 16 04:18:01.331213 containerd[1577]: time="2026-04-16T04:18:01.330189653Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 16 04:18:01.331213 containerd[1577]: time="2026-04-16T04:18:01.330620501Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 16 04:18:01.332254 containerd[1577]: time="2026-04-16T04:18:01.332222536Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 16 04:18:01.332706 containerd[1577]: time="2026-04-16T04:18:01.332327501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 16 04:18:01.332706 containerd[1577]: time="2026-04-16T04:18:01.332531410Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 04:18:01.332706 containerd[1577]: time="2026-04-16T04:18:01.332544383Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 16 04:18:01.333188 containerd[1577]: time="2026-04-16T04:18:01.333157081Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 04:18:01.333278 containerd[1577]: time="2026-04-16T04:18:01.333261696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 16 04:18:01.333343 containerd[1577]: time="2026-04-16T04:18:01.333327778Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 04:18:01.333390 containerd[1577]: time="2026-04-16T04:18:01.333379729Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 16 04:18:01.333549 containerd[1577]: time="2026-04-16T04:18:01.333532001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 16 04:18:01.334136 containerd[1577]: time="2026-04-16T04:18:01.334078028Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 16 04:18:01.335009 containerd[1577]: time="2026-04-16T04:18:01.334386669Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 04:18:01.335009 containerd[1577]: time="2026-04-16T04:18:01.334410905Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 16 04:18:01.335009 containerd[1577]: time="2026-04-16T04:18:01.334512458Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 16 04:18:01.335009 containerd[1577]: time="2026-04-16T04:18:01.334734627Z" level=info msg="metadata content store policy set" policy=shared Apr 16 04:18:01.423410 containerd[1577]: time="2026-04-16T04:18:01.422275830Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 16 04:18:01.464238 containerd[1577]: time="2026-04-16T04:18:01.463621180Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 16 04:18:01.464238 containerd[1577]: time="2026-04-16T04:18:01.469893221Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 16 04:18:01.464238 containerd[1577]: time="2026-04-16T04:18:01.470274794Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 16 04:18:01.464238 containerd[1577]: time="2026-04-16T04:18:01.470380296Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 16 04:18:01.464238 containerd[1577]: time="2026-04-16T04:18:01.471240844Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 16 04:18:01.481184 containerd[1577]: time="2026-04-16T04:18:01.481033377Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 16 04:18:01.488565 containerd[1577]: time="2026-04-16T04:18:01.481788172Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 16 04:18:01.488565 containerd[1577]: time="2026-04-16T04:18:01.482011494Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 16 04:18:01.488565 containerd[1577]: time="2026-04-16T04:18:01.487909489Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Apr 16 04:18:01.488565 containerd[1577]: time="2026-04-16T04:18:01.488394891Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 16 04:18:01.488565 containerd[1577]: time="2026-04-16T04:18:01.488423536Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 16 04:18:01.488565 containerd[1577]: time="2026-04-16T04:18:01.488443263Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 16 04:18:01.488565 containerd[1577]: time="2026-04-16T04:18:01.488530536Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 16 04:18:01.488565 containerd[1577]: time="2026-04-16T04:18:01.488629423Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 16 04:18:01.488565 containerd[1577]: time="2026-04-16T04:18:01.488833078Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 16 04:18:01.488565 containerd[1577]: time="2026-04-16T04:18:01.488863099Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 16 04:18:01.488565 containerd[1577]: time="2026-04-16T04:18:01.488879632Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 16 04:18:01.488565 containerd[1577]: time="2026-04-16T04:18:01.489165610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 16 04:18:01.488565 containerd[1577]: time="2026-04-16T04:18:01.489262034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Apr 16 04:18:01.488565 containerd[1577]: time="2026-04-16T04:18:01.489281956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 16 04:18:01.491085 containerd[1577]: time="2026-04-16T04:18:01.489301135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 16 04:18:01.491085 containerd[1577]: time="2026-04-16T04:18:01.489319296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 16 04:18:01.491085 containerd[1577]: time="2026-04-16T04:18:01.489336903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 16 04:18:01.491085 containerd[1577]: time="2026-04-16T04:18:01.489352714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 16 04:18:01.491085 containerd[1577]: time="2026-04-16T04:18:01.489369652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 16 04:18:01.491085 containerd[1577]: time="2026-04-16T04:18:01.489385659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 16 04:18:01.491085 containerd[1577]: time="2026-04-16T04:18:01.489486681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 16 04:18:01.491085 containerd[1577]: time="2026-04-16T04:18:01.489501946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 16 04:18:01.491085 containerd[1577]: time="2026-04-16T04:18:01.489517532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 16 04:18:01.491085 containerd[1577]: time="2026-04-16T04:18:01.489533670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1
Apr 16 04:18:01.491085 containerd[1577]: time="2026-04-16T04:18:01.489554807Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 16 04:18:01.491085 containerd[1577]: time="2026-04-16T04:18:01.489611433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 16 04:18:01.491085 containerd[1577]: time="2026-04-16T04:18:01.489630416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 16 04:18:01.491085 containerd[1577]: time="2026-04-16T04:18:01.489645038Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 16 04:18:01.492286 containerd[1577]: time="2026-04-16T04:18:01.489905031Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 16 04:18:01.492286 containerd[1577]: time="2026-04-16T04:18:01.489933361Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 16 04:18:01.492286 containerd[1577]: time="2026-04-16T04:18:01.489948485Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 16 04:18:01.492286 containerd[1577]: time="2026-04-16T04:18:01.489963964Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 16 04:18:01.492286 containerd[1577]: time="2026-04-16T04:18:01.489978422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 16 04:18:01.492286 containerd[1577]: time="2026-04-16T04:18:01.490007605Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 16 04:18:01.492286 containerd[1577]: time="2026-04-16T04:18:01.490027940Z" level=info msg="NRI interface is disabled by configuration."
Apr 16 04:18:01.492286 containerd[1577]: time="2026-04-16T04:18:01.490046467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 16 04:18:01.493167 containerd[1577]: time="2026-04-16T04:18:01.491050325Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 16 04:18:01.493167 containerd[1577]: time="2026-04-16T04:18:01.491224092Z" level=info msg="Connect containerd service"
Apr 16 04:18:01.493167 containerd[1577]: time="2026-04-16T04:18:01.491541096Z" level=info msg="using legacy CRI server"
Apr 16 04:18:01.493167 containerd[1577]: time="2026-04-16T04:18:01.491573043Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 16 04:18:01.493167 containerd[1577]: time="2026-04-16T04:18:01.492032996Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 16 04:18:01.499559 containerd[1577]: time="2026-04-16T04:18:01.497882536Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 16 04:18:01.513931 containerd[1577]: time="2026-04-16T04:18:01.511272969Z" level=info msg="Start subscribing containerd event"
Apr 16 04:18:01.513931 containerd[1577]: time="2026-04-16T04:18:01.512802508Z" level=info msg="Start recovering state"
Apr 16 04:18:01.513931 containerd[1577]: time="2026-04-16T04:18:01.513401136Z" level=info msg="Start event monitor"
Apr 16 04:18:01.513931 containerd[1577]: time="2026-04-16T04:18:01.513485179Z" level=info msg="Start snapshots syncer"
Apr 16 04:18:01.513931 containerd[1577]: time="2026-04-16T04:18:01.513508597Z" level=info msg="Start cni network conf syncer for default"
Apr 16 04:18:01.513931 containerd[1577]: time="2026-04-16T04:18:01.513517793Z" level=info msg="Start streaming server"
Apr 16 04:18:01.549152 containerd[1577]: time="2026-04-16T04:18:01.543268158Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 16 04:18:01.629850 containerd[1577]: time="2026-04-16T04:18:01.596075115Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 16 04:18:01.637113 systemd[1]: Started containerd.service - containerd container runtime.
Apr 16 04:18:01.655140 containerd[1577]: time="2026-04-16T04:18:01.651560182Z" level=info msg="containerd successfully booted in 1.229056s"
Apr 16 04:18:03.064737 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 16 04:18:03.137050 systemd[1]: Started sshd@0-10.0.0.6:22-10.0.0.1:55554.service - OpenSSH per-connection server daemon (10.0.0.1:55554).
Apr 16 04:18:04.847490 tar[1570]: linux-amd64/README.md
Apr 16 04:18:05.097352 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 16 04:18:06.392396 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 55554 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:18:06.673609 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:18:07.418596 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 16 04:18:07.469735 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 16 04:18:07.493607 systemd-logind[1539]: New session 1 of user core.
Apr 16 04:18:08.175349 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 16 04:18:08.249732 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 16 04:18:08.327440 (systemd)[1662]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 16 04:18:10.127782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:18:10.200069 (kubelet)[1677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:18:10.202498 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 16 04:18:11.052146 systemd[1662]: Queued start job for default target default.target.
Apr 16 04:18:11.056251 systemd[1662]: Created slice app.slice - User Application Slice.
Apr 16 04:18:11.056299 systemd[1662]: Reached target paths.target - Paths.
Apr 16 04:18:11.056314 systemd[1662]: Reached target timers.target - Timers.
Apr 16 04:18:11.098431 systemd[1662]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 16 04:18:11.872449 systemd[1662]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 16 04:18:11.873468 systemd[1662]: Reached target sockets.target - Sockets.
Apr 16 04:18:11.873593 systemd[1662]: Reached target basic.target - Basic System.
Apr 16 04:18:11.873763 systemd[1662]: Reached target default.target - Main User Target.
Apr 16 04:18:11.874354 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 16 04:18:11.876017 systemd[1662]: Startup finished in 3.104s.
Apr 16 04:18:11.963246 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 16 04:18:11.985420 systemd[1]: Startup finished in 53.585s (kernel) + 1min 6.405s (userspace) = 1min 59.991s.
Apr 16 04:18:12.766057 systemd[1]: Started sshd@1-10.0.0.6:22-10.0.0.1:49420.service - OpenSSH per-connection server daemon (10.0.0.1:49420).
Apr 16 04:18:13.596886 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 49420 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:18:13.676578 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:18:14.218034 systemd-logind[1539]: New session 2 of user core.
Apr 16 04:18:14.413977 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 16 04:18:15.245479 sshd[1689]: pam_unix(sshd:session): session closed for user core
Apr 16 04:18:15.303580 systemd[1]: sshd@1-10.0.0.6:22-10.0.0.1:49420.service: Deactivated successfully.
Apr 16 04:18:15.420590 systemd[1]: session-2.scope: Deactivated successfully.
Apr 16 04:18:15.547375 systemd-logind[1539]: Session 2 logged out. Waiting for processes to exit.
Apr 16 04:18:15.673611 systemd[1]: Started sshd@2-10.0.0.6:22-10.0.0.1:54764.service - OpenSSH per-connection server daemon (10.0.0.1:54764).
Apr 16 04:18:15.748028 systemd-logind[1539]: Removed session 2.
Apr 16 04:18:17.154293 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 54764 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:18:17.156218 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:18:18.554944 systemd-logind[1539]: New session 3 of user core.
Apr 16 04:18:18.876494 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 16 04:18:19.413716 sshd[1698]: pam_unix(sshd:session): session closed for user core
Apr 16 04:18:19.694981 systemd[1]: Started sshd@3-10.0.0.6:22-10.0.0.1:54768.service - OpenSSH per-connection server daemon (10.0.0.1:54768).
Apr 16 04:18:19.695982 systemd[1]: sshd@2-10.0.0.6:22-10.0.0.1:54764.service: Deactivated successfully.
Apr 16 04:18:20.267970 systemd[1]: session-3.scope: Deactivated successfully.
Apr 16 04:18:20.416088 systemd-logind[1539]: Session 3 logged out. Waiting for processes to exit.
Apr 16 04:18:20.530964 kubelet[1677]: E0416 04:18:20.526626 1677 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:18:20.554315 systemd-logind[1539]: Removed session 3.
Apr 16 04:18:20.567358 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:18:20.567603 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:18:21.154642 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 54768 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:18:21.195636 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:18:22.274188 systemd-logind[1539]: New session 4 of user core.
Apr 16 04:18:22.549494 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 16 04:18:23.163529 sshd[1704]: pam_unix(sshd:session): session closed for user core
Apr 16 04:18:23.460101 systemd[1]: Started sshd@4-10.0.0.6:22-10.0.0.1:54778.service - OpenSSH per-connection server daemon (10.0.0.1:54778).
Apr 16 04:18:23.468646 systemd[1]: sshd@3-10.0.0.6:22-10.0.0.1:54768.service: Deactivated successfully.
Apr 16 04:18:23.509493 systemd[1]: session-4.scope: Deactivated successfully.
Apr 16 04:18:23.520128 systemd-logind[1539]: Session 4 logged out. Waiting for processes to exit.
Apr 16 04:18:23.522181 systemd-logind[1539]: Removed session 4.
Apr 16 04:18:25.155039 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 54778 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:18:25.194424 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:18:26.137063 systemd-logind[1539]: New session 5 of user core.
Apr 16 04:18:26.470079 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 16 04:18:29.322520 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 16 04:18:29.361129 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 04:18:30.295305 sudo[1721]: pam_unix(sudo:session): session closed for user root
Apr 16 04:18:30.476559 sshd[1715]: pam_unix(sshd:session): session closed for user core
Apr 16 04:18:30.601267 systemd[1]: Started sshd@5-10.0.0.6:22-10.0.0.1:39654.service - OpenSSH per-connection server daemon (10.0.0.1:39654).
Apr 16 04:18:30.602262 systemd[1]: sshd@4-10.0.0.6:22-10.0.0.1:54778.service: Deactivated successfully.
Apr 16 04:18:30.717467 systemd[1]: session-5.scope: Deactivated successfully.
Apr 16 04:18:30.750963 systemd-logind[1539]: Session 5 logged out. Waiting for processes to exit.
Apr 16 04:18:30.754193 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 16 04:18:30.921386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:18:30.933358 systemd-logind[1539]: Removed session 5.
Apr 16 04:18:31.417275 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 39654 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:18:31.588546 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:18:32.075408 systemd-logind[1539]: New session 6 of user core.
Apr 16 04:18:32.244536 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 16 04:18:32.957980 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 16 04:18:32.960872 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 04:18:33.278784 sudo[1735]: pam_unix(sudo:session): session closed for user root
Apr 16 04:18:33.687805 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 16 04:18:33.698810 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 04:18:34.596267 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 16 04:18:34.895624 auditctl[1739]: No rules
Apr 16 04:18:35.037251 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 16 04:18:35.045211 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 16 04:18:35.712635 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 16 04:18:35.756556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:18:35.820877 (kubelet)[1751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:18:36.706294 augenrules[1771]: No rules
Apr 16 04:18:36.757480 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 16 04:18:36.943341 sudo[1734]: pam_unix(sudo:session): session closed for user root
Apr 16 04:18:37.263643 sshd[1723]: pam_unix(sshd:session): session closed for user core
Apr 16 04:18:37.319288 systemd[1]: Started sshd@6-10.0.0.6:22-10.0.0.1:50688.service - OpenSSH per-connection server daemon (10.0.0.1:50688).
Apr 16 04:18:37.360182 systemd[1]: sshd@5-10.0.0.6:22-10.0.0.1:39654.service: Deactivated successfully.
Apr 16 04:18:37.528077 systemd[1]: session-6.scope: Deactivated successfully.
Apr 16 04:18:37.533622 systemd-logind[1539]: Session 6 logged out. Waiting for processes to exit.
Apr 16 04:18:37.749575 systemd-logind[1539]: Removed session 6.
Apr 16 04:18:38.082613 sshd[1778]: Accepted publickey for core from 10.0.0.1 port 50688 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:18:38.135434 sshd[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:18:38.845445 systemd-logind[1539]: New session 7 of user core.
Apr 16 04:18:38.898580 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 16 04:18:39.222492 kubelet[1751]: E0416 04:18:39.221223 1751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:18:39.256491 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:18:39.266486 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:18:39.296427 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 16 04:18:39.321333 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 04:18:41.005936 update_engine[1547]: I20260416 04:18:40.997385 1547 update_attempter.cc:509] Updating boot flags...
Apr 16 04:18:41.688204 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1805)
Apr 16 04:18:42.022718 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1804)
Apr 16 04:18:42.967278 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1804)
Apr 16 04:18:48.057254 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 16 04:18:48.667823 (dockerd)[1823]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 16 04:18:50.201059 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 16 04:18:50.466748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:18:55.446791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:18:55.524289 (kubelet)[1845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:18:56.046093 dockerd[1823]: time="2026-04-16T04:18:56.040390337Z" level=info msg="Starting up"
Apr 16 04:18:57.359857 kubelet[1845]: E0416 04:18:57.354747 1845 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:18:57.397085 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:18:57.425644 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:18:59.802647 dockerd[1823]: time="2026-04-16T04:18:59.792485838Z" level=info msg="Loading containers: start."
Apr 16 04:19:07.748184 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 16 04:19:07.857943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:19:08.545366 kernel: Initializing XFRM netlink socket
Apr 16 04:19:11.199544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:19:11.316817 (kubelet)[1952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:19:11.659966 systemd-networkd[1252]: docker0: Link UP
Apr 16 04:19:12.459061 dockerd[1823]: time="2026-04-16T04:19:12.458211681Z" level=info msg="Loading containers: done."
Apr 16 04:19:13.047431 kubelet[1952]: E0416 04:19:13.042071 1952 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:19:13.058512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:19:13.090316 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:19:13.741882 dockerd[1823]: time="2026-04-16T04:19:13.740514023Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 16 04:19:13.758559 dockerd[1823]: time="2026-04-16T04:19:13.757314478Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 16 04:19:13.769178 dockerd[1823]: time="2026-04-16T04:19:13.768055581Z" level=info msg="Daemon has completed initialization"
Apr 16 04:19:14.000808 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck793070903-merged.mount: Deactivated successfully.
Apr 16 04:19:31.635142 dockerd[1823]: time="2026-04-16T04:19:31.618204586Z" level=info msg="API listen on /run/docker.sock"
Apr 16 04:19:31.634916 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 16 04:19:31.711424 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 16 04:19:31.917802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:19:37.880194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:19:37.930765 (kubelet)[2024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:19:38.975112 kubelet[2024]: E0416 04:19:38.974068 2024 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:19:39.000274 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:19:39.000889 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:19:49.585527 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 16 04:19:49.669561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:19:51.077057 containerd[1577]: time="2026-04-16T04:19:51.068101025Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 16 04:19:53.878152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:19:54.071329 (kubelet)[2051]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:19:58.118603 kubelet[2051]: E0416 04:19:58.110103 2051 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:19:58.136808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:19:58.147351 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:20:00.284874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4194866902.mount: Deactivated successfully.
Apr 16 04:20:08.437076 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Apr 16 04:20:08.724545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:20:14.097248 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:20:14.200495 (kubelet)[2085]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:20:16.084144 kubelet[2085]: E0416 04:20:16.083015 2085 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:20:16.118481 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:20:16.134793 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:20:26.102378 update_engine[1547]: I20260416 04:20:26.088181 1547 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 16 04:20:26.102378 update_engine[1547]: I20260416 04:20:26.103899 1547 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 16 04:20:26.193497 update_engine[1547]: I20260416 04:20:26.120503 1547 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 16 04:20:26.193497 update_engine[1547]: I20260416 04:20:26.165422 1547 omaha_request_params.cc:62] Current group set to lts
Apr 16 04:20:26.193497 update_engine[1547]: I20260416 04:20:26.179136 1547 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 16 04:20:26.193497 update_engine[1547]: I20260416 04:20:26.179299 1547 update_attempter.cc:643] Scheduling an action processor start.
Apr 16 04:20:26.193497 update_engine[1547]: I20260416 04:20:26.179323 1547 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 16 04:20:26.229180 update_engine[1547]: I20260416 04:20:26.190013 1547 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 16 04:20:26.229180 update_engine[1547]: I20260416 04:20:26.214155 1547 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 16 04:20:26.229180 update_engine[1547]: I20260416 04:20:26.214202 1547 omaha_request_action.cc:272] Request:
Apr 16 04:20:26.229180 update_engine[1547]:
Apr 16 04:20:26.229180 update_engine[1547]:
Apr 16 04:20:26.229180 update_engine[1547]:
Apr 16 04:20:26.229180 update_engine[1547]:
Apr 16 04:20:26.229180 update_engine[1547]:
Apr 16 04:20:26.229180 update_engine[1547]:
Apr 16 04:20:26.229180 update_engine[1547]:
Apr 16 04:20:26.229180 update_engine[1547]:
Apr 16 04:20:26.229180 update_engine[1547]: I20260416 04:20:26.214223 1547 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 16 04:20:26.226031 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Apr 16 04:20:26.230111 locksmithd[1611]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 16 04:20:26.275801 update_engine[1547]: I20260416 04:20:26.274426 1547 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 16 04:20:26.277326 update_engine[1547]: I20260416 04:20:26.277221 1547 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 04:20:26.277869 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:20:26.344928 update_engine[1547]: E20260416 04:20:26.330219 1547 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 04:20:26.374073 update_engine[1547]: I20260416 04:20:26.366801 1547 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 16 04:20:29.716409 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:20:29.717503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:20:30.912443 kubelet[2140]: E0416 04:20:30.893165 2140 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:20:30.939220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:20:30.939771 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:20:37.031637 update_engine[1547]: I20260416 04:20:36.996583 1547 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 16 04:20:37.103760 update_engine[1547]: I20260416 04:20:37.103470 1547 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 16 04:20:37.118448 update_engine[1547]: I20260416 04:20:37.111940 1547 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 04:20:37.148560 update_engine[1547]: E20260416 04:20:37.144654 1547 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 04:20:37.168408 update_engine[1547]: I20260416 04:20:37.167491 1547 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 16 04:20:41.469164 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Apr 16 04:20:42.137448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:20:46.759411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:20:46.782254 (kubelet)[2174]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:20:47.053961 update_engine[1547]: I20260416 04:20:47.029579 1547 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 16 04:20:47.070737 update_engine[1547]: I20260416 04:20:47.069049 1547 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 16 04:20:47.070737 update_engine[1547]: I20260416 04:20:47.069978 1547 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 04:20:47.197956 update_engine[1547]: E20260416 04:20:47.197151 1547 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 04:20:47.197956 update_engine[1547]: I20260416 04:20:47.197767 1547 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 16 04:20:48.858906 kubelet[2174]: E0416 04:20:48.858347 2174 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:20:48.913458 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:20:48.914527 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:20:52.453777 containerd[1577]: time="2026-04-16T04:20:52.453143635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:20:52.464538 containerd[1577]: time="2026-04-16T04:20:52.464336990Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427"
Apr 16 04:20:52.535354 containerd[1577]: time="2026-04-16T04:20:52.532039378Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:20:53.459596 containerd[1577]: time="2026-04-16T04:20:53.457556636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:20:55.124795 containerd[1577]: time="2026-04-16T04:20:55.122982699Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1m4.049397874s"
Apr 16 04:20:55.208302 containerd[1577]: time="2026-04-16T04:20:55.158106476Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 16 04:20:55.570423 containerd[1577]: time="2026-04-16T04:20:55.567742495Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 16 04:20:57.033211 update_engine[1547]: I20260416 04:20:57.025844 1547 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 16 04:20:57.327761 update_engine[1547]: I20260416 04:20:57.107658 1547 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 16 04:20:57.327761 update_engine[1547]: I20260416 04:20:57.261003 1547 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 04:20:57.327761 update_engine[1547]: E20260416 04:20:57.269598 1547 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 04:20:57.327761 update_engine[1547]: I20260416 04:20:57.275861 1547 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 16 04:20:57.327761 update_engine[1547]: I20260416 04:20:57.276109 1547 omaha_request_action.cc:617] Omaha request response:
Apr 16 04:20:57.327761 update_engine[1547]: E20260416 04:20:57.284058 1547 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 16 04:20:57.327761 update_engine[1547]: I20260416 04:20:57.287024 1547 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 16 04:20:57.327761 update_engine[1547]: I20260416 04:20:57.288714 1547 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 04:20:57.327761 update_engine[1547]: I20260416 04:20:57.288731 1547 update_attempter.cc:306] Processing Done. Apr 16 04:20:57.327761 update_engine[1547]: E20260416 04:20:57.289057 1547 update_attempter.cc:619] Update failed. Apr 16 04:20:57.327761 update_engine[1547]: I20260416 04:20:57.289130 1547 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 16 04:20:57.327761 update_engine[1547]: I20260416 04:20:57.289140 1547 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 16 04:20:57.327761 update_engine[1547]: I20260416 04:20:57.289175 1547 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Apr 16 04:20:57.327761 update_engine[1547]: I20260416 04:20:57.289888 1547 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 16 04:20:57.327761 update_engine[1547]: I20260416 04:20:57.290124 1547 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 16 04:20:57.327761 update_engine[1547]: I20260416 04:20:57.290139 1547 omaha_request_action.cc:272] Request: Apr 16 04:20:57.327761 update_engine[1547]: Apr 16 04:20:57.327761 update_engine[1547]: Apr 16 04:20:58.286091 update_engine[1547]: Apr 16 04:20:58.286091 update_engine[1547]: Apr 16 04:20:58.286091 update_engine[1547]: Apr 16 04:20:58.286091 update_engine[1547]: Apr 16 04:20:58.286091 update_engine[1547]: I20260416 04:20:57.290148 1547 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 04:20:58.286091 update_engine[1547]: I20260416 04:20:57.336467 1547 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 04:20:58.286091 update_engine[1547]: I20260416 04:20:57.480154 1547 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 16 04:20:58.286091 update_engine[1547]: E20260416 04:20:57.541261 1547 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 04:20:58.286091 update_engine[1547]: I20260416 04:20:57.541540 1547 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 16 04:20:58.286091 update_engine[1547]: I20260416 04:20:57.545246 1547 omaha_request_action.cc:617] Omaha request response:
Apr 16 04:20:58.286091 update_engine[1547]: I20260416 04:20:57.572983 1547 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 16 04:20:58.286091 update_engine[1547]: I20260416 04:20:57.573588 1547 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 16 04:20:58.286091 update_engine[1547]: I20260416 04:20:57.573601 1547 update_attempter.cc:306] Processing Done.
Apr 16 04:20:58.286091 update_engine[1547]: I20260416 04:20:57.587496 1547 update_attempter.cc:310] Error event sent.
Apr 16 04:20:58.286091 update_engine[1547]: I20260416 04:20:57.677312 1547 update_check_scheduler.cc:74] Next update check in 44m37s
Apr 16 04:20:58.557732 locksmithd[1611]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 16 04:20:58.557732 locksmithd[1611]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 16 04:20:59.971098 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Apr 16 04:21:02.073254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:21:23.858146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:21:24.084547 (kubelet)[2194]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:21:33.710261 kubelet[2194]: E0416 04:21:33.709641 2194 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:21:33.769413 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:21:33.769936 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:21:45.076255 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Apr 16 04:21:45.998614 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:21:55.179213 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:21:55.949261 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:22:03.881484 kubelet[2220]: E0416 04:22:03.861992 2220 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:22:03.945498 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:22:03.946492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:22:15.299170 containerd[1577]: time="2026-04-16T04:22:15.297837483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:22:15.426983 containerd[1577]: time="2026-04-16T04:22:15.346921419Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379"
Apr 16 04:22:15.731949 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Apr 16 04:22:16.177035 containerd[1577]: time="2026-04-16T04:22:16.154427792Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:22:16.639144 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:22:18.382402 containerd[1577]: time="2026-04-16T04:22:18.359600913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:22:19.344996 containerd[1577]: time="2026-04-16T04:22:19.339301553Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1m23.770332762s"
Apr 16 04:22:19.470601 containerd[1577]: time="2026-04-16T04:22:19.427126338Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\""
Apr 16 04:22:19.855180 containerd[1577]: time="2026-04-16T04:22:19.853841525Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 16 04:22:38.405478 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:22:38.826467 (kubelet)[2243]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:22:49.664440 kubelet[2243]: E0416 04:22:49.661116 2243 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:22:49.727207 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:22:49.770462 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:23:00.904563 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Apr 16 04:23:01.900246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:23:06.508588 containerd[1577]: time="2026-04-16T04:23:06.502572934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:23:06.516243 containerd[1577]: time="2026-04-16T04:23:06.513499112Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688"
Apr 16 04:23:06.536809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:23:06.575931 containerd[1577]: time="2026-04-16T04:23:06.568528154Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:23:06.584124 containerd[1577]: time="2026-04-16T04:23:06.584077383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:23:06.596159 (kubelet)[2273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:23:06.712304 containerd[1577]: time="2026-04-16T04:23:06.710894503Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 46.8556556s"
Apr 16 04:23:06.712304 containerd[1577]: time="2026-04-16T04:23:06.712632477Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\""
Apr 16 04:23:06.875282 containerd[1577]: time="2026-04-16T04:23:06.859879315Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 16 04:23:07.885201 kubelet[2273]: E0416 04:23:07.883615 2273 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:23:07.904527 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:23:07.923831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:23:18.253349 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Apr 16 04:23:19.829590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:23:29.447941 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:23:29.522249 (kubelet)[2300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:23:31.457641 kubelet[2300]: E0416 04:23:31.457105 2300 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:23:31.475406 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:23:31.492402 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:23:42.287327 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14.
Apr 16 04:23:42.860223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:23:43.799332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3440021109.mount: Deactivated successfully.
Apr 16 04:23:46.895079 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:23:47.128502 (kubelet)[2328]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:23:49.761646 kubelet[2328]: E0416 04:23:49.760786 2328 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:23:49.795135 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:23:49.795639 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:23:57.237106 containerd[1577]: time="2026-04-16T04:23:57.236353240Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605"
Apr 16 04:23:57.237106 containerd[1577]: time="2026-04-16T04:23:57.237975195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:23:57.243400 containerd[1577]: time="2026-04-16T04:23:57.243327646Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:23:57.773602 containerd[1577]: time="2026-04-16T04:23:57.772015425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:23:58.217346 containerd[1577]: time="2026-04-16T04:23:58.217026375Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 51.355891205s"
Apr 16 04:23:58.217346 containerd[1577]: time="2026-04-16T04:23:58.217214066Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\""
Apr 16 04:23:58.240333 containerd[1577]: time="2026-04-16T04:23:58.237911721Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 16 04:24:00.223647 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15.
Apr 16 04:24:00.672519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:24:05.402266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4187937716.mount: Deactivated successfully.
Apr 16 04:24:05.548421 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:24:05.657619 (kubelet)[2352]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:24:06.634797 kubelet[2352]: E0416 04:24:06.634288 2352 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:24:06.669120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:24:06.677532 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:24:16.894416 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16.
Apr 16 04:24:17.106567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:24:20.318500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:24:20.321513 (kubelet)[2416]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:24:22.332528 kubelet[2416]: E0416 04:24:22.326362 2416 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:24:22.414572 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:24:22.415110 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:24:33.790998 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17.
Apr 16 04:24:36.423768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:24:45.568607 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:24:47.352251 (kubelet)[2446]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:24:58.344841 containerd[1577]: time="2026-04-16T04:24:58.235114348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:24:58.344841 containerd[1577]: time="2026-04-16T04:24:58.247628710Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714"
Apr 16 04:24:59.660231 containerd[1577]: time="2026-04-16T04:24:59.656542750Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:25:02.102535 kubelet[2446]: E0416 04:25:02.078831 2446 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:25:02.129332 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:25:02.132307 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:25:03.380038 containerd[1577]: time="2026-04-16T04:25:03.366557619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:25:05.696629 containerd[1577]: time="2026-04-16T04:25:05.692385150Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1m7.451862996s"
Apr 16 04:25:05.696629 containerd[1577]: time="2026-04-16T04:25:05.697418942Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 16 04:25:05.860117 containerd[1577]: time="2026-04-16T04:25:05.858651119Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 16 04:25:13.989767 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18.
Apr 16 04:25:15.278152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:25:24.614093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2279767400.mount: Deactivated successfully.
Apr 16 04:25:24.666078 containerd[1577]: time="2026-04-16T04:25:24.648011777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:25:24.678311 containerd[1577]: time="2026-04-16T04:25:24.677431110Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070"
Apr 16 04:25:24.729513 containerd[1577]: time="2026-04-16T04:25:24.728777732Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:25:25.265376 containerd[1577]: time="2026-04-16T04:25:25.258729732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:25:25.537339 containerd[1577]: time="2026-04-16T04:25:25.536629194Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 19.671597327s"
Apr 16 04:25:25.537339 containerd[1577]: time="2026-04-16T04:25:25.536832585Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 16 04:25:25.596320 containerd[1577]: time="2026-04-16T04:25:25.578151995Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 16 04:25:25.963089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:25:26.142763 (kubelet)[2473]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:25:27.544490 kubelet[2473]: E0416 04:25:27.541843 2473 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:25:27.574653 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:25:27.581131 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:25:31.588216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount544311068.mount: Deactivated successfully.
Apr 16 04:25:37.843027 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 19.
Apr 16 04:25:37.956142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:25:43.033125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:25:43.144359 (kubelet)[2507]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:25:51.360580 kubelet[2507]: E0416 04:25:51.354120 2507 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:25:51.410255 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:25:51.422239 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:26:01.663308 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
Apr 16 04:26:01.732016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:26:19.611535 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:26:19.808066 (kubelet)[2529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:26:30.620478 kubelet[2529]: E0416 04:26:30.580976 2529 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:26:30.648374 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:26:30.658051 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:26:41.463830 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 21.
Apr 16 04:26:42.047211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:26:56.712424 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:26:58.158397 (kubelet)[2552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:27:34.167068 kubelet[2552]: E0416 04:27:34.160480 2552 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:27:34.235628 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:27:34.235986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:27:44.790990 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 22.
Apr 16 04:27:45.234653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:27:54.540178 containerd[1577]: time="2026-04-16T04:27:54.470612879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:27:54.551744 containerd[1577]: time="2026-04-16T04:27:54.548698705Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826"
Apr 16 04:27:55.016973 containerd[1577]: time="2026-04-16T04:27:55.015180793Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:27:57.611818 containerd[1577]: time="2026-04-16T04:27:57.600284368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:27:58.399700 containerd[1577]: time="2026-04-16T04:27:58.398956075Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 2m32.808606514s"
Apr 16 04:27:58.399700 containerd[1577]: time="2026-04-16T04:27:58.399441182Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 16 04:28:02.565146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:28:02.659198 (kubelet)[2635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:28:03.519107 kubelet[2635]: E0416 04:28:03.503567 2635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:28:03.543461 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:28:03.543842 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:28:16.304645 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 23.
Apr 16 04:28:16.436267 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:28:19.684778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:28:19.725008 (kubelet)[2673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:28:20.653624 kubelet[2673]: E0416 04:28:20.652520 2673 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:28:20.699497 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:28:20.701938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:28:31.027492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 24.
Apr 16 04:28:31.517487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:28:38.321531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:28:38.924786 (kubelet)[2697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:29:01.003458 kubelet[2697]: E0416 04:29:01.002302 2697 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:29:01.022479 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:29:01.023282 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:29:11.680823 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 25.
Apr 16 04:29:12.074232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:29:33.975550 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:29:34.867012 (kubelet)[2719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:29:38.344135 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:29:38.427382 systemd[1]: kubelet.service: Deactivated successfully.
Apr 16 04:29:38.451203 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:29:40.453642 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:29:42.064450 systemd[1]: Reloading requested from client PID 2736 ('systemctl') (unit session-7.scope)...
Apr 16 04:29:42.064536 systemd[1]: Reloading...
Apr 16 04:29:48.805622 zram_generator::config[2776]: No configuration found.
Apr 16 04:30:36.153531 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 16 04:30:40.545872 systemd[1]: Reloading finished in 58479 ms.
Apr 16 04:30:42.570303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:30:42.666523 (kubelet)[2822]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 16 04:30:42.939839 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:30:42.953477 systemd[1]: kubelet.service: Deactivated successfully.
Apr 16 04:30:42.955885 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:30:43.151947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:31:08.419338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:31:08.500773 (kubelet)[2845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 16 04:31:09.678397 kubelet[2845]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 04:31:09.678397 kubelet[2845]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 04:31:09.678397 kubelet[2845]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 04:31:09.686849 kubelet[2845]: I0416 04:31:09.684850 2845 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 04:31:11.028814 kubelet[2845]: I0416 04:31:11.026638 2845 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 16 04:31:11.050953 kubelet[2845]: I0416 04:31:11.029544 2845 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 16 04:31:11.050953 kubelet[2845]: I0416 04:31:11.049095 2845 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 16 04:31:11.290808 kubelet[2845]: E0416 04:31:11.288108 2845 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 04:31:11.306782 kubelet[2845]: I0416 04:31:11.302851 2845 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 16 04:31:11.502153 kubelet[2845]: E0416 04:31:11.493288 2845 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 16 04:31:11.506645 kubelet[2845]: I0416 04:31:11.502413 2845 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 16 04:31:11.948448 kubelet[2845]: I0416 04:31:11.945483 2845 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 16 04:31:12.070451 kubelet[2845]: I0416 04:31:12.066427 2845 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 04:31:12.104208 kubelet[2845]: I0416 04:31:12.072450 2845 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 16 04:31:12.117590 kubelet[2845]: I0416 04:31:12.110021 2845 topology_manager.go:138] "Creating topology manager with none policy"
Apr 16 04:31:12.117590 kubelet[2845]: I0416 04:31:12.112237 2845 container_manager_linux.go:303] "Creating device plugin manager"
Apr 16 04:31:12.139204 kubelet[2845]: I0416 04:31:12.139035 2845 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 04:31:12.161275 kubelet[2845]: I0416 04:31:12.160435 2845 kubelet.go:480] "Attempting to sync node with API
server" Apr 16 04:31:12.161275 kubelet[2845]: I0416 04:31:12.161425 2845 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 04:31:12.168938 kubelet[2845]: I0416 04:31:12.163503 2845 kubelet.go:386] "Adding apiserver pod source" Apr 16 04:31:12.168938 kubelet[2845]: I0416 04:31:12.163743 2845 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 04:31:12.173137 kubelet[2845]: E0416 04:31:12.172843 2845 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 04:31:12.173137 kubelet[2845]: E0416 04:31:12.172844 2845 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 04:31:12.188165 kubelet[2845]: I0416 04:31:12.187719 2845 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 16 04:31:12.202487 kubelet[2845]: I0416 04:31:12.200130 2845 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 04:31:12.226120 kubelet[2845]: W0416 04:31:12.219973 2845 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 16 04:31:12.462583 kubelet[2845]: I0416 04:31:12.459099 2845 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 16 04:31:12.473373 kubelet[2845]: I0416 04:31:12.462834 2845 server.go:1289] "Started kubelet" Apr 16 04:31:12.474290 kubelet[2845]: I0416 04:31:12.474025 2845 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 04:31:12.549783 kubelet[2845]: I0416 04:31:12.543390 2845 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 04:31:12.553308 kubelet[2845]: I0416 04:31:12.551042 2845 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 04:31:12.565465 kubelet[2845]: E0416 04:31:12.560919 2845 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc0e41884b5f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:31:12.459254623 +0000 UTC m=+3.923092776,LastTimestamp:2026-04-16 04:31:12.459254623 +0000 UTC m=+3.923092776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:31:12.590923 kubelet[2845]: I0416 04:31:12.590401 2845 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 04:31:12.611204 kubelet[2845]: I0416 04:31:12.611126 2845 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 16 04:31:12.611872 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... 
Apr 16 04:31:12.684769 kubelet[2845]: E0416 04:31:12.675293 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:31:12.685424 kubelet[2845]: I0416 04:31:12.680571 2845 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 16 04:31:12.685479 kubelet[2845]: I0416 04:31:12.681188 2845 reconciler.go:26] "Reconciler: start to sync state" Apr 16 04:31:12.685517 kubelet[2845]: I0416 04:31:12.589397 2845 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 04:31:12.720948 kubelet[2845]: E0416 04:31:12.720246 2845 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 04:31:12.720948 kubelet[2845]: E0416 04:31:12.720548 2845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="200ms" Apr 16 04:31:12.721640 kubelet[2845]: I0416 04:31:12.721233 2845 server.go:317] "Adding debug handlers to kubelet server" Apr 16 04:31:12.759460 kubelet[2845]: I0416 04:31:12.759047 2845 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 04:31:12.883322 kubelet[2845]: E0416 04:31:12.880736 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:31:12.962528 kubelet[2845]: E0416 04:31:12.962095 2845 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="400ms" Apr 16 04:31:12.969269 kubelet[2845]: I0416 04:31:12.969124 2845 factory.go:223] Registration of the containerd container factory successfully Apr 16 04:31:12.969269 kubelet[2845]: I0416 04:31:12.969247 2845 factory.go:223] Registration of the systemd container factory successfully Apr 16 04:31:12.969724 systemd-tmpfiles[2859]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 16 04:31:12.970441 systemd-tmpfiles[2859]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 16 04:31:13.091139 kubelet[2845]: E0416 04:31:12.990514 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:31:12.975219 systemd-tmpfiles[2859]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 16 04:31:13.138399 kubelet[2845]: E0416 04:31:13.101374 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:31:13.138399 kubelet[2845]: E0416 04:31:13.102505 2845 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 04:31:12.985566 systemd-tmpfiles[2859]: ACLs are not supported, ignoring. Apr 16 04:31:12.985626 systemd-tmpfiles[2859]: ACLs are not supported, ignoring. Apr 16 04:31:13.070584 systemd-tmpfiles[2859]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 04:31:13.070592 systemd-tmpfiles[2859]: Skipping /boot Apr 16 04:31:13.156489 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. 
Apr 16 04:31:13.158931 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Apr 16 04:31:13.202641 kubelet[2845]: E0416 04:31:13.202303 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:31:13.303331 kubelet[2845]: E0416 04:31:13.293333 2845 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 04:31:13.311715 kubelet[2845]: E0416 04:31:13.311391 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:31:13.353560 kubelet[2845]: E0416 04:31:13.353121 2845 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 04:31:13.375210 kubelet[2845]: E0416 04:31:13.374981 2845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="800ms" Apr 16 04:31:13.405360 kubelet[2845]: I0416 04:31:13.405244 2845 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 16 04:31:13.445359 kubelet[2845]: E0416 04:31:13.442126 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:31:13.452431 kubelet[2845]: I0416 04:31:13.452199 2845 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 16 04:31:13.452431 kubelet[2845]: I0416 04:31:13.452540 2845 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 16 04:31:13.461148 kubelet[2845]: I0416 04:31:13.457794 2845 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 16 04:31:13.461148 kubelet[2845]: I0416 04:31:13.458076 2845 kubelet.go:2436] "Starting kubelet main sync loop" Apr 16 04:31:13.461148 kubelet[2845]: E0416 04:31:13.458450 2845 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 04:31:13.475771 kubelet[2845]: E0416 04:31:13.473065 2845 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:31:13.492357 kubelet[2845]: I0416 04:31:13.490587 2845 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 04:31:13.492357 kubelet[2845]: I0416 04:31:13.490630 2845 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 04:31:13.497817 kubelet[2845]: I0416 04:31:13.497326 2845 state_mem.go:36] "Initialized new in-memory state store" Apr 16 04:31:13.557025 kubelet[2845]: E0416 04:31:13.554503 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:31:13.559465 kubelet[2845]: E0416 04:31:13.557651 2845 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 04:31:13.566947 kubelet[2845]: E0416 04:31:13.564030 2845 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 04:31:13.567459 kubelet[2845]: I0416 04:31:13.567420 2845 policy_none.go:49] "None policy: Start" Apr 16 04:31:13.567694 kubelet[2845]: I0416 04:31:13.567644 2845 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 16 04:31:13.567853 kubelet[2845]: I0416 04:31:13.567821 2845 state_mem.go:35] "Initializing new in-memory state store" Apr 16 04:31:13.668369 kubelet[2845]: E0416 04:31:13.663303 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:31:13.702865 kubelet[2845]: E0416 04:31:13.700566 2845 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 04:31:13.707207 kubelet[2845]: I0416 04:31:13.703598 2845 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 04:31:13.707207 kubelet[2845]: I0416 04:31:13.703739 2845 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 04:31:13.740587 kubelet[2845]: I0416 04:31:13.740035 2845 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 04:31:13.754619 kubelet[2845]: E0416 04:31:13.753055 2845 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 16 04:31:13.757307 kubelet[2845]: E0416 04:31:13.757182 2845 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:31:13.917099 kubelet[2845]: I0416 04:31:13.908094 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:31:13.917823 kubelet[2845]: I0416 04:31:13.917716 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:31:13.917823 kubelet[2845]: I0416 04:31:13.917785 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:31:13.917823 kubelet[2845]: I0416 04:31:13.917808 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:31:13.924510 kubelet[2845]: I0416 04:31:13.917829 2845 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:31:13.924510 kubelet[2845]: E0416 04:31:13.917449 2845 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 04:31:13.939602 kubelet[2845]: I0416 04:31:13.938952 2845 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:31:13.962957 kubelet[2845]: E0416 04:31:13.962531 2845 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 16 04:31:13.969601 kubelet[2845]: E0416 04:31:13.969147 2845 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:31:14.111282 kubelet[2845]: I0416 04:31:14.110462 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac78b58928b07e176febead72371547f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ac78b58928b07e176febead72371547f\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:31:14.111282 kubelet[2845]: I0416 04:31:14.110770 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac78b58928b07e176febead72371547f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"ac78b58928b07e176febead72371547f\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:31:14.111282 kubelet[2845]: I0416 04:31:14.110813 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac78b58928b07e176febead72371547f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ac78b58928b07e176febead72371547f\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:31:14.123140 kubelet[2845]: E0416 04:31:14.117569 2845 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:31:14.145769 kubelet[2845]: E0416 04:31:14.145158 2845 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:31:14.178446 kubelet[2845]: E0416 04:31:14.176433 2845 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc0e41884b5f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:31:12.459254623 +0000 UTC m=+3.923092776,LastTimestamp:2026-04-16 04:31:12.459254623 +0000 UTC m=+3.923092776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:31:14.182817 kubelet[2845]: E0416 04:31:14.181086 2845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="1.6s" Apr 16 04:31:14.206451 kubelet[2845]: I0416 04:31:14.203663 2845 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:31:14.222804 kubelet[2845]: E0416 04:31:14.222198 2845 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 16 04:31:14.225628 kubelet[2845]: I0416 04:31:14.225285 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 16 04:31:14.280844 kubelet[2845]: E0416 04:31:14.280421 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:31:14.290017 containerd[1577]: time="2026-04-16T04:31:14.286888965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}" Apr 16 04:31:14.439177 kubelet[2845]: E0416 04:31:14.437647 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:31:14.441412 containerd[1577]: time="2026-04-16T04:31:14.441315818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}" Apr 16 04:31:14.456545 kubelet[2845]: E0416 04:31:14.455132 2845 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:31:14.474072 containerd[1577]: time="2026-04-16T04:31:14.468476142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ac78b58928b07e176febead72371547f,Namespace:kube-system,Attempt:0,}" Apr 16 04:31:14.702235 kubelet[2845]: I0416 04:31:14.690126 2845 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:31:14.713458 kubelet[2845]: E0416 04:31:14.713266 2845 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 16 04:31:14.809340 kubelet[2845]: E0416 04:31:14.808761 2845 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:31:15.391445 kubelet[2845]: E0416 04:31:15.388244 2845 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 04:31:15.504818 kubelet[2845]: E0416 04:31:15.504381 2845 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 04:31:15.561349 kubelet[2845]: I0416 
04:31:15.561041 2845 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:31:15.565280 kubelet[2845]: E0416 04:31:15.562110 2845 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 16 04:31:15.862084 kubelet[2845]: E0416 04:31:15.861496 2845 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 04:31:15.862084 kubelet[2845]: E0416 04:31:15.861364 2845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="3.2s" Apr 16 04:31:17.212157 kubelet[2845]: E0416 04:31:17.211368 2845 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:31:17.254734 kubelet[2845]: I0416 04:31:17.224473 2845 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:31:17.254734 kubelet[2845]: E0416 04:31:17.254061 2845 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 16 04:31:17.792311 kubelet[2845]: E0416 04:31:17.787726 2845 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" 
err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 04:31:17.880969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2312874550.mount: Deactivated successfully.
Apr 16 04:31:18.065230 containerd[1577]: time="2026-04-16T04:31:18.061224543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 16 04:31:18.075021 containerd[1577]: time="2026-04-16T04:31:18.065734414Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988"
Apr 16 04:31:18.121866 containerd[1577]: time="2026-04-16T04:31:18.120903707Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 16 04:31:18.121866 containerd[1577]: time="2026-04-16T04:31:18.123238217Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 16 04:31:18.213143 containerd[1577]: time="2026-04-16T04:31:18.124372091Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 16 04:31:18.213143 containerd[1577]: time="2026-04-16T04:31:18.211624463Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 16 04:31:18.226458 containerd[1577]: time="2026-04-16T04:31:18.225762218Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 16 04:31:18.303490 containerd[1577]: time="2026-04-16T04:31:18.301419282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 16 04:31:18.736380 containerd[1577]: time="2026-04-16T04:31:18.735870877Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 4.293854321s"
Apr 16 04:31:18.767074 containerd[1577]: time="2026-04-16T04:31:18.766527943Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 4.463852797s"
Apr 16 04:31:18.767074 containerd[1577]: time="2026-04-16T04:31:18.767222878Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 4.292333622s"
Apr 16 04:31:19.164067 kubelet[2845]: E0416 04:31:19.149969 2845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="6.4s"
Apr 16 04:31:19.923447 containerd[1577]: time="2026-04-16T04:31:19.911926719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 16 04:31:19.923447 containerd[1577]: time="2026-04-16T04:31:19.914613881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 16 04:31:19.923447 containerd[1577]: time="2026-04-16T04:31:19.914626420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 04:31:19.923447 containerd[1577]: time="2026-04-16T04:31:19.915780115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 04:31:19.928812 containerd[1577]: time="2026-04-16T04:31:19.892247278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 16 04:31:19.928812 containerd[1577]: time="2026-04-16T04:31:19.926766979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 16 04:31:19.928812 containerd[1577]: time="2026-04-16T04:31:19.926857176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 04:31:19.939251 containerd[1577]: time="2026-04-16T04:31:19.927436100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 04:31:19.995525 containerd[1577]: time="2026-04-16T04:31:19.994576974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 16 04:31:19.995525 containerd[1577]: time="2026-04-16T04:31:19.995212888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 16 04:31:19.995525 containerd[1577]: time="2026-04-16T04:31:19.995248957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 04:31:20.065710 containerd[1577]: time="2026-04-16T04:31:19.997327045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 04:31:20.091620 kubelet[2845]: E0416 04:31:20.089090 2845 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 04:31:20.697871 kubelet[2845]: I0416 04:31:20.690098 2845 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 16 04:31:20.715832 kubelet[2845]: E0416 04:31:20.712149 2845 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost"
Apr 16 04:31:20.962075 kubelet[2845]: E0416 04:31:20.958003 2845 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 04:31:20.962075 kubelet[2845]: E0416 04:31:20.958057 2845 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 04:31:21.056873 containerd[1577]: time="2026-04-16T04:31:21.054255997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"eaccfc0436d4e09e7cd41610d452955ebb4e21d3149f9ffd98fe8e0f6dfc1b7c\""
Apr 16 04:31:21.109586 kubelet[2845]: E0416 04:31:21.108726 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:31:21.202272 containerd[1577]: time="2026-04-16T04:31:21.185631314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ac78b58928b07e176febead72371547f,Namespace:kube-system,Attempt:0,} returns sandbox id \"33726de5e01ea650ff41e00e192257706db8875ccfab43fe44143c10a34727d5\""
Apr 16 04:31:21.293487 containerd[1577]: time="2026-04-16T04:31:21.292094600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"0898bbe01155cc69617da927e91e411a92d92864c0374e45b451c25228fff3dc\""
Apr 16 04:31:21.339567 kubelet[2845]: E0416 04:31:21.339139 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:31:21.410822 kubelet[2845]: E0416 04:31:21.410493 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:31:21.521940 containerd[1577]: time="2026-04-16T04:31:21.521094487Z" level=info msg="CreateContainer within sandbox \"eaccfc0436d4e09e7cd41610d452955ebb4e21d3149f9ffd98fe8e0f6dfc1b7c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 16 04:31:21.905504 containerd[1577]: time="2026-04-16T04:31:21.897604716Z" level=info msg="CreateContainer within sandbox \"33726de5e01ea650ff41e00e192257706db8875ccfab43fe44143c10a34727d5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 16 04:31:22.006372 containerd[1577]: time="2026-04-16T04:31:22.004246494Z" level=info msg="CreateContainer within sandbox \"0898bbe01155cc69617da927e91e411a92d92864c0374e45b451c25228fff3dc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 16 04:31:22.113155 kubelet[2845]: E0416 04:31:22.112613 2845 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 16 04:31:22.360268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2207323291.mount: Deactivated successfully.
Apr 16 04:31:22.601597 containerd[1577]: time="2026-04-16T04:31:22.600998762Z" level=info msg="CreateContainer within sandbox \"eaccfc0436d4e09e7cd41610d452955ebb4e21d3149f9ffd98fe8e0f6dfc1b7c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\""
Apr 16 04:31:22.670424 containerd[1577]: time="2026-04-16T04:31:22.667551910Z" level=info msg="StartContainer for \"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\""
Apr 16 04:31:22.966894 containerd[1577]: time="2026-04-16T04:31:22.966818526Z" level=info msg="CreateContainer within sandbox \"33726de5e01ea650ff41e00e192257706db8875ccfab43fe44143c10a34727d5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"30571221f9359e0ac4a3ee9782bb25330e176ef4e23f45f4d66d4ea27ceec2f4\""
Apr 16 04:31:22.977266 containerd[1577]: time="2026-04-16T04:31:22.976393427Z" level=info msg="CreateContainer within sandbox \"0898bbe01155cc69617da927e91e411a92d92864c0374e45b451c25228fff3dc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f0cd1de76fde0144559eec367362ba082ecaac769ce8d59bd589dfd2dece8f01\""
Apr 16 04:31:22.980877 containerd[1577]: time="2026-04-16T04:31:22.978384519Z" level=info msg="StartContainer for \"30571221f9359e0ac4a3ee9782bb25330e176ef4e23f45f4d66d4ea27ceec2f4\""
Apr 16 04:31:23.012266 containerd[1577]: time="2026-04-16T04:31:23.009314467Z" level=info msg="StartContainer for \"f0cd1de76fde0144559eec367362ba082ecaac769ce8d59bd589dfd2dece8f01\""
Apr 16 04:31:23.776066 kubelet[2845]: E0416 04:31:23.771066 2845 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 04:31:23.776087 systemd[1]: run-containerd-runc-k8s.io-7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a-runc.LqoYfQ.mount: Deactivated successfully.
Apr 16 04:31:24.237971 kubelet[2845]: E0416 04:31:24.236862 2845 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc0e41884b5f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:31:12.459254623 +0000 UTC m=+3.923092776,LastTimestamp:2026-04-16 04:31:12.459254623 +0000 UTC m=+3.923092776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 04:31:24.899393 containerd[1577]: time="2026-04-16T04:31:24.816955278Z" level=info msg="StartContainer for \"30571221f9359e0ac4a3ee9782bb25330e176ef4e23f45f4d66d4ea27ceec2f4\" returns successfully"
Apr 16 04:31:25.289816 containerd[1577]: time="2026-04-16T04:31:25.289138309Z" level=info msg="StartContainer for \"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" returns successfully"
Apr 16 04:31:25.691712 kubelet[2845]: E0416 04:31:25.614255 2845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="7s"
Apr 16 04:31:26.007726 containerd[1577]: time="2026-04-16T04:31:26.003931106Z" level=info msg="StartContainer for \"f0cd1de76fde0144559eec367362ba082ecaac769ce8d59bd589dfd2dece8f01\" returns successfully"
Apr 16 04:31:27.892485 kubelet[2845]: I0416 04:31:27.885961 2845 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 16 04:31:28.626466 kubelet[2845]: E0416 04:31:28.618858 2845 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 16 04:31:28.626466 kubelet[2845]: E0416 04:31:28.623331 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:31:30.615420 kubelet[2845]: E0416 04:31:30.614812 2845 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 16 04:31:30.615420 kubelet[2845]: E0416 04:31:30.615859 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:31:31.993247 kubelet[2845]: E0416 04:31:31.990283 2845 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 16 04:31:32.051987 kubelet[2845]: E0416 04:31:32.003036 2845 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 16 04:31:32.051987 kubelet[2845]: E0416 04:31:32.023797 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:31:32.160245 kubelet[2845]: E0416 04:31:32.109521 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:31:32.173754 kubelet[2845]: E0416 04:31:32.169447 2845 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 16 04:31:32.175084 kubelet[2845]: E0416 04:31:32.174991 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:31:33.843886 kubelet[2845]: E0416 04:31:33.798178 2845 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 04:31:34.009917 kubelet[2845]: E0416 04:31:33.956967 2845 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 16 04:31:34.009917 kubelet[2845]: E0416 04:31:33.957005 2845 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 16 04:31:34.009917 kubelet[2845]: E0416 04:31:33.957345 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:31:34.009917 kubelet[2845]: E0416 04:31:33.957365 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:31:35.949557 kubelet[2845]: E0416 04:31:35.949214 2845 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 16 04:31:36.035459 kubelet[2845]: E0416 04:31:35.949187 2845 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 16 04:31:36.087257 kubelet[2845]: E0416 04:31:36.086723 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:31:36.160254 kubelet[2845]: E0416 04:31:36.146364 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:31:36.751070 kubelet[2845]: E0416 04:31:36.750388 2845 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 04:31:38.211932 kubelet[2845]: E0416 04:31:38.211353 2845 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 04:31:38.412231 kubelet[2845]: E0416 04:31:38.411648 2845 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 16 04:31:39.192474 kubelet[2845]: E0416 04:31:39.191883 2845 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 16 04:31:39.192474 kubelet[2845]: E0416 04:31:39.192186 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:31:39.905847 kubelet[2845]: E0416 04:31:39.903177 2845 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 16 04:31:40.015717 kubelet[2845]: E0416 04:31:40.015610 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:31:40.086400 kubelet[2845]: E0416 04:31:40.075534 2845 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 16 04:31:40.217574 kubelet[2845]: E0416 04:31:40.217041 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:31:40.381343 kubelet[2845]: E0416 04:31:40.373078 2845 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 04:31:41.139835 kubelet[2845]: E0416 04:31:41.139353 2845 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 16 04:31:42.800390 kubelet[2845]: E0416 04:31:42.799832 2845 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 16 04:31:44.614034 kubelet[2845]: E0416 04:31:43.869662 2845 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 04:31:45.009002 kubelet[2845]: E0416 04:31:45.005822 2845 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a6bc0e41884b5f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:31:12.459254623 +0000 UTC m=+3.923092776,LastTimestamp:2026-04-16 04:31:12.459254623 +0000 UTC m=+3.923092776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 04:31:46.866928 kubelet[2845]: E0416 04:31:46.866496 2845 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 16 04:31:47.201731 kubelet[2845]: E0416 04:31:47.192777 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:31:47.275531 kubelet[2845]: I0416 04:31:47.275064 2845 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 16 04:31:47.614340 kubelet[2845]: I0416 04:31:47.613934 2845 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 16 04:31:47.625398 kubelet[2845]: E0416 04:31:47.625296 2845 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 16 04:31:48.090539 kubelet[2845]: E0416 04:31:48.081458 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:48.214148 kubelet[2845]: E0416 04:31:48.208830 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:48.352854 kubelet[2845]: E0416 04:31:48.346746 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:48.458963 kubelet[2845]: E0416 04:31:48.457140 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:48.570551 kubelet[2845]: E0416 04:31:48.569877 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:48.717255 kubelet[2845]: E0416 04:31:48.717100 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:48.840053 kubelet[2845]: E0416 04:31:48.818388 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:48.975088 kubelet[2845]: E0416 04:31:48.966907 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:49.093265 kubelet[2845]: E0416 04:31:49.092061 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:49.194149 kubelet[2845]: E0416 04:31:49.193171 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:49.307879 kubelet[2845]: E0416 04:31:49.298727 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:49.416561 kubelet[2845]: E0416 04:31:49.403996 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:49.551955 kubelet[2845]: E0416 04:31:49.550373 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:49.690821 kubelet[2845]: E0416 04:31:49.677259 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:49.800466 kubelet[2845]: E0416 04:31:49.798820 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:49.949720 kubelet[2845]: E0416 04:31:49.947909 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:50.111834 kubelet[2845]: E0416 04:31:50.108945 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:50.375313 kubelet[2845]: E0416 04:31:50.254121 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:50.565025 kubelet[2845]: E0416 04:31:50.517096 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:50.651753 kubelet[2845]: E0416 04:31:50.647138 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:50.862467 kubelet[2845]: E0416 04:31:50.861772 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:50.996292 kubelet[2845]: E0416 04:31:50.987776 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:51.191328 kubelet[2845]: E0416 04:31:51.159580 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:52.609138 kubelet[2845]: E0416 04:31:51.569163 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:52.609138 kubelet[2845]: E0416 04:31:52.316679 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:53.135386 kubelet[2845]: E0416 04:31:52.960726 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:54.031465 kubelet[2845]: E0416 04:31:53.866205 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:54.788523 kubelet[2845]: E0416 04:31:54.780314 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:55.955143 kubelet[2845]: E0416 04:31:55.142954 2845 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 04:31:59.617139 kubelet[2845]: E0416 04:31:59.615905 2845 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:32:00.469211 kubelet[2845]: E0416 04:32:00.467109 2845 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 16 04:32:05.295057 kubelet[2845]: I0416 04:32:05.291980 2845 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 16 04:32:05.666412 kubelet[2845]: I0416 04:32:05.445876 2845 apiserver.go:52] "Watching apiserver"
Apr 16 04:32:07.086231 kubelet[2845]: I0416 04:32:06.983339 2845 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 16 04:32:07.287014 kubelet[2845]: I0416 04:32:07.246020 2845 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 16 04:32:07.384254 kubelet[2845]: E0416 04:32:07.353642 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:32:07.467786 kubelet[2845]: I0416 04:32:07.467457 2845 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 16 04:32:07.993445 kubelet[2845]: E0416 04:32:07.992821 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:32:07.993445 kubelet[2845]: E0416 04:32:07.992893 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:32:11.450157 kubelet[2845]: E0416 04:32:11.449221 2845 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.34s"
Apr 16 04:32:15.375532 kubelet[2845]: E0416 04:32:15.359791 2845 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.871s"
Apr 16 04:32:16.725912 kubelet[2845]: I0416 04:32:16.724818 2845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=10.599041036 podStartE2EDuration="10.599041036s" podCreationTimestamp="2026-04-16 04:32:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:32:16.344524726 +0000 UTC m=+67.808362891" watchObservedRunningTime="2026-04-16 04:32:16.599041036 +0000 UTC m=+68.062879189"
Apr 16 04:32:17.367194 kubelet[2845]: E0416 04:32:17.358207 2845 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.86s"
Apr 16 04:32:18.785370 kubelet[2845]: I0416 04:32:18.784438 2845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=11.784422057 podStartE2EDuration="11.784422057s" podCreationTimestamp="2026-04-16 04:32:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:32:16.955925147 +0000 UTC m=+68.419763308" watchObservedRunningTime="2026-04-16 04:32:18.784422057 +0000 UTC m=+70.248260221"
Apr 16 04:32:18.928353 kubelet[2845]: E0416 04:32:18.923461 2845 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.422s"
Apr 16 04:32:20.406325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0cd1de76fde0144559eec367362ba082ecaac769ce8d59bd589dfd2dece8f01-rootfs.mount: Deactivated successfully.
Apr 16 04:32:20.497778 containerd[1577]: time="2026-04-16T04:32:20.465898303Z" level=info msg="shim disconnected" id=f0cd1de76fde0144559eec367362ba082ecaac769ce8d59bd589dfd2dece8f01 namespace=k8s.io
Apr 16 04:32:20.497778 containerd[1577]: time="2026-04-16T04:32:20.466715273Z" level=warning msg="cleaning up after shim disconnected" id=f0cd1de76fde0144559eec367362ba082ecaac769ce8d59bd589dfd2dece8f01 namespace=k8s.io
Apr 16 04:32:20.497778 containerd[1577]: time="2026-04-16T04:32:20.466734218Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 16 04:32:21.468355 kubelet[2845]: I0416 04:32:21.451010 2845 scope.go:117] "RemoveContainer" containerID="f0cd1de76fde0144559eec367362ba082ecaac769ce8d59bd589dfd2dece8f01"
Apr 16 04:32:21.468355 kubelet[2845]: E0416 04:32:21.463751 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:32:21.801656 kubelet[2845]: I0416 04:32:21.768040 2845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=14.767983181 podStartE2EDuration="14.767983181s" podCreationTimestamp="2026-04-16 04:32:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:32:18.797078936 +0000 UTC m=+70.260917102" watchObservedRunningTime="2026-04-16 04:32:21.767983181 +0000 UTC m=+73.231821344"
Apr 16 04:32:21.927210 containerd[1577]: time="2026-04-16T04:32:21.926315262Z" level=info msg="CreateContainer within sandbox \"0898bbe01155cc69617da927e91e411a92d92864c0374e45b451c25228fff3dc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 16 04:32:22.846866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3078505232.mount: Deactivated successfully.
Apr 16 04:32:22.871789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2720265110.mount: Deactivated successfully.
Apr 16 04:32:23.127464 containerd[1577]: time="2026-04-16T04:32:23.120864193Z" level=info msg="CreateContainer within sandbox \"0898bbe01155cc69617da927e91e411a92d92864c0374e45b451c25228fff3dc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d2a12358fd1130deff04c55cdfdfdbb456ce8c7ebf309b3c51a01ebee50fda0e\""
Apr 16 04:32:23.293448 kubelet[2845]: E0416 04:32:23.174106 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:32:23.341023 containerd[1577]: time="2026-04-16T04:32:23.302460923Z" level=info msg="StartContainer for \"d2a12358fd1130deff04c55cdfdfdbb456ce8c7ebf309b3c51a01ebee50fda0e\""
Apr 16 04:32:25.153282 systemd[1]: run-containerd-runc-k8s.io-d2a12358fd1130deff04c55cdfdfdbb456ce8c7ebf309b3c51a01ebee50fda0e-runc.xxNri3.mount: Deactivated successfully.
Apr 16 04:32:26.856978 systemd[1]: Reloading requested from client PID 3209 ('systemctl') (unit session-7.scope)...
Apr 16 04:32:26.857000 systemd[1]: Reloading...
Apr 16 04:32:32.372896 containerd[1577]: time="2026-04-16T04:32:32.372638361Z" level=info msg="StartContainer for \"d2a12358fd1130deff04c55cdfdfdbb456ce8c7ebf309b3c51a01ebee50fda0e\" returns successfully"
Apr 16 04:32:33.453930 kubelet[2845]: E0416 04:32:33.453854 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:32:38.501181 kubelet[2845]: E0416 04:32:38.493992 2845 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.01s"
Apr 16 04:32:40.082392 zram_generator::config[3255]: No configuration found.
Apr 16 04:32:40.517341 kubelet[2845]: E0416 04:32:40.515873 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:32:41.298387 kubelet[2845]: E0416 04:32:41.239602 2845 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.34s"
Apr 16 04:32:41.898421 kubelet[2845]: E0416 04:32:41.877929 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:32:42.956895 kubelet[2845]: E0416 04:32:42.956299 2845 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.497s"
Apr 16 04:32:43.103578 kubelet[2845]: E0416 04:32:43.103451 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:32:45.000977 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 16 04:32:49.410231 kubelet[2845]: E0416 04:32:49.406630 2845 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.596s"
Apr 16 04:32:49.622012 kubelet[2845]: E0416 04:32:49.621810 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:32:50.581020 systemd[1]: Reloading finished in 23719 ms.
Apr 16 04:32:50.778497 kubelet[2845]: E0416 04:32:50.778400 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:32:51.393313 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:32:51.462425 systemd[1]: kubelet.service: Deactivated successfully.
Apr 16 04:32:51.464948 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:32:51.615313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:33:05.577642 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:33:05.967622 (kubelet)[3319]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 16 04:33:08.340507 kubelet[3319]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 04:33:08.340507 kubelet[3319]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 04:33:08.340507 kubelet[3319]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 04:33:08.340507 kubelet[3319]: I0416 04:33:08.340285 3319 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 04:33:08.716792 kubelet[3319]: I0416 04:33:08.680001 3319 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 16 04:33:08.716792 kubelet[3319]: I0416 04:33:08.703078 3319 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 04:33:08.730520 kubelet[3319]: I0416 04:33:08.730153 3319 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 04:33:08.841077 kubelet[3319]: I0416 04:33:08.840395 3319 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 16 04:33:08.948292 kubelet[3319]: I0416 04:33:08.946356 3319 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 04:33:09.617390 kubelet[3319]: E0416 04:33:09.598298 3319 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 16 04:33:09.669945 kubelet[3319]: I0416 04:33:09.618221 3319 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 16 04:33:10.004267 kubelet[3319]: I0416 04:33:10.003068 3319 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 16 04:33:10.071635 kubelet[3319]: I0416 04:33:10.070871 3319 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 04:33:10.071635 kubelet[3319]: I0416 04:33:10.071297 3319 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 16 04:33:10.071635 kubelet[3319]: I0416 04:33:10.072164 3319 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 04:33:10.071635 
kubelet[3319]: I0416 04:33:10.072227 3319 container_manager_linux.go:303] "Creating device plugin manager" Apr 16 04:33:10.176281 kubelet[3319]: I0416 04:33:10.072334 3319 state_mem.go:36] "Initialized new in-memory state store" Apr 16 04:33:10.176281 kubelet[3319]: I0416 04:33:10.080429 3319 kubelet.go:480] "Attempting to sync node with API server" Apr 16 04:33:10.176281 kubelet[3319]: I0416 04:33:10.083830 3319 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 04:33:10.176281 kubelet[3319]: I0416 04:33:10.157147 3319 kubelet.go:386] "Adding apiserver pod source" Apr 16 04:33:10.177950 kubelet[3319]: I0416 04:33:10.177860 3319 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 04:33:10.355194 kubelet[3319]: I0416 04:33:10.349659 3319 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 16 04:33:10.380023 kubelet[3319]: I0416 04:33:10.379415 3319 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 04:33:10.756522 kubelet[3319]: I0416 04:33:10.705432 3319 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 16 04:33:10.756522 kubelet[3319]: I0416 04:33:10.705484 3319 server.go:1289] "Started kubelet" Apr 16 04:33:10.790136 kubelet[3319]: I0416 04:33:10.789739 3319 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 04:33:10.801175 kubelet[3319]: I0416 04:33:10.798781 3319 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 04:33:10.842165 kubelet[3319]: I0416 04:33:10.705928 3319 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 04:33:10.984462 kubelet[3319]: I0416 04:33:10.984181 3319 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 04:33:10.999500 
kubelet[3319]: I0416 04:33:10.999170 3319 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 04:33:11.088352 kubelet[3319]: I0416 04:33:11.079776 3319 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 16 04:33:11.105558 kubelet[3319]: I0416 04:33:11.072283 3319 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 16 04:33:11.113237 kubelet[3319]: I0416 04:33:11.111788 3319 reconciler.go:26] "Reconciler: start to sync state" Apr 16 04:33:11.478430 kubelet[3319]: I0416 04:33:11.475570 3319 apiserver.go:52] "Watching apiserver" Apr 16 04:33:11.478430 kubelet[3319]: I0416 04:33:11.476344 3319 server.go:317] "Adding debug handlers to kubelet server" Apr 16 04:33:11.617863 kubelet[3319]: I0416 04:33:11.617557 3319 factory.go:223] Registration of the systemd container factory successfully Apr 16 04:33:11.664207 kubelet[3319]: I0416 04:33:11.663417 3319 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 04:33:11.852407 kubelet[3319]: E0416 04:33:11.847645 3319 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 04:33:12.063499 kubelet[3319]: W0416 04:33:12.028831 3319 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "/run/containerd/containerd.sock", ServerName: "localhost", Attributes: {"<%!p(networktype.keyType=grpc.internal.transport.networktype)>": "unix" }, }. 
Err: connection error: desc = "transport: Error while dialing: dial unix:///run/containerd/containerd.sock: timeout" Apr 16 04:33:13.409068 kubelet[3319]: I0416 04:33:13.362600 3319 factory.go:223] Registration of the containerd container factory successfully Apr 16 04:33:15.794171 kubelet[3319]: I0416 04:33:15.790426 3319 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 16 04:33:16.088403 kubelet[3319]: I0416 04:33:16.072886 3319 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 16 04:33:16.230578 kubelet[3319]: I0416 04:33:16.107189 3319 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 16 04:33:16.248143 kubelet[3319]: I0416 04:33:16.246828 3319 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 16 04:33:16.248143 kubelet[3319]: I0416 04:33:16.247151 3319 kubelet.go:2436] "Starting kubelet main sync loop" Apr 16 04:33:16.248143 kubelet[3319]: E0416 04:33:16.247519 3319 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 04:33:16.395966 kubelet[3319]: E0416 04:33:16.367768 3319 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 04:33:17.164605 kubelet[3319]: E0416 04:33:17.149828 3319 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 04:33:17.589623 kubelet[3319]: E0416 04:33:17.585489 3319 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 04:33:18.484886 kubelet[3319]: E0416 04:33:18.479824 3319 kubelet.go:2460] "Skipping pod 
synchronization" err="container runtime status check may not have completed yet" Apr 16 04:33:20.264248 kubelet[3319]: E0416 04:33:20.261972 3319 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 04:33:23.612259 kubelet[3319]: E0416 04:33:23.592909 3319 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 04:33:28.648354 kubelet[3319]: E0416 04:33:28.647308 3319 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 04:33:33.881072 kubelet[3319]: E0416 04:33:33.880190 3319 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 04:33:39.350334 kubelet[3319]: E0416 04:33:39.307095 3319 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 04:33:44.932475 kubelet[3319]: E0416 04:33:44.931007 3319 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 04:33:51.638628 kubelet[3319]: E0416 04:33:50.981265 3319 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 04:33:57.488419 kubelet[3319]: E0416 04:33:57.408256 3319 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 04:33:59.836324 kubelet[3319]: I0416 04:33:59.835901 3319 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 04:33:59.851034 kubelet[3319]: I0416 04:33:59.839352 3319 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 04:33:59.851034 kubelet[3319]: I0416 04:33:59.839604 3319 state_mem.go:36] "Initialized new in-memory state store" Apr 16 04:33:59.851034 kubelet[3319]: I0416 04:33:59.840267 3319 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 16 
04:33:59.851034 kubelet[3319]: I0416 04:33:59.840277 3319 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 16 04:33:59.851034 kubelet[3319]: I0416 04:33:59.840330 3319 policy_none.go:49] "None policy: Start" Apr 16 04:33:59.851034 kubelet[3319]: I0416 04:33:59.840435 3319 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 16 04:33:59.851034 kubelet[3319]: I0416 04:33:59.840494 3319 state_mem.go:35] "Initializing new in-memory state store" Apr 16 04:33:59.851034 kubelet[3319]: I0416 04:33:59.847788 3319 state_mem.go:75] "Updated machine memory state" Apr 16 04:34:00.962168 kubelet[3319]: E0416 04:34:00.578189 3319 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 04:34:01.912784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2a12358fd1130deff04c55cdfdfdbb456ce8c7ebf309b3c51a01ebee50fda0e-rootfs.mount: Deactivated successfully. Apr 16 04:34:02.361894 kubelet[3319]: I0416 04:34:01.955503 3319 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 04:34:02.753597 containerd[1577]: time="2026-04-16T04:34:02.468492811Z" level=info msg="shim disconnected" id=d2a12358fd1130deff04c55cdfdfdbb456ce8c7ebf309b3c51a01ebee50fda0e namespace=k8s.io Apr 16 04:34:03.094475 containerd[1577]: time="2026-04-16T04:34:02.790629066Z" level=warning msg="cleaning up after shim disconnected" id=d2a12358fd1130deff04c55cdfdfdbb456ce8c7ebf309b3c51a01ebee50fda0e namespace=k8s.io Apr 16 04:34:03.094475 containerd[1577]: time="2026-04-16T04:34:02.963079258Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 04:34:03.480505 kubelet[3319]: E0416 04:34:02.714241 3319 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 04:34:03.480505 kubelet[3319]: I0416 04:34:02.668513 3319 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 
monitorPeriod="10s" Apr 16 04:34:03.480505 kubelet[3319]: I0416 04:34:03.472940 3319 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 04:34:06.127015 kubelet[3319]: E0416 04:34:06.116826 3319 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 04:34:08.042653 containerd[1577]: time="2026-04-16T04:34:07.984372150Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=d2a12358fd1130deff04c55cdfdfdbb456ce8c7ebf309b3c51a01ebee50fda0e Apr 16 04:34:08.581419 containerd[1577]: time="2026-04-16T04:34:08.020088279Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id d2a12358fd1130deff04c55cdfdfdbb456ce8c7ebf309b3c51a01ebee50fda0e -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/d2a12358fd1130deff04c55cdfdfdbb456ce8c7ebf309b3c51a01ebee50fda0e delete" error="signal: killed" namespace=k8s.io Apr 16 04:34:08.673435 containerd[1577]: time="2026-04-16T04:34:08.644168661Z" level=warning msg="failed to clean up after shim disconnected" error=": signal: killed" id=d2a12358fd1130deff04c55cdfdfdbb456ce8c7ebf309b3c51a01ebee50fda0e namespace=k8s.io Apr 16 04:34:08.749278 kubelet[3319]: I0416 04:34:08.746603 3319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:34:08.994973 kubelet[3319]: I0416 04:34:08.818731 3319 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 16 04:34:09.097487 kubelet[3319]: I0416 04:34:09.071003 3319 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:34:10.444845 kubelet[3319]: I0416 04:34:09.919622 3319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:34:11.213090 kubelet[3319]: I0416 04:34:10.878299 3319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 16 04:34:11.213090 kubelet[3319]: I0416 04:34:10.878712 3319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac78b58928b07e176febead72371547f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ac78b58928b07e176febead72371547f\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:34:11.213090 kubelet[3319]: I0416 04:34:10.879164 3319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:34:11.213090 kubelet[3319]: I0416 04:34:10.879349 3319 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:34:11.213090 kubelet[3319]: I0416 04:34:11.039142 3319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac78b58928b07e176febead72371547f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ac78b58928b07e176febead72371547f\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:34:11.539222 kubelet[3319]: I0416 04:34:11.039434 3319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac78b58928b07e176febead72371547f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ac78b58928b07e176febead72371547f\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:34:12.675500 kubelet[3319]: E0416 04:34:12.675006 3319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:34:13.396513 kubelet[3319]: E0416 04:34:13.371430 3319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:34:15.777438 kubelet[3319]: E0416 04:34:15.480503 3319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:34:16.655443 kubelet[3319]: I0416 04:34:16.654255 3319 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 
04:34:17.545935 kubelet[3319]: I0416 04:34:17.478337 3319 scope.go:117] "RemoveContainer" containerID="f0cd1de76fde0144559eec367362ba082ecaac769ce8d59bd589dfd2dece8f01" Apr 16 04:34:20.330588 kubelet[3319]: I0416 04:34:20.274207 3319 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 16 04:34:21.756379 kubelet[3319]: I0416 04:34:21.752757 3319 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 16 04:34:35.784145 kubelet[3319]: E0416 04:34:33.500523 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="20.638s" Apr 16 04:34:44.395633 kubelet[3319]: E0416 04:34:44.365797 3319 cri_stats_provider.go:468] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/vda9\"" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Apr 16 04:34:44.923077 kubelet[3319]: E0416 04:34:44.785174 3319 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to list pod stats: make container stats: get filesystem info: Failed to get the info of the filesystem with mountpoint: cannot find filesystem info for device \"/dev/vda9\"" Apr 16 04:34:45.338108 containerd[1577]: time="2026-04-16T04:34:45.337580739Z" level=info msg="RemoveContainer for \"f0cd1de76fde0144559eec367362ba082ecaac769ce8d59bd589dfd2dece8f01\"" Apr 16 04:34:48.255982 containerd[1577]: time="2026-04-16T04:34:48.162537004Z" level=info msg="RemoveContainer for \"f0cd1de76fde0144559eec367362ba082ecaac769ce8d59bd589dfd2dece8f01\" returns successfully" Apr 16 04:34:50.563149 kubelet[3319]: I0416 04:34:50.550970 3319 scope.go:117] "RemoveContainer" containerID="f0cd1de76fde0144559eec367362ba082ecaac769ce8d59bd589dfd2dece8f01" Apr 16 04:34:51.358497 kubelet[3319]: I0416 04:34:51.203472 3319 scope.go:117] "RemoveContainer" containerID="d2a12358fd1130deff04c55cdfdfdbb456ce8c7ebf309b3c51a01ebee50fda0e" Apr 16 
04:34:52.485393 kubelet[3319]: E0416 04:34:52.481659 3319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:34:59.079433 containerd[1577]: time="2026-04-16T04:34:59.073295092Z" level=error msg="ContainerStatus for \"f0cd1de76fde0144559eec367362ba082ecaac769ce8d59bd589dfd2dece8f01\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f0cd1de76fde0144559eec367362ba082ecaac769ce8d59bd589dfd2dece8f01\": not found" Apr 16 04:35:01.049476 kubelet[3319]: E0416 04:35:01.041929 3319 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f0cd1de76fde0144559eec367362ba082ecaac769ce8d59bd589dfd2dece8f01\": not found" containerID="f0cd1de76fde0144559eec367362ba082ecaac769ce8d59bd589dfd2dece8f01" Apr 16 04:35:01.460360 kubelet[3319]: I0416 04:35:01.360227 3319 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f0cd1de76fde0144559eec367362ba082ecaac769ce8d59bd589dfd2dece8f01"} err="failed to get container status \"f0cd1de76fde0144559eec367362ba082ecaac769ce8d59bd589dfd2dece8f01\": rpc error: code = NotFound desc = an error occurred when try to find container \"f0cd1de76fde0144559eec367362ba082ecaac769ce8d59bd589dfd2dece8f01\": not found" Apr 16 04:35:02.118879 containerd[1577]: time="2026-04-16T04:35:02.118252224Z" level=info msg="CreateContainer within sandbox \"0898bbe01155cc69617da927e91e411a92d92864c0374e45b451c25228fff3dc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}" Apr 16 04:35:03.648266 kubelet[3319]: E0416 04:35:03.643256 3319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:35:06.298165 kubelet[3319]: 
E0416 04:35:06.290435 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="16.826s" Apr 16 04:35:11.209105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount198746861.mount: Deactivated successfully. Apr 16 04:35:15.055789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1929274925.mount: Deactivated successfully. Apr 16 04:35:23.509061 containerd[1577]: time="2026-04-16T04:35:23.384141549Z" level=info msg="CreateContainer within sandbox \"0898bbe01155cc69617da927e91e411a92d92864c0374e45b451c25228fff3dc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\"" Apr 16 04:35:35.413034 containerd[1577]: time="2026-04-16T04:35:35.351029810Z" level=info msg="StartContainer for \"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\"" Apr 16 04:35:43.203488 kubelet[3319]: E0416 04:35:43.172499 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="36.881s" Apr 16 04:35:48.461888 kubelet[3319]: E0416 04:35:48.461500 3319 kubelet_node_status.go:460] "Node not becoming ready in time after startup" Apr 16 04:35:49.883082 kubelet[3319]: E0416 04:35:48.863413 3319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:35:56.890254 kubelet[3319]: E0416 04:35:56.889734 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:36:15.390236 kubelet[3319]: E0416 04:36:15.383892 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
Apr 16 04:36:16.401625 kubelet[3319]: I0416 04:36:15.548663 3319 scope.go:117] "RemoveContainer" containerID="d2a12358fd1130deff04c55cdfdfdbb456ce8c7ebf309b3c51a01ebee50fda0e"
Apr 16 04:36:30.849758 kubelet[3319]: E0416 04:36:30.474086 3319 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 16 04:36:48.145447 kubelet[3319]: E0416 04:36:48.078308 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:36:49.573382 kubelet[3319]: E0416 04:36:49.552138 3319 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 16 04:36:55.590390 containerd[1577]: time="2026-04-16T04:36:55.275497456Z" level=info msg="RemoveContainer for \"d2a12358fd1130deff04c55cdfdfdbb456ce8c7ebf309b3c51a01ebee50fda0e\""
Apr 16 04:36:58.755255 containerd[1577]: time="2026-04-16T04:36:58.701488676Z" level=error msg="get state for 44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b" error="context deadline exceeded: unknown"
Apr 16 04:36:58.755255 containerd[1577]: time="2026-04-16T04:36:58.702306133Z" level=warning msg="unknown status" status=0
Apr 16 04:37:04.007192 kubelet[3319]: E0416 04:37:03.918632 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m16.336s"
Apr 16 04:37:05.918713 kubelet[3319]: E0416 04:37:04.118227 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:37:08.482060 containerd[1577]: time="2026-04-16T04:37:08.043538759Z" level=info msg="RemoveContainer for \"d2a12358fd1130deff04c55cdfdfdbb456ce8c7ebf309b3c51a01ebee50fda0e\" returns successfully"
Apr 16 04:37:09.079650 containerd[1577]: time="2026-04-16T04:37:08.844010041Z" level=error msg="get state for 44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b" error="context deadline exceeded: unknown"
Apr 16 04:37:09.791381 containerd[1577]: time="2026-04-16T04:37:09.437049356Z" level=warning msg="unknown status" status=0
Apr 16 04:37:15.545537 containerd[1577]: time="2026-04-16T04:37:15.080776510Z" level=error msg="ttrpc: received message on inactive stream" stream=11
Apr 16 04:37:17.650145 containerd[1577]: time="2026-04-16T04:37:16.864181971Z" level=error msg="ttrpc: received message on inactive stream" stream=13
Apr 16 04:37:24.778589 containerd[1577]: time="2026-04-16T04:37:23.636863351Z" level=error msg="ttrpc: received message on inactive stream" stream=17
Apr 16 04:37:25.695438 containerd[1577]: time="2026-04-16T04:37:23.768414089Z" level=error msg="get state for 44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b" error="context deadline exceeded: unknown"
Apr 16 04:37:26.295451 containerd[1577]: time="2026-04-16T04:37:26.265191469Z" level=info msg="StartContainer for \"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" returns successfully"
Apr 16 04:37:26.295451 containerd[1577]: time="2026-04-16T04:37:24.888645826Z" level=warning msg="unknown status" status=0
Apr 16 04:37:35.869032 kubelet[3319]: E0416 04:37:35.868180 3319 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 16 04:37:38.680935 kubelet[3319]: E0416 04:37:36.982593 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:37:47.004536 containerd[1577]: time="2026-04-16T04:37:46.989344622Z" level=error msg="ttrpc: received message on inactive stream" stream=33
Apr 16 04:37:48.872574 containerd[1577]: time="2026-04-16T04:37:47.545598657Z" level=error msg="ttrpc: received message on inactive stream" stream=29
Apr 16 04:37:50.627232 containerd[1577]: time="2026-04-16T04:37:50.593267893Z" level=error msg="failed to handle container TaskExit event container_id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" pid:3082 exit_status:1 exited_at:{seconds:1776314253 nanos:713733887}" error="failed to stop container: context deadline exceeded: unknown"
Apr 16 04:37:53.055484 containerd[1577]: time="2026-04-16T04:37:52.969160101Z" level=info msg="TaskExit event container_id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" pid:3082 exit_status:1 exited_at:{seconds:1776314253 nanos:713733887}"
Apr 16 04:37:55.485349 kubelet[3319]: E0416 04:37:55.460236 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:38:04.458398 containerd[1577]: time="2026-04-16T04:38:03.869554743Z" level=error msg="ttrpc: received message on inactive stream" stream=37
Apr 16 04:38:05.444829 containerd[1577]: time="2026-04-16T04:38:04.876511594Z" level=error msg="ttrpc: received message on inactive stream" stream=41
Apr 16 04:38:05.752193 containerd[1577]: time="2026-04-16T04:38:05.537465688Z" level=error msg="Failed to handle backOff event container_id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" pid:3082 exit_status:1 exited_at:{seconds:1776314253 nanos:713733887} for 7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 16 04:38:08.683493 containerd[1577]: time="2026-04-16T04:38:08.623775319Z" level=info msg="TaskExit event container_id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" pid:3082 exit_status:1 exited_at:{seconds:1776314253 nanos:713733887}"
Apr 16 04:38:11.496449 kubelet[3319]: E0416 04:38:11.489082 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:38:12.795183 kubelet[3319]: E0416 04:38:12.789658 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m7.155s"
Apr 16 04:38:14.451455 kubelet[3319]: E0416 04:38:14.451132 3319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:38:16.089164 kubelet[3319]: E0416 04:38:15.887196 3319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:38:18.975911 containerd[1577]: time="2026-04-16T04:38:18.974520189Z" level=error msg="Failed to handle backOff event container_id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" pid:3082 exit_status:1 exited_at:{seconds:1776314253 nanos:713733887} for 7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 16 04:38:20.900932 containerd[1577]: time="2026-04-16T04:38:20.052047692Z" level=error msg="ttrpc: received message on inactive stream" stream=45
Apr 16 04:38:20.900932 containerd[1577]: time="2026-04-16T04:38:20.900263537Z" level=error msg="ttrpc: received message on inactive stream" stream=49
Apr 16 04:38:25.268616 containerd[1577]: time="2026-04-16T04:38:25.260578087Z" level=info msg="TaskExit event container_id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" pid:3082 exit_status:1 exited_at:{seconds:1776314253 nanos:713733887}"
Apr 16 04:38:34.384259 containerd[1577]: time="2026-04-16T04:38:34.375283630Z" level=error msg="Failed to handle backOff event container_id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" pid:3082 exit_status:1 exited_at:{seconds:1776314253 nanos:713733887} for 7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 16 04:38:34.384259 containerd[1577]: time="2026-04-16T04:38:34.303631234Z" level=error msg="ttrpc: received message on inactive stream" stream=53
Apr 16 04:38:34.384259 containerd[1577]: time="2026-04-16T04:38:34.379648767Z" level=error msg="ttrpc: received message on inactive stream" stream=57
Apr 16 04:38:35.740988 kubelet[3319]: E0416 04:38:35.737237 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:38:45.135916 kubelet[3319]: E0416 04:38:45.133846 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="31.39s"
Apr 16 04:38:45.794365 containerd[1577]: time="2026-04-16T04:38:44.796548283Z" level=info msg="TaskExit event container_id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" pid:3082 exit_status:1 exited_at:{seconds:1776314253 nanos:713733887}"
Apr 16 04:38:48.854130 kubelet[3319]: I0416 04:38:48.847347 3319 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2a12358fd1130deff04c55cdfdfdbb456ce8c7ebf309b3c51a01ebee50fda0e"
Apr 16 04:38:49.563591 kubelet[3319]: E0416 04:38:49.562311 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:38:51.662314 kubelet[3319]: E0416 04:38:51.660882 3319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:38:56.809578 containerd[1577]: time="2026-04-16T04:38:56.683252800Z" level=error msg="Failed to handle backOff event container_id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" pid:3082 exit_status:1 exited_at:{seconds:1776314253 nanos:713733887} for 7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 16 04:38:58.554987 containerd[1577]: time="2026-04-16T04:38:57.053813099Z" level=error msg="ttrpc: received message on inactive stream" stream=65
Apr 16 04:38:58.554987 containerd[1577]: time="2026-04-16T04:38:57.071365433Z" level=error msg="ttrpc: received message on inactive stream" stream=67
Apr 16 04:39:01.746377 kubelet[3319]: E0416 04:39:01.745541 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:39:16.512865 containerd[1577]: time="2026-04-16T04:39:16.467998381Z" level=info msg="TaskExit event container_id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" pid:3082 exit_status:1 exited_at:{seconds:1776314253 nanos:713733887}"
Apr 16 04:39:20.871114 kubelet[3319]: E0416 04:39:20.865223 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:39:26.834982 containerd[1577]: time="2026-04-16T04:39:26.409629145Z" level=error msg="ttrpc: received message on inactive stream" stream=71
Apr 16 04:39:27.714471 containerd[1577]: time="2026-04-16T04:39:27.212296986Z" level=error msg="ttrpc: received message on inactive stream" stream=75
Apr 16 04:39:27.714471 containerd[1577]: time="2026-04-16T04:39:27.287230463Z" level=error msg="Failed to handle backOff event container_id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" pid:3082 exit_status:1 exited_at:{seconds:1776314253 nanos:713733887} for 7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 16 04:39:30.110951 kubelet[3319]: E0416 04:39:30.001374 3319 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 16 04:39:38.205690 kubelet[3319]: E0416 04:39:38.177404 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:39:52.141261 kubelet[3319]: E0416 04:39:51.978345 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m3.122s"
Apr 16 04:40:00.000312 kubelet[3319]: E0416 04:39:55.883392 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:40:00.657029 containerd[1577]: time="2026-04-16T04:40:00.637064880Z" level=info msg="StopContainer for \"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" with timeout 30 (s)"
Apr 16 04:40:00.906174 containerd[1577]: time="2026-04-16T04:40:00.634875771Z" level=info msg="TaskExit event container_id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" pid:3082 exit_status:1 exited_at:{seconds:1776314253 nanos:713733887}"
Apr 16 04:40:01.817187 containerd[1577]: time="2026-04-16T04:40:01.783613698Z" level=info msg="Stop container \"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" with signal terminated"
Apr 16 04:40:04.069902 kubelet[3319]: E0416 04:40:04.066348 3319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:40:06.572100 kubelet[3319]: I0416 04:40:06.567362 3319 status_manager.go:418] "Container startup changed for unknown container" pod="kube-system/kube-controller-manager-localhost" containerID="containerd://d2a12358fd1130deff04c55cdfdfdbb456ce8c7ebf309b3c51a01ebee50fda0e"
Apr 16 04:40:08.747645 containerd[1577]: time="2026-04-16T04:40:08.743240119Z" level=error msg="get state for 7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a" error="context deadline exceeded: unknown"
Apr 16 04:40:10.446553 containerd[1577]: time="2026-04-16T04:40:09.402481836Z" level=warning msg="unknown status" status=0
Apr 16 04:40:11.715829 kubelet[3319]: I0416 04:40:09.997351 3319 status_manager.go:418] "Container startup changed for unknown container" pod="kube-system/kube-controller-manager-localhost" containerID="containerd://d2a12358fd1130deff04c55cdfdfdbb456ce8c7ebf309b3c51a01ebee50fda0e"
Apr 16 04:40:13.909572 kubelet[3319]: E0416 04:40:11.973172 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:40:19.078471 containerd[1577]: time="2026-04-16T04:40:18.713511016Z" level=error msg="ttrpc: received message on inactive stream" stream=89
Apr 16 04:40:19.658142 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a-rootfs.mount: Deactivated successfully.
Apr 16 04:40:23.164282 containerd[1577]: time="2026-04-16T04:40:22.389988155Z" level=error msg="ttrpc: received message on inactive stream" stream=91
Apr 16 04:40:23.574356 containerd[1577]: time="2026-04-16T04:40:22.870221673Z" level=error msg="Failed to handle backOff event container_id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" pid:3082 exit_status:1 exited_at:{seconds:1776314253 nanos:713733887} for 7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown"
Apr 16 04:40:26.369460 kubelet[3319]: E0416 04:40:26.369103 3319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:40:28.554343 kubelet[3319]: E0416 04:40:28.485092 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:40:40.474992 kubelet[3319]: E0416 04:40:40.468599 3319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:40:45.768776 kubelet[3319]: E0416 04:40:45.753570 3319 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 16 04:40:50.395989 containerd[1577]: time="2026-04-16T04:40:50.358295841Z" level=info msg="Kill container \"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\""
Apr 16 04:40:52.875258 kubelet[3319]: E0416 04:40:52.874560 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:40:54.515162 kubelet[3319]: E0416 04:40:49.814411 3319 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 16 04:41:14.566077 kubelet[3319]: E0416 04:41:14.550503 3319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:41:26.438605 kubelet[3319]: E0416 04:41:26.380851 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:41:27.046803 kubelet[3319]: E0416 04:41:26.885556 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m12.187s"
Apr 16 04:41:27.593248 containerd[1577]: time="2026-04-16T04:41:27.583389820Z" level=info msg="TaskExit event container_id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" pid:3082 exit_status:1 exited_at:{seconds:1776314253 nanos:713733887}"
Apr 16 04:41:28.266320 kubelet[3319]: E0416 04:41:27.671733 3319 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 16 04:41:40.429199 containerd[1577]: time="2026-04-16T04:41:39.684524385Z" level=error msg="ttrpc: received message on inactive stream" stream=101
Apr 16 04:41:42.501122 containerd[1577]: time="2026-04-16T04:41:39.705157891Z" level=error msg="Failed to handle backOff event container_id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" pid:3082 exit_status:1 exited_at:{seconds:1776314253 nanos:713733887} for 7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 16 04:41:44.516417 kubelet[3319]: E0416 04:41:44.493525 3319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:41:47.432029 kubelet[3319]: E0416 04:41:47.342241 3319 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 16 04:41:56.857513 kubelet[3319]: E0416 04:41:56.857001 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:42:05.903223 containerd[1577]: time="2026-04-16T04:42:05.875899092Z" level=error msg="failed to handle container TaskExit event container_id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" pid:3420 exit_status:1 exited_at:{seconds:1776314514 nanos:461437439}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown"
Apr 16 04:42:07.664836 kubelet[3319]: E0416 04:42:06.861275 3319 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 16 04:42:08.592760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b-rootfs.mount: Deactivated successfully.
Apr 16 04:42:09.161648 containerd[1577]: time="2026-04-16T04:42:07.816987304Z" level=info msg="TaskExit event container_id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" pid:3420 exit_status:1 exited_at:{seconds:1776314514 nanos:461437439}"
Apr 16 04:42:10.053473 containerd[1577]: time="2026-04-16T04:42:09.170044824Z" level=error msg="ttrpc: received message on inactive stream" stream=33
Apr 16 04:42:10.498107 kubelet[3319]: E0416 04:42:09.774145 3319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:42:10.872317 kubelet[3319]: E0416 04:42:10.748649 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:42:18.063113 containerd[1577]: time="2026-04-16T04:42:18.048773211Z" level=error msg="ttrpc: received message on inactive stream" stream=45
Apr 16 04:42:18.379404 containerd[1577]: time="2026-04-16T04:42:18.060310075Z" level=error msg="Failed to handle backOff event container_id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" pid:3420 exit_status:1 exited_at:{seconds:1776314514 nanos:461437439} for 44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown"
Apr 16 04:42:22.358182 containerd[1577]: time="2026-04-16T04:42:22.350927069Z" level=info msg="TaskExit event container_id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" pid:3420 exit_status:1 exited_at:{seconds:1776314514 nanos:461437439}"
Apr 16 04:42:27.555361 containerd[1577]: time="2026-04-16T04:42:27.413534245Z" level=error msg="StopContainer for \"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" failed" error="rpc error: code = DeadlineExceeded desc = an error occurs during waiting for container \"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" to be killed: wait container \"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\": context deadline exceeded"
Apr 16 04:42:28.702360 kubelet[3319]: E0416 04:42:27.817328 3319 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a"
Apr 16 04:42:29.502432 kubelet[3319]: E0416 04:42:29.274791 3319 kuberuntime_container.go:863] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-scheduler-localhost" podUID="33fee6ba1581201eda98a989140db110" containerName="kube-scheduler" containerID="containerd://7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a" gracePeriod=30
Apr 16 04:42:29.502432 kubelet[3319]: E0416 04:42:29.275371 3319 kuberuntime_manager.go:1176] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-scheduler" containerID={"Type":"containerd","ID":"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a"} pod="kube-system/kube-scheduler-localhost"
Apr 16 04:42:30.388498 kubelet[3319]: E0416 04:42:29.276566 3319 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-scheduler\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-scheduler-localhost" podUID="33fee6ba1581201eda98a989140db110"
Apr 16 04:42:31.156251 kubelet[3319]: E0416 04:42:30.702367 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="57.835s"
Apr 16 04:42:31.899111 containerd[1577]: time="2026-04-16T04:42:31.878280106Z" level=error msg="Failed to handle backOff event container_id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" pid:3420 exit_status:1 exited_at:{seconds:1776314514 nanos:461437439} for 44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 16 04:42:32.505054 kubelet[3319]: E0416 04:42:31.996277 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:42:32.870989 containerd[1577]: time="2026-04-16T04:42:32.627004584Z" level=error msg="ttrpc: received message on inactive stream" stream=51
Apr 16 04:42:32.919986 containerd[1577]: time="2026-04-16T04:42:32.859865974Z" level=error msg="ttrpc: received message on inactive stream" stream=55
Apr 16 04:42:33.477618 kubelet[3319]: E0416 04:42:33.474566 3319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:42:36.716449 containerd[1577]: time="2026-04-16T04:42:36.714343269Z" level=info msg="TaskExit event container_id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" pid:3420 exit_status:1 exited_at:{seconds:1776314514 nanos:461437439}"
Apr 16 04:42:39.052793 kubelet[3319]: E0416 04:42:39.015045 3319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:42:44.115225 containerd[1577]: time="2026-04-16T04:42:44.069250872Z" level=info msg="StopContainer for \"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" with timeout 30 (s)"
Apr 16 04:42:45.846155 containerd[1577]: time="2026-04-16T04:42:45.495433062Z" level=info msg="Skipping the sending of signal terminated to container \"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" because a prior stop with timeout>0 request already sent the signal"
Apr 16 04:42:46.596965 kubelet[3319]: E0416 04:42:46.566351 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:42:47.993382 containerd[1577]: time="2026-04-16T04:42:47.969024934Z" level=error msg="ttrpc: received message on inactive stream" stream=67
Apr 16 04:42:48.817274 containerd[1577]: time="2026-04-16T04:42:48.792610833Z" level=error msg="Failed to handle backOff event container_id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" pid:3420 exit_status:1 exited_at:{seconds:1776314514 nanos:461437439} for 44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown"
Apr 16 04:42:54.647210 kubelet[3319]: E0416 04:42:54.585351 3319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:42:57.649284 containerd[1577]: time="2026-04-16T04:42:57.549291886Z" level=info msg="TaskExit event container_id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" pid:3420 exit_status:1 exited_at:{seconds:1776314514 nanos:461437439}"
Apr 16 04:42:59.868334 kubelet[3319]: E0416 04:42:59.770386 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:43:03.082406 kubelet[3319]: E0416 04:43:02.901298 3319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:43:08.317059 containerd[1577]: time="2026-04-16T04:43:08.283702856Z" level=error msg="Failed to handle backOff event container_id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" pid:3420 exit_status:1 exited_at:{seconds:1776314514 nanos:461437439} for 44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 16 04:43:09.153956 containerd[1577]: time="2026-04-16T04:43:08.498261218Z" level=error msg="ttrpc: received message on inactive stream" stream=71
Apr 16 04:43:09.395157 containerd[1577]: time="2026-04-16T04:43:09.343879109Z" level=error msg="ttrpc: received message on inactive stream" stream=75
Apr 16 04:43:10.073738 kubelet[3319]: E0416 04:43:10.021171 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.535s"
Apr 16 04:43:13.298286 kubelet[3319]: E0416 04:43:12.201522 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:43:16.313466 containerd[1577]: time="2026-04-16T04:43:16.311135449Z" level=info msg="Kill container \"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\""
Apr 16 04:43:26.038767 containerd[1577]: time="2026-04-16T04:43:25.982875961Z" level=info msg="TaskExit event container_id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" pid:3420 exit_status:1 exited_at:{seconds:1776314514 nanos:461437439}"
Apr 16 04:43:27.274400 kubelet[3319]: E0416 04:43:27.252398 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:43:29.038133 kubelet[3319]: E0416 04:43:29.037303 3319 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 16 04:43:35.392232 containerd[1577]: time="2026-04-16T04:43:35.378713264Z" level=error msg="Failed to handle backOff event container_id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" pid:3420 exit_status:1 exited_at:{seconds:1776314514 nanos:461437439} for 44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 16 04:43:35.392232 containerd[1577]: time="2026-04-16T04:43:35.384845135Z" level=error msg="ttrpc: received message on inactive stream" stream=79
Apr 16 04:43:35.392232 containerd[1577]: time="2026-04-16T04:43:35.388930303Z" level=error msg="ttrpc: received message on inactive stream" stream=83
Apr 16 04:43:38.059457 kubelet[3319]: E0416 04:43:38.047424 3319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:43:39.301886 kubelet[3319]: E0416 04:43:38.415435 3319 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 16 04:43:43.045560 kubelet[3319]: E0416 04:43:43.041612 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:43:44.986147 containerd[1577]: time="2026-04-16T04:43:44.954048091Z" level=info msg="StopContainer for \"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" with timeout 30 (s)"
Apr 16 04:43:46.890321 containerd[1577]: time="2026-04-16T04:43:46.887359390Z" level=info msg="Stop container \"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" with signal terminated"
Apr 16 04:43:53.781505 containerd[1577]: time="2026-04-16T04:43:53.674449283Z" level=info msg="TaskExit event container_id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" pid:3082 exit_status:1 exited_at:{seconds:1776314253 nanos:713733887}"
Apr 16 04:43:56.968018 kubelet[3319]: E0416 04:43:56.945938 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="42.634s"
Apr 16 04:43:59.465817 kubelet[3319]: E0416 04:43:59.456183 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:44:06.506947 containerd[1577]: time="2026-04-16T04:44:06.179198126Z" level=error msg="Failed to handle backOff event container_id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" id:\"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" pid:3082 exit_status:1 exited_at:{seconds:1776314253 nanos:713733887} for 7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 16 04:44:07.294839 containerd[1577]: time="2026-04-16T04:44:06.254386218Z" level=error msg="ttrpc: received message on inactive stream" stream=117
Apr 16 04:44:08.491470 containerd[1577]: time="2026-04-16T04:44:08.482880277Z" level=info msg="TaskExit event container_id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" pid:3420 exit_status:1 exited_at:{seconds:1776314514 nanos:461437439}"
Apr 16 04:44:12.492484 kubelet[3319]: E0416 04:44:12.491478 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:44:18.001492 containerd[1577]: time="2026-04-16T04:44:17.996571239Z" level=error msg="Failed to handle backOff event container_id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" pid:3420 exit_status:1 exited_at:{seconds:1776314514 nanos:461437439} for 44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 16 04:44:18.878374 containerd[1577]: time="2026-04-16T04:44:18.697305723Z" level=error msg="ttrpc: received message on inactive stream" stream=95
Apr 16 04:44:18.888482 containerd[1577]: time="2026-04-16T04:44:18.877283547Z" level=error msg="ttrpc: received message on inactive stream" stream=97
Apr 16 04:44:22.854509 kubelet[3319]: E0416 04:44:22.819508 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:44:22.854509 kubelet[3319]: E0416 04:44:22.847394 3319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:44:26.302527 kubelet[3319]: E0416 04:44:26.298935 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="29.338s"
Apr 16 04:44:31.156444 kubelet[3319]: E0416 04:44:31.149086 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:44:36.890345 containerd[1577]: time="2026-04-16T04:44:36.874393441Z" level=info msg="Kill container \"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\""
Apr 16 04:44:40.469502 kubelet[3319]: E0416 04:44:40.462274 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:44:40.785375 kubelet[3319]: E0416 04:44:40.568073 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.172s"
Apr 16 04:44:48.281093 kubelet[3319]: E0416 04:44:48.242345 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:44:53.820475 kubelet[3319]: E0416 04:44:53.814441 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.034s"
Apr 16 04:44:57.565494 kubelet[3319]: E0416 04:44:57.560378 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:45:02.904144 kubelet[3319]: E0416 04:45:02.877525 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.052s"
Apr 16 04:45:05.180294 kubelet[3319]: E0416 04:45:05.179153 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:45:05.983872 kubelet[3319]: E0416 04:45:05.915597 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.113s"
Apr 16 04:45:08.757443 kubelet[3319]: E0416 04:45:08.715638 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.677s"
Apr 16 04:45:10.503699 kubelet[3319]: E0416 04:45:10.502058 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.49s"
Apr 16 04:45:11.240224 kubelet[3319]: E0416 04:45:11.164311 3319 
log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a" Apr 16 04:45:11.300498 kubelet[3319]: E0416 04:45:11.022567 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:45:11.300498 kubelet[3319]: E0416 04:45:11.361274 3319 kuberuntime_container.go:863] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-scheduler-localhost" podUID="33fee6ba1581201eda98a989140db110" containerName="kube-scheduler" containerID="containerd://7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a" gracePeriod=30 Apr 16 04:45:11.300498 kubelet[3319]: E0416 04:45:11.366801 3319 kuberuntime_manager.go:1176] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-scheduler" containerID={"Type":"containerd","ID":"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a"} pod="kube-system/kube-scheduler-localhost" Apr 16 04:45:11.300498 kubelet[3319]: E0416 04:45:11.367357 3319 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-scheduler\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-scheduler-localhost" podUID="33fee6ba1581201eda98a989140db110" Apr 16 04:45:11.472101 containerd[1577]: time="2026-04-16T04:45:11.305033022Z" level=error msg="StopContainer for \"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" failed" error="rpc error: code = Canceled desc = an error occurs during waiting for container \"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" to be killed: wait 
container \"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\": context canceled" Apr 16 04:45:14.778536 kubelet[3319]: E0416 04:45:14.775572 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.411s" Apr 16 04:45:16.653233 containerd[1577]: time="2026-04-16T04:45:16.651199458Z" level=info msg="StopContainer for \"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" with timeout 30 (s)" Apr 16 04:45:17.170328 containerd[1577]: time="2026-04-16T04:45:17.075696079Z" level=info msg="Skipping the sending of signal terminated to container \"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\" because a prior stop with timeout>0 request already sent the signal" Apr 16 04:45:18.722252 kubelet[3319]: E0416 04:45:18.605566 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:45:21.270069 kubelet[3319]: E0416 04:45:21.265365 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.489s" Apr 16 04:45:22.815658 containerd[1577]: time="2026-04-16T04:45:22.798093894Z" level=info msg="TaskExit event container_id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" pid:3420 exit_status:1 exited_at:{seconds:1776314514 nanos:461437439}" Apr 16 04:45:25.983205 kubelet[3319]: E0416 04:45:25.978441 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.556s" Apr 16 04:45:26.435069 kubelet[3319]: E0416 04:45:26.420819 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 
04:45:32.938319 containerd[1577]: time="2026-04-16T04:45:32.900659711Z" level=error msg="ttrpc: received message on inactive stream" stream=113 Apr 16 04:45:33.314118 containerd[1577]: time="2026-04-16T04:45:33.281094676Z" level=error msg="get state for 44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b" error="context deadline exceeded: unknown" Apr 16 04:45:33.484711 containerd[1577]: time="2026-04-16T04:45:33.296375067Z" level=warning msg="unknown status" status=0 Apr 16 04:45:34.175701 containerd[1577]: time="2026-04-16T04:45:34.172314589Z" level=error msg="ttrpc: received message on inactive stream" stream=115 Apr 16 04:45:34.672602 containerd[1577]: time="2026-04-16T04:45:34.312089617Z" level=error msg="Failed to handle backOff event container_id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" id:\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" pid:3420 exit_status:1 exited_at:{seconds:1776314514 nanos:461437439} for 44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 16 04:45:34.886605 kubelet[3319]: E0416 04:45:34.876431 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:45:36.110998 kubelet[3319]: E0416 04:45:36.110333 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.128s" Apr 16 04:45:37.642352 kubelet[3319]: E0416 04:45:37.642096 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.46s" Apr 16 04:45:39.445082 kubelet[3319]: E0416 04:45:39.422217 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.481s" 
Apr 16 04:45:41.118963 kubelet[3319]: E0416 04:45:41.056520 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:45:43.683768 kubelet[3319]: E0416 04:45:43.682305 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.196s" Apr 16 04:45:47.240305 containerd[1577]: time="2026-04-16T04:45:47.236977914Z" level=info msg="Kill container \"7245a595013ffc120bfe81aafc28a88dc168f7bb2e2f25f2204094be465bf80a\"" Apr 16 04:45:48.045089 kubelet[3319]: E0416 04:45:47.991245 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:45:50.802581 kubelet[3319]: E0416 04:45:50.792197 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.744s" Apr 16 04:45:56.779231 kubelet[3319]: E0416 04:45:56.715633 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:46:02.378441 kubelet[3319]: E0416 04:46:02.378096 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.468s" Apr 16 04:46:03.951275 kubelet[3319]: E0416 04:46:03.927109 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:46:08.769664 kubelet[3319]: E0416 04:46:08.767354 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.347s" Apr 16 04:46:10.082197 kubelet[3319]: E0416 04:46:10.078175 3319 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:46:10.905517 kubelet[3319]: E0416 04:46:10.701507 3319 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b" Apr 16 04:46:11.557261 kubelet[3319]: E0416 04:46:11.554092 3319 kuberuntime_container.go:863] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-controller-manager-localhost" podUID="e9ca41790ae21be9f4cbd451ade0acec" containerName="kube-controller-manager" containerID="containerd://44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b" gracePeriod=30 Apr 16 04:46:12.020405 kubelet[3319]: E0416 04:46:11.981563 3319 kuberuntime_manager.go:1176] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-controller-manager" containerID={"Type":"containerd","ID":"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b"} pod="kube-system/kube-controller-manager-localhost" Apr 16 04:46:12.484965 kubelet[3319]: E0416 04:46:12.081075 3319 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-controller-manager\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-controller-manager-localhost" podUID="e9ca41790ae21be9f4cbd451ade0acec" Apr 16 04:46:12.598614 containerd[1577]: time="2026-04-16T04:46:12.567543230Z" level=error msg="StopContainer for \"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" failed" error="rpc error: code = Canceled desc = an error occurs during waiting for container 
\"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" to be killed: wait container \"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\": context canceled" Apr 16 04:46:14.868490 kubelet[3319]: E0416 04:46:14.867632 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:46:22.514489 containerd[1577]: time="2026-04-16T04:46:22.510578970Z" level=info msg="StopContainer for \"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" with timeout 30 (s)" Apr 16 04:46:23.651047 kubelet[3319]: E0416 04:46:23.646863 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:46:24.460391 containerd[1577]: time="2026-04-16T04:46:23.912373413Z" level=info msg="Skipping the sending of signal terminated to container \"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\" because a prior stop with timeout>0 request already sent the signal" Apr 16 04:46:25.869637 kubelet[3319]: E0416 04:46:25.590560 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="16.316s" Apr 16 04:46:32.091216 kubelet[3319]: E0416 04:46:32.079593 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:46:38.965876 kubelet[3319]: E0416 04:46:38.963628 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.097s" Apr 16 04:46:39.736566 kubelet[3319]: E0416 04:46:39.258250 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" Apr 16 04:46:42.542705 kubelet[3319]: E0416 04:46:42.517355 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.212s" Apr 16 04:46:48.374922 kubelet[3319]: E0416 04:46:48.373545 3319 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:46:53.619572 kubelet[3319]: E0416 04:46:53.611564 3319 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.082s" Apr 16 04:46:54.099629 containerd[1577]: time="2026-04-16T04:46:54.016025930Z" level=info msg="Kill container \"44504b1134419989b162aeae12f852d1bc92281e5704f31bf28c48bb0ce7c30b\"" Apr 16 04:46:54.211972 sudo[1786]: pam_unix(sudo:session): session closed for user root Apr 16 04:46:55.012222 sshd[1778]: pam_unix(sshd:session): session closed for user core