Apr 17 00:14:44.932807 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Apr 16 22:00:21 -00 2026
Apr 17 00:14:44.932848 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9
Apr 17 00:14:44.932864 kernel: BIOS-provided physical RAM map:
Apr 17 00:14:44.932871 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 17 00:14:44.932879 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 17 00:14:44.932886 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 17 00:14:44.932895 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 17 00:14:44.932903 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 17 00:14:44.933934 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 17 00:14:44.933953 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 17 00:14:44.933962 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 17 00:14:44.933997 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 17 00:14:44.934004 kernel: NX (Execute Disable) protection: active
Apr 17 00:14:44.934012 kernel: APIC: Static calls initialized
Apr 17 00:14:44.934021 kernel: SMBIOS 2.8 present.
Apr 17 00:14:44.934029 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 17 00:14:44.934115 kernel: DMI: Memory slots populated: 1/1
Apr 17 00:14:44.934124 kernel: Hypervisor detected: KVM
Apr 17 00:14:44.934268 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 17 00:14:44.934277 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 00:14:44.934285 kernel: kvm-clock: using sched offset of 24728720738 cycles
Apr 17 00:14:44.934296 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 00:14:44.934304 kernel: tsc: Detected 2793.438 MHz processor
Apr 17 00:14:44.934313 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 00:14:44.934321 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 00:14:44.934329 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 17 00:14:44.934920 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 17 00:14:44.934930 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 00:14:44.934939 kernel: Using GB pages for direct mapping
Apr 17 00:14:44.934947 kernel: ACPI: Early table checksum verification disabled
Apr 17 00:14:44.934955 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 17 00:14:44.934964 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 00:14:44.934972 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 00:14:44.934981 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 00:14:44.934989 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 17 00:14:44.935094 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 00:14:44.935102 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 00:14:44.935111 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 00:14:44.935119 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 00:14:44.935256 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 17 00:14:44.935331 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 17 00:14:44.935402 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 17 00:14:44.935412 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 17 00:14:44.935421 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 17 00:14:44.935430 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 17 00:14:44.935439 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 17 00:14:44.935448 kernel: No NUMA configuration found
Apr 17 00:14:44.935456 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 17 00:14:44.935464 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Apr 17 00:14:44.935599 kernel: Zone ranges:
Apr 17 00:14:44.935608 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 00:14:44.935617 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 17 00:14:44.935625 kernel: Normal empty
Apr 17 00:14:44.935633 kernel: Device empty
Apr 17 00:14:44.935641 kernel: Movable zone start for each node
Apr 17 00:14:44.935650 kernel: Early memory node ranges
Apr 17 00:14:44.935658 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 17 00:14:44.935666 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 17 00:14:44.935677 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 17 00:14:44.935686 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 00:14:44.935694 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 17 00:14:44.935702 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 17 00:14:44.935772 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 17 00:14:44.935782 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 00:14:44.935791 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 17 00:14:44.935799 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 17 00:14:44.935808 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 00:14:44.935879 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 00:14:44.935888 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 00:14:44.935897 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 00:14:44.935906 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 00:14:44.935914 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 17 00:14:44.935923 kernel: TSC deadline timer available
Apr 17 00:14:44.935932 kernel: CPU topo: Max. logical packages: 1
Apr 17 00:14:44.935941 kernel: CPU topo: Max. logical dies: 1
Apr 17 00:14:44.935949 kernel: CPU topo: Max. dies per package: 1
Apr 17 00:14:44.935957 kernel: CPU topo: Max. threads per core: 1
Apr 17 00:14:44.935968 kernel: CPU topo: Num. cores per package: 4
Apr 17 00:14:44.935977 kernel: CPU topo: Num. threads per package: 4
Apr 17 00:14:44.935986 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 17 00:14:44.935995 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 17 00:14:44.936004 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 17 00:14:44.936012 kernel: kvm-guest: setup PV sched yield
Apr 17 00:14:44.936021 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 17 00:14:44.936030 kernel: Booting paravirtualized kernel on KVM
Apr 17 00:14:44.936039 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 00:14:44.936050 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 17 00:14:44.936059 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u524288
Apr 17 00:14:44.936068 kernel: pcpu-alloc: s207448 r8192 d30120 u524288 alloc=1*2097152
Apr 17 00:14:44.936076 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 17 00:14:44.936086 kernel: kvm-guest: PV spinlocks enabled
Apr 17 00:14:44.936095 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 00:14:44.936105 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9
Apr 17 00:14:44.936114 kernel: random: crng init done
Apr 17 00:14:44.937009 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 00:14:44.937021 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 00:14:44.937030 kernel: Fallback order for Node 0: 0
Apr 17 00:14:44.937039 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Apr 17 00:14:44.937048 kernel: Policy zone: DMA32
Apr 17 00:14:44.937057 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 00:14:44.937065 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 17 00:14:44.937074 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 17 00:14:44.937083 kernel: ftrace: allocated 157 pages with 5 groups
Apr 17 00:14:44.937884 kernel: Dynamic Preempt: voluntary
Apr 17 00:14:44.937895 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 00:14:44.937905 kernel: rcu: RCU event tracing is enabled.
Apr 17 00:14:44.937914 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 17 00:14:44.937923 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 00:14:44.937932 kernel: Rude variant of Tasks RCU enabled.
Apr 17 00:14:44.938021 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 00:14:44.938031 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 00:14:44.938041 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 17 00:14:44.938858 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 00:14:44.938872 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 00:14:44.938882 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 00:14:44.938891 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 17 00:14:44.938901 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 00:14:44.938912 kernel: Console: colour VGA+ 80x25
Apr 17 00:14:44.954089 kernel: printk: legacy console [ttyS0] enabled
Apr 17 00:14:44.954452 kernel: ACPI: Core revision 20240827
Apr 17 00:14:44.954470 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 17 00:14:44.954487 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 00:14:44.954651 kernel: x2apic enabled
Apr 17 00:14:44.954661 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 00:14:44.954674 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 17 00:14:44.954781 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 17 00:14:44.954791 kernel: kvm-guest: setup PV IPIs
Apr 17 00:14:44.954801 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 17 00:14:44.954811 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 00:14:44.954972 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 17 00:14:44.954981 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 17 00:14:44.954991 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 17 00:14:44.955001 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 17 00:14:44.955010 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 00:14:44.955020 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 00:14:44.955029 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 00:14:44.955039 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 17 00:14:44.955394 kernel: RETBleed: Vulnerable
Apr 17 00:14:44.955404 kernel: Speculative Store Bypass: Vulnerable
Apr 17 00:14:44.955414 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 00:14:44.955424 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 17 00:14:44.955433 kernel: active return thunk: its_return_thunk
Apr 17 00:14:44.955443 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 00:14:44.955453 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 00:14:44.955464 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 00:14:44.955641 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 00:14:44.955788 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 17 00:14:44.955800 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 17 00:14:44.955811 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 17 00:14:44.955819 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 00:14:44.955828 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 17 00:14:44.955837 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 17 00:14:44.955845 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 17 00:14:44.955854 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 17 00:14:44.955862 kernel: Freeing SMP alternatives memory: 32K
Apr 17 00:14:44.956113 kernel: pid_max: default: 32768 minimum: 301
Apr 17 00:14:44.956123 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 17 00:14:44.956343 kernel: landlock: Up and running.
Apr 17 00:14:44.956352 kernel: SELinux: Initializing.
Apr 17 00:14:44.956361 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 00:14:44.956370 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 00:14:44.956379 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 17 00:14:44.956389 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 17 00:14:44.956399 kernel: signal: max sigframe size: 3632
Apr 17 00:14:44.956413 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 00:14:44.956424 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 00:14:44.956433 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 17 00:14:44.956443 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 17 00:14:44.956453 kernel: smp: Bringing up secondary CPUs ...
Apr 17 00:14:44.956463 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 00:14:44.956472 kernel: .... node #0, CPUs: #1 #2 #3
Apr 17 00:14:44.956480 kernel: smp: Brought up 1 node, 4 CPUs
Apr 17 00:14:44.956488 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 17 00:14:44.956771 kernel: Memory: 2419752K/2571752K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46216K init, 2532K bss, 146112K reserved, 0K cma-reserved)
Apr 17 00:14:44.956782 kernel: devtmpfs: initialized
Apr 17 00:14:44.956791 kernel: x86/mm: Memory block size: 128MB
Apr 17 00:14:44.956800 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 00:14:44.956809 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 17 00:14:44.956818 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 00:14:44.956828 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 00:14:44.956837 kernel: audit: initializing netlink subsys (disabled)
Apr 17 00:14:44.956846 kernel: audit: type=2000 audit(1776384868.587:1): state=initialized audit_enabled=0 res=1
Apr 17 00:14:44.956990 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 00:14:44.956999 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 00:14:44.957008 kernel: cpuidle: using governor menu
Apr 17 00:14:44.957017 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 00:14:44.957027 kernel: dca service started, version 1.12.1
Apr 17 00:14:44.957036 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Apr 17 00:14:44.957046 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 17 00:14:44.957055 kernel: PCI: Using configuration type 1 for base access
Apr 17 00:14:44.957064 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 00:14:44.957077 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 00:14:44.957085 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 00:14:44.957094 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 00:14:44.957103 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 00:14:44.957112 kernel: ACPI: Added _OSI(Module Device)
Apr 17 00:14:44.957121 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 00:14:44.978456 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 00:14:44.978477 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 00:14:44.978487 kernel: ACPI: Interpreter enabled
Apr 17 00:14:44.978602 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 17 00:14:44.978611 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 00:14:44.978621 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 00:14:44.978631 kernel: PCI: Using E820 reservations for host bridge windows
Apr 17 00:14:44.978640 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 17 00:14:44.978649 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 00:14:44.982647 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 00:14:44.982760 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 17 00:14:44.982938 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 17 00:14:44.982951 kernel: PCI host bridge to bus 0000:00
Apr 17 00:14:44.984306 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 17 00:14:44.984405 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 17 00:14:44.984486 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 00:14:44.984637 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 17 00:14:44.984708 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 17 00:14:44.984861 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 17 00:14:44.984933 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 00:14:44.986712 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 17 00:14:44.987075 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 17 00:14:44.988304 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Apr 17 00:14:44.988410 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Apr 17 00:14:44.988578 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Apr 17 00:14:44.988670 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 17 00:14:44.988752 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 12695 usecs
Apr 17 00:14:44.990404 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 17 00:14:44.990589 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Apr 17 00:14:44.990677 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Apr 17 00:14:44.990768 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 17 00:14:44.991047 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 17 00:14:44.991287 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Apr 17 00:14:44.991388 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Apr 17 00:14:44.991625 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 17 00:14:44.991876 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 17 00:14:44.991973 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Apr 17 00:14:44.992076 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Apr 17 00:14:44.993051 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 17 00:14:44.993319 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Apr 17 00:14:44.994465 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 17 00:14:44.994638 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 17 00:14:44.994723 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 12695 usecs
Apr 17 00:14:44.994954 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 17 00:14:44.995116 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Apr 17 00:14:44.996796 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Apr 17 00:14:44.998024 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 17 00:14:44.998259 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Apr 17 00:14:44.998277 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 00:14:44.998288 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 00:14:44.998298 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 00:14:44.998308 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 00:14:44.998974 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 17 00:14:44.998986 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 17 00:14:44.998996 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 17 00:14:44.999006 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 17 00:14:44.999016 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 17 00:14:44.999025 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 17 00:14:44.999035 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 17 00:14:44.999045 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 17 00:14:44.999055 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 17 00:14:44.999098 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 17 00:14:44.999108 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 17 00:14:44.999118 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 17 00:14:44.999284 kernel: iommu: Default domain type: Translated
Apr 17 00:14:44.999296 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 00:14:44.999306 kernel: PCI: Using ACPI for IRQ routing
Apr 17 00:14:44.999316 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 00:14:44.999326 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 17 00:14:44.999336 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 17 00:14:44.999700 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 17 00:14:44.999790 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 17 00:14:44.999870 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 17 00:14:44.999882 kernel: vgaarb: loaded
Apr 17 00:14:44.999891 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 17 00:14:44.999901 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 17 00:14:44.999911 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 00:14:44.999921 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 00:14:44.999932 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 00:14:44.999947 kernel: pnp: PnP ACPI init
Apr 17 00:14:45.003477 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 17 00:14:45.003565 kernel: pnp: PnP ACPI: found 6 devices
Apr 17 00:14:45.003577 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 00:14:45.003586 kernel: NET: Registered PF_INET protocol family
Apr 17 00:14:45.003596 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 00:14:45.003606 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 17 00:14:45.003615 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 00:14:45.003646 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 00:14:45.003656 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 17 00:14:45.003665 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 17 00:14:45.003674 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 00:14:45.003683 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 00:14:45.003692 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 00:14:45.003701 kernel: NET: Registered PF_XDP protocol family
Apr 17 00:14:45.003805 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 17 00:14:45.003888 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 17 00:14:45.004097 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 00:14:45.004349 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 17 00:14:45.004435 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 17 00:14:45.005962 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 17 00:14:45.005984 kernel: PCI: CLS 0 bytes, default 64
Apr 17 00:14:45.005994 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 00:14:45.006006 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 00:14:45.006016 kernel: Initialise system trusted keyrings
Apr 17 00:14:45.006032 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 17 00:14:45.006042 kernel: Key type asymmetric registered
Apr 17 00:14:45.006051 kernel: Asymmetric key parser 'x509' registered
Apr 17 00:14:45.006060 kernel: hrtimer: interrupt took 8360709 ns
Apr 17 00:14:45.006070 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 17 00:14:45.006080 kernel: io scheduler mq-deadline registered
Apr 17 00:14:45.006089 kernel: io scheduler kyber registered
Apr 17 00:14:45.006099 kernel: io scheduler bfq registered
Apr 17 00:14:45.006108 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 00:14:45.006121 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 17 00:14:45.007436 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 17 00:14:45.007449 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 17 00:14:45.007459 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 00:14:45.007469 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 00:14:45.007478 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 00:14:45.007486 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 00:14:45.007561 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 00:14:45.008984 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 17 00:14:45.009018 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 17 00:14:45.009122 kernel: rtc_cmos 00:04: registered as rtc0
Apr 17 00:14:45.009331 kernel: rtc_cmos 00:04: setting system clock to 2026-04-17T00:14:41 UTC (1776384881)
Apr 17 00:14:45.009391 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 17 00:14:45.009400 kernel: intel_pstate: CPU model not supported
Apr 17 00:14:45.009409 kernel: NET: Registered PF_INET6 protocol family
Apr 17 00:14:45.009419 kernel: Segment Routing with IPv6
Apr 17 00:14:45.009481 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 00:14:45.009490 kernel: NET: Registered PF_PACKET protocol family
Apr 17 00:14:45.010108 kernel: Key type dns_resolver registered
Apr 17 00:14:45.010118 kernel: IPI shorthand broadcast: enabled
Apr 17 00:14:45.010272 kernel: sched_clock: Marking stable (12968049411, 558026325)->(14618228760, -1092153024)
Apr 17 00:14:45.010282 kernel: registered taskstats version 1
Apr 17 00:14:45.010288 kernel: Loading compiled-in X.509 certificates
Apr 17 00:14:45.010294 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 92f69eed5a22c94634d5240e5e65306547d4ba83'
Apr 17 00:14:45.010301 kernel: Demotion targets for Node 0: null
Apr 17 00:14:45.010307 kernel: Key type .fscrypt registered
Apr 17 00:14:45.010394 kernel: Key type fscrypt-provisioning registered
Apr 17 00:14:45.010404 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 00:14:45.010414 kernel: ima: Allocated hash algorithm: sha1
Apr 17 00:14:45.010423 kernel: ima: No architecture policies found
Apr 17 00:14:45.010432 kernel: clk: Disabling unused clocks
Apr 17 00:14:45.010440 kernel: Warning: unable to open an initial console.
Apr 17 00:14:45.010448 kernel: Freeing unused kernel image (initmem) memory: 46216K
Apr 17 00:14:45.010457 kernel: Write protecting the kernel read-only data: 40960k
Apr 17 00:14:45.010467 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K
Apr 17 00:14:45.010612 kernel: Run /init as init process
Apr 17 00:14:45.010621 kernel: with arguments:
Apr 17 00:14:45.010630 kernel: /init
Apr 17 00:14:45.010640 kernel: with environment:
Apr 17 00:14:45.010649 kernel: HOME=/
Apr 17 00:14:45.010657 kernel: TERM=linux
Apr 17 00:14:45.010667 systemd[1]: Successfully made /usr/ read-only.
Apr 17 00:14:45.011595 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 17 00:14:45.011677 systemd[1]: Detected virtualization kvm.
Apr 17 00:14:45.011688 systemd[1]: Detected architecture x86-64.
Apr 17 00:14:45.011699 systemd[1]: Running in initrd.
Apr 17 00:14:45.011710 systemd[1]: No hostname configured, using default hostname.
Apr 17 00:14:45.011723 systemd[1]: Hostname set to .
Apr 17 00:14:45.011799 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 00:14:45.011811 systemd[1]: Queued start job for default target initrd.target.
Apr 17 00:14:45.011823 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 00:14:45.011835 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 00:14:45.011847 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 00:14:45.011858 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 00:14:45.011870 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 00:14:45.011882 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 00:14:45.011958 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 00:14:45.011970 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 00:14:45.011980 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 00:14:45.011991 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 00:14:45.012001 systemd[1]: Reached target paths.target - Path Units.
Apr 17 00:14:45.012013 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 00:14:45.012023 systemd[1]: Reached target swap.target - Swaps.
Apr 17 00:14:45.012033 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 00:14:45.012110 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 00:14:45.012121 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 00:14:45.012250 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 00:14:45.012262 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 17 00:14:45.012274 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 00:14:45.012285 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 00:14:45.012296 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 00:14:45.012369 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 00:14:45.012380 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 17 00:14:45.012392 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 00:14:45.012403 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 17 00:14:45.012415 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Apr 17 00:14:45.012427 systemd[1]: Starting systemd-fsck-usr.service...
Apr 17 00:14:45.012560 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 00:14:45.012575 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 00:14:45.012587 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 00:14:45.012598 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 17 00:14:45.012722 systemd-journald[201]: Collecting audit messages is disabled.
Apr 17 00:14:45.012812 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 00:14:45.012824 systemd[1]: Finished systemd-fsck-usr.service.
Apr 17 00:14:45.012836 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 00:14:45.012848 systemd-journald[201]: Journal started
Apr 17 00:14:45.012930 systemd-journald[201]: Runtime Journal (/run/log/journal/8cf2f0c99d2f45a1ba352dcea47c6be1) is 6M, max 48.2M, 42.2M free.
Apr 17 00:14:45.043806 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 00:14:45.043449 systemd-modules-load[204]: Inserted module 'overlay'
Apr 17 00:14:45.098647 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 00:14:45.151773 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 00:14:45.155865 systemd-tmpfiles[217]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Apr 17 00:14:45.156384 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 00:14:45.199598 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 00:14:45.329471 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 17 00:14:45.335334 kernel: Bridge firewalling registered
Apr 17 00:14:45.335630 systemd-modules-load[204]: Inserted module 'br_netfilter'
Apr 17 00:14:45.337907 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 00:14:46.232594 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 00:14:46.306361 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 00:14:46.335441 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 00:14:46.439583 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 00:14:46.561399 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 00:14:46.583966 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 00:14:46.584702 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 00:14:46.616730 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 17 00:14:46.768609 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9
Apr 17 00:14:46.801820 systemd-resolved[243]: Positive Trust Anchors:
Apr 17 00:14:46.801830 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 00:14:46.801863 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 00:14:46.825000 systemd-resolved[243]: Defaulting to hostname 'linux'.
Apr 17 00:14:46.900612 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 00:14:47.002991 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 00:14:47.990355 kernel: SCSI subsystem initialized
Apr 17 00:14:48.049845 kernel: Loading iSCSI transport class v2.0-870.
Apr 17 00:14:48.097500 kernel: iscsi: registered transport (tcp)
Apr 17 00:14:48.285852 kernel: iscsi: registered transport (qla4xxx)
Apr 17 00:14:48.285999 kernel: QLogic iSCSI HBA Driver
Apr 17 00:14:48.600009 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 00:14:48.835590 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 00:14:48.906817 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 00:14:49.337810 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 17 00:14:49.370377 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 17 00:14:49.788493 kernel: raid6: avx512x4 gen() 17512 MB/s
Apr 17 00:14:49.810894 kernel: raid6: avx512x2 gen() 16986 MB/s
Apr 17 00:14:49.831734 kernel: raid6: avx512x1 gen() 31762 MB/s
Apr 17 00:14:49.897969 kernel: raid6: avx2x4   gen()  4845 MB/s
Apr 17 00:14:49.917359 kernel: raid6: avx2x2   gen() 13240 MB/s
Apr 17 00:14:49.945003 kernel: raid6: avx2x1   gen() 21098 MB/s
Apr 17 00:14:49.945334 kernel: raid6: using algorithm avx512x1 gen() 31762 MB/s
Apr 17 00:14:49.973059 kernel: raid6: .... xor() 16915 MB/s, rmw enabled
Apr 17 00:14:49.973890 kernel: raid6: using avx512x2 recovery algorithm
Apr 17 00:14:50.077678 kernel: xor: automatically using best checksumming function   avx
Apr 17 00:14:51.393476 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 17 00:14:51.438405 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 00:14:51.448923 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 00:14:51.623899 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Apr 17 00:14:51.654865 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 00:14:51.671990 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 17 00:14:51.875685 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation
Apr 17 00:14:52.243663 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 00:14:52.334014 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 00:14:53.203511 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 00:14:53.232521 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 17 00:14:53.410382 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 17 00:14:53.421855 kernel: cryptd: max_cpu_qlen set to 1000
Apr 17 00:14:53.593305 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Apr 17 00:14:53.644493 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 17 00:14:53.645671 kernel: AES CTR mode by8 optimization enabled
Apr 17 00:14:53.668691 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 17 00:14:53.668811 kernel: GPT:9289727 != 19775487
Apr 17 00:14:53.668826 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 17 00:14:53.676448 kernel: GPT:9289727 != 19775487
Apr 17 00:14:53.676998 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 00:14:53.785995 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 17 00:14:53.786074 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 17 00:14:53.677456 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 00:14:53.726880 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 00:14:53.815028 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 00:14:53.905120 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 17 00:14:53.989089 kernel: libata version 3.00 loaded.
Apr 17 00:14:54.076054 kernel: ahci 0000:00:1f.2: version 3.0
Apr 17 00:14:54.096879 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 17 00:14:54.137113 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Apr 17 00:14:54.137451 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Apr 17 00:14:54.138821 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 17 00:14:54.177647 kernel: scsi host0: ahci
Apr 17 00:14:54.222727 kernel: scsi host1: ahci
Apr 17 00:14:54.229446 kernel: scsi host2: ahci
Apr 17 00:14:54.235991 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 17 00:14:56.110310 kernel: scsi host3: ahci
Apr 17 00:14:56.117881 kernel: scsi host4: ahci
Apr 17 00:14:56.118301 kernel: scsi host5: ahci
Apr 17 00:14:56.119677 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Apr 17 00:14:56.119748 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Apr 17 00:14:56.119761 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Apr 17 00:14:56.119773 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Apr 17 00:14:56.119786 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Apr 17 00:14:56.119799 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Apr 17 00:14:56.119812 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 17 00:14:56.119824 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 17 00:14:56.119908 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 17 00:14:56.119922 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 17 00:14:56.119935 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 17 00:14:56.119946 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 17 00:14:56.119958 kernel: ata3.00: LPM support broken, forcing max_power
Apr 17 00:14:56.120033 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 17 00:14:56.120106 kernel: ata3.00: applying bridge limits
Apr 17 00:14:56.120119 kernel: ata3.00: LPM support broken, forcing max_power
Apr 17 00:14:56.120247 kernel: ata3.00: configured for UDMA/100
Apr 17 00:14:56.120262 kernel: scsi 2:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Apr 17 00:14:56.121436 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 17 00:14:56.126388 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 17 00:14:56.126408 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 17 00:14:56.184444 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 00:14:56.348318 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 17 00:14:56.381362 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 17 00:14:56.521098 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 17 00:14:56.550297 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 17 00:14:56.564382 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 17 00:14:56.630894 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 00:14:56.738047 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 00:14:56.777744 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 00:14:56.901901 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 17 00:14:56.943996 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 17 00:14:57.136426 disk-uuid[654]: Primary Header is updated.
Apr 17 00:14:57.136426 disk-uuid[654]: Secondary Entries is updated.
Apr 17 00:14:57.136426 disk-uuid[654]: Secondary Header is updated.
Apr 17 00:14:57.183484 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 17 00:14:57.240975 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 00:14:58.264713 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 17 00:14:58.276727 disk-uuid[655]: The operation has completed successfully.
Apr 17 00:14:58.573569 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 17 00:14:58.579363 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 17 00:14:59.333952 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 17 00:14:59.682116 sh[673]: Success
Apr 17 00:15:00.092373 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 17 00:15:00.092665 kernel: device-mapper: uevent: version 1.0.3
Apr 17 00:15:00.117468 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Apr 17 00:15:00.276410 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 17 00:15:01.827687 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 17 00:15:01.943999 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 17 00:15:02.086407 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 17 00:15:02.457540 kernel: BTRFS: device fsid d1542dca-1171-4bcf-9aae-d85dd05fe503 devid 1 transid 32 /dev/mapper/usr (253:0) scanned by mount (685)
Apr 17 00:15:02.482925 kernel: BTRFS info (device dm-0): first mount of filesystem d1542dca-1171-4bcf-9aae-d85dd05fe503
Apr 17 00:15:02.488478 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 17 00:15:02.832424 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Apr 17 00:15:02.833431 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Apr 17 00:15:02.984548 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 17 00:15:03.077537 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Apr 17 00:15:03.114091 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 17 00:15:03.151597 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 17 00:15:03.198866 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 17 00:15:03.791982 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (716)
Apr 17 00:15:03.821483 kernel: BTRFS info (device vda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 17 00:15:03.821772 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 00:15:03.964482 kernel: BTRFS info (device vda6): turning on async discard
Apr 17 00:15:03.969328 kernel: BTRFS info (device vda6): enabling free space tree
Apr 17 00:15:04.126391 kernel: BTRFS info (device vda6): last unmount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 17 00:15:04.160383 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 17 00:15:04.224578 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 17 00:15:07.168954 ignition[751]: Ignition 2.22.0
Apr 17 00:15:07.169087 ignition[751]: Stage: fetch-offline
Apr 17 00:15:07.170452 ignition[751]: no configs at "/usr/lib/ignition/base.d"
Apr 17 00:15:07.170515 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 00:15:07.179930 ignition[751]: parsed url from cmdline: ""
Apr 17 00:15:07.184360 ignition[751]: no config URL provided
Apr 17 00:15:07.184392 ignition[751]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 00:15:07.184515 ignition[751]: no config at "/usr/lib/ignition/user.ign"
Apr 17 00:15:07.188601 ignition[751]: op(1): [started]  loading QEMU firmware config module
Apr 17 00:15:07.188610 ignition[751]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 17 00:15:07.307545 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 00:15:07.430501 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 00:15:07.464896 ignition[751]: op(1): [finished] loading QEMU firmware config module
Apr 17 00:15:08.099574 systemd-networkd[862]: lo: Link UP
Apr 17 00:15:08.100612 systemd-networkd[862]: lo: Gained carrier
Apr 17 00:15:08.118380 systemd-networkd[862]: Enumeration completed
Apr 17 00:15:08.126347 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 00:15:08.173996 systemd[1]: Reached target network.target - Network.
Apr 17 00:15:08.186369 systemd-networkd[862]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 00:15:08.212406 ignition[751]: parsing config with SHA512: ee1a4d72468c9cee789771062e3ef08b9011092aefb3eb73584bc478127749fc8eef91a1b1c8eaef8ca47169f1efb7d89c1a148c4bc4113ccea1ccea3ce9bd73
Apr 17 00:15:08.186373 systemd-networkd[862]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 00:15:08.199513 systemd-networkd[862]: eth0: Link UP
Apr 17 00:15:08.200441 systemd-networkd[862]: eth0: Gained carrier
Apr 17 00:15:08.200454 systemd-networkd[862]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 00:15:08.300989 systemd-networkd[862]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 17 00:15:08.709054 unknown[751]: fetched base config from "system"
Apr 17 00:15:08.709710 unknown[751]: fetched user config from "qemu"
Apr 17 00:15:08.711360 ignition[751]: fetch-offline: fetch-offline passed
Apr 17 00:15:08.711789 ignition[751]: Ignition finished successfully
Apr 17 00:15:08.812484 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 00:15:08.870096 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 17 00:15:09.086410 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 17 00:15:09.734639 systemd-networkd[862]: eth0: Gained IPv6LL
Apr 17 00:15:11.209078 ignition[867]: Ignition 2.22.0
Apr 17 00:15:11.210524 ignition[867]: Stage: kargs
Apr 17 00:15:11.210992 ignition[867]: no configs at "/usr/lib/ignition/base.d"
Apr 17 00:15:11.211001 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 00:15:11.217953 ignition[867]: kargs: kargs passed
Apr 17 00:15:11.307272 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 17 00:15:11.220945 ignition[867]: Ignition finished successfully
Apr 17 00:15:11.509404 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 17 00:15:12.529950 ignition[875]: Ignition 2.22.0
Apr 17 00:15:12.530404 ignition[875]: Stage: disks
Apr 17 00:15:12.589656 ignition[875]: no configs at "/usr/lib/ignition/base.d"
Apr 17 00:15:12.597859 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 00:15:12.654536 ignition[875]: disks: disks passed
Apr 17 00:15:12.655555 ignition[875]: Ignition finished successfully
Apr 17 00:15:12.691645 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 17 00:15:12.712542 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 17 00:15:12.774319 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 00:15:12.787965 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 00:15:12.816951 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 00:15:12.842021 systemd[1]: Reached target basic.target - Basic System.
Apr 17 00:15:12.885040 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 17 00:15:13.534956 systemd-fsck[885]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Apr 17 00:15:13.580379 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 17 00:15:13.629847 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 17 00:15:16.378047 kernel: EXT4-fs (vda9): mounted filesystem ee420a69-62b9-42f4-84c7-ea3f2d87c569 r/w with ordered data mode. Quota mode: none.
Apr 17 00:15:16.384287 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 17 00:15:16.436077 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 17 00:15:16.589057 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 00:15:16.617077 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 17 00:15:16.646101 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 17 00:15:16.646479 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 17 00:15:16.646545 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 00:15:16.823041 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (894)
Apr 17 00:15:16.853648 kernel: BTRFS info (device vda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 17 00:15:16.857026 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 00:15:16.899347 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 17 00:15:16.930389 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 17 00:15:16.944855 kernel: BTRFS info (device vda6): turning on async discard
Apr 17 00:15:16.945016 kernel: BTRFS info (device vda6): enabling free space tree
Apr 17 00:15:16.968051 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 00:15:18.001401 initrd-setup-root[919]: cut: /sysroot/etc/passwd: No such file or directory
Apr 17 00:15:18.162371 initrd-setup-root[926]: cut: /sysroot/etc/group: No such file or directory
Apr 17 00:15:18.367537 initrd-setup-root[933]: cut: /sysroot/etc/shadow: No such file or directory
Apr 17 00:15:18.603091 initrd-setup-root[940]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 17 00:15:24.412609 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 17 00:15:24.531218 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 17 00:15:24.579709 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 17 00:15:24.746253 kernel: BTRFS info (device vda6): last unmount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 17 00:15:24.746233 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 17 00:15:24.932374 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 17 00:15:25.359339 ignition[1009]: INFO : Ignition 2.22.0
Apr 17 00:15:25.365539 ignition[1009]: INFO : Stage: mount
Apr 17 00:15:25.365539 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 00:15:25.365539 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 00:15:25.381832 ignition[1009]: INFO : mount: mount passed
Apr 17 00:15:25.381832 ignition[1009]: INFO : Ignition finished successfully
Apr 17 00:15:25.387540 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 17 00:15:25.454358 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 17 00:15:25.783720 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 00:15:26.180776 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1023)
Apr 17 00:15:26.196658 kernel: BTRFS info (device vda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 17 00:15:26.202056 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 00:15:26.299480 kernel: BTRFS info (device vda6): turning on async discard
Apr 17 00:15:26.300420 kernel: BTRFS info (device vda6): enabling free space tree
Apr 17 00:15:26.317914 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 00:15:26.717581 ignition[1039]: INFO : Ignition 2.22.0
Apr 17 00:15:26.717581 ignition[1039]: INFO : Stage: files
Apr 17 00:15:26.737043 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 00:15:26.746932 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 00:15:26.767075 ignition[1039]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 00:15:26.780917 ignition[1039]: INFO : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Apr 17 00:15:26.780917 ignition[1039]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 00:15:26.834668 ignition[1039]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 00:15:26.841548 ignition[1039]: INFO : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Apr 17 00:15:26.887447 unknown[1039]: wrote ssh authorized keys file for user: core
Apr 17 00:15:26.898267 ignition[1039]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 00:15:26.932877 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 00:15:26.932877 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 17 00:15:27.390090 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 00:15:30.606904 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 00:15:30.623297 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Apr 17 00:15:30.655811 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 00:15:30.655811 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Apr 17 00:15:30.680716 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 00:15:30.680716 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 00:15:30.705618 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 00:15:30.717721 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 00:15:30.732608 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 00:15:30.758423 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 00:15:30.769630 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 00:15:30.769630 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 00:15:30.801091 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 00:15:30.801091 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 00:15:30.801091 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 17 00:15:31.612736 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 17 00:15:49.526234 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 00:15:49.526234 ignition[1039]: INFO : files: op(b): [started]  processing unit "prepare-helm.service"
Apr 17 00:15:49.549536 ignition[1039]: INFO : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 00:15:49.549536 ignition[1039]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 00:15:49.549536 ignition[1039]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 17 00:15:49.549536 ignition[1039]: INFO : files: op(d): [started]  processing unit "coreos-metadata.service"
Apr 17 00:15:49.549536 ignition[1039]: INFO : files: op(d): op(e): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 17 00:15:49.549536 ignition[1039]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 17 00:15:49.549536 ignition[1039]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 17 00:15:49.549536 ignition[1039]: INFO : files: op(f): [started]  setting preset to disabled for "coreos-metadata.service"
Apr 17 00:15:50.422562 ignition[1039]: INFO : files: op(f): op(10): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Apr 17 00:15:50.663053 ignition[1039]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 17 00:15:50.680038 ignition[1039]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 17 00:15:50.680038 ignition[1039]: INFO : files: op(11): [started]  setting preset to enabled for "prepare-helm.service"
Apr 17 00:15:50.680038 ignition[1039]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 00:15:50.722122 ignition[1039]: INFO : files: createResultFile: createFiles: op(12): [started]  writing file "/sysroot/etc/.ignition-result.json"
Apr 17 00:15:50.722122 ignition[1039]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 00:15:50.722122 ignition[1039]: INFO : files: files passed
Apr 17 00:15:50.722122 ignition[1039]: INFO : Ignition finished successfully
Apr 17 00:15:50.774040 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 00:15:50.830940 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 00:15:50.911689 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 00:15:50.977276 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 00:15:50.987349 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 00:15:51.025607 initrd-setup-root-after-ignition[1069]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 17 00:15:51.117999 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 00:15:51.136430 initrd-setup-root-after-ignition[1071]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 00:15:51.136430 initrd-setup-root-after-ignition[1071]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 00:15:51.182656 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 00:15:51.306692 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 00:15:51.431827 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 00:15:52.791063 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 00:15:52.800089 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 00:15:52.831890 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 00:15:52.863874 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 00:15:52.865785 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 00:15:52.901011 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 00:15:53.234739 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 00:15:53.305815 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 00:15:53.641601 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 00:15:53.654360 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 00:15:53.663946 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 00:15:53.679612 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 00:15:53.683735 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 00:15:53.707282 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 00:15:53.717042 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 00:15:53.722204 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 00:15:53.819868 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 00:15:53.831249 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 00:15:53.840611 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 17 00:15:53.863069 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 00:15:53.891356 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 00:15:53.911199 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 00:15:53.912986 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 00:15:54.021772 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 00:15:54.043345 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 00:15:54.047563 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 00:15:54.077088 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 00:15:54.117884 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 00:15:54.200805 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 00:15:54.205060 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 00:15:54.234843 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 00:15:54.236698 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 00:15:54.259487 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 00:15:54.265293 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 00:15:54.283802 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 00:15:54.291771 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 00:15:54.307325 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 00:15:54.331856 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 00:15:54.411366 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 00:15:54.438874 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 00:15:54.441998 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 00:15:54.506120 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 00:15:54.508346 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 00:15:54.590967 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 00:15:54.610289 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 00:15:54.657950 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 00:15:54.660246 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 00:15:54.782313 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 00:15:54.803698 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 00:15:54.816699 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 00:15:54.827038 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 00:15:54.865361 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 00:15:54.865675 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 00:15:54.910437 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 00:15:54.912468 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 00:15:55.077520 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 00:15:55.107881 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 00:15:55.108498 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 00:15:55.474951 ignition[1095]: INFO : Ignition 2.22.0
Apr 17 00:15:55.481438 ignition[1095]: INFO : Stage: umount
Apr 17 00:15:55.481438 ignition[1095]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 00:15:55.481438 ignition[1095]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 00:15:55.510762 ignition[1095]: INFO : umount: umount passed
Apr 17 00:15:55.510762 ignition[1095]: INFO : Ignition finished successfully
Apr 17 00:15:55.599776 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 00:15:55.604455 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 00:15:55.622231 systemd[1]: Stopped target network.target - Network.
Apr 17 00:15:55.626444 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 00:15:55.629587 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 00:15:55.649979 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 00:15:55.656991 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 00:15:55.668435 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 00:15:55.668543 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 00:15:55.716894 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 00:15:55.719754 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 00:15:55.806721 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 00:15:55.823661 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 00:15:55.858405 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 00:15:55.867308 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 00:15:55.910816 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 00:15:55.911200 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 00:15:55.991883 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 17 00:15:56.029972 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 00:15:56.030657 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 00:15:56.084814 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 17 00:15:56.116601 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 17 00:15:56.125797 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 00:15:56.128643 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 00:15:56.184801 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 00:15:56.194724 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 00:15:56.197999 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 00:15:56.213276 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 00:15:56.213407 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 00:15:56.273907 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 00:15:56.274786 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 00:15:56.286874 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 00:15:56.290573 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 00:15:56.335061 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 00:15:56.405574 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 17 00:15:56.405719 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 17 00:15:56.428819 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 00:15:56.431881 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 00:15:56.523004 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 00:15:56.523232 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 00:15:56.533381 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 00:15:56.533485 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 00:15:56.546067 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 00:15:56.546417 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 00:15:56.572341 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 00:15:56.572563 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 00:15:56.591572 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 00:15:56.593468 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 00:15:56.669305 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 00:15:56.681833 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 17 00:15:56.682907 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 00:15:56.699252 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 00:15:56.700578 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 00:15:56.730050 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 17 00:15:56.730528 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 00:15:56.809475 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 00:15:56.809571 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 00:15:56.828759 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 00:15:56.838492 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 00:15:56.859391 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Apr 17 00:15:56.859487 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Apr 17 00:15:56.859513 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 17 00:15:56.859542 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 17 00:15:56.859891 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 00:15:56.860048 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 00:15:56.879419 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 00:15:56.879656 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 00:15:56.908681 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 00:15:57.011816 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 00:15:57.201652 systemd[1]: Switching root.
Apr 17 00:15:57.400044 systemd-journald[201]: Journal stopped
Apr 17 00:16:11.030575 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Apr 17 00:16:11.031935 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 00:16:11.031985 kernel: SELinux: policy capability open_perms=1
Apr 17 00:16:11.031998 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 00:16:11.032058 kernel: SELinux: policy capability always_check_network=0
Apr 17 00:16:11.032108 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 00:16:11.032220 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 00:16:11.032264 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 00:16:11.032275 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 00:16:11.032290 kernel: SELinux: policy capability userspace_initial_context=0
Apr 17 00:16:11.032302 kernel: audit: type=1403 audit(1776384958.406:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 00:16:11.032315 systemd[1]: Successfully loaded SELinux policy in 509.214ms.
Apr 17 00:16:11.032361 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 84.022ms.
Apr 17 00:16:11.032377 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 17 00:16:11.032424 systemd[1]: Detected virtualization kvm.
Apr 17 00:16:11.032437 systemd[1]: Detected architecture x86-64.
Apr 17 00:16:11.032448 systemd[1]: Detected first boot.
Apr 17 00:16:11.032460 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 00:16:11.032472 zram_generator::config[1141]: No configuration found.
Apr 17 00:16:11.032485 kernel: Guest personality initialized and is inactive
Apr 17 00:16:11.032495 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 17 00:16:11.032571 kernel: Initialized host personality
Apr 17 00:16:11.032615 kernel: NET: Registered PF_VSOCK protocol family
Apr 17 00:16:11.032627 systemd[1]: Populated /etc with preset unit settings.
Apr 17 00:16:11.032640 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 17 00:16:11.032653 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 00:16:11.032665 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 00:16:11.032676 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 00:16:11.032689 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 00:16:11.032701 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 00:16:11.032713 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 00:16:11.032758 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 00:16:11.032770 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 00:16:11.032783 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 00:16:11.032796 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 00:16:11.032807 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 00:16:11.032823 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 00:16:11.032835 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 00:16:11.032847 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 00:16:11.035333 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 00:16:11.035416 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 00:16:11.035430 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 00:16:11.035444 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 00:16:11.035464 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 00:16:11.035478 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 00:16:11.035491 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 00:16:11.035504 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 00:16:11.035559 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 00:16:11.035575 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 00:16:11.035621 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 00:16:11.035635 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 00:16:11.035652 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 00:16:11.035666 systemd[1]: Reached target swap.target - Swaps.
Apr 17 00:16:11.035680 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 00:16:11.035693 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 00:16:11.035710 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 17 00:16:11.035757 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 00:16:11.035770 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 00:16:11.035783 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 00:16:11.035797 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 00:16:11.035810 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 00:16:11.035823 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 00:16:11.035836 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 00:16:11.035849 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 00:16:11.035863 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 00:16:11.039998 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 00:16:11.040258 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 00:16:11.040275 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 00:16:11.040287 systemd[1]: Reached target machines.target - Containers.
Apr 17 00:16:11.040299 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 00:16:11.040311 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 00:16:11.040323 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 00:16:11.040335 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 00:16:11.040347 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 00:16:11.040405 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 00:16:11.040414 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 00:16:11.040423 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 00:16:11.040432 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 00:16:11.040441 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 00:16:11.040449 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 00:16:11.040461 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 00:16:11.040473 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 00:16:11.046457 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 00:16:11.046604 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 17 00:16:11.046617 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 00:16:11.046630 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 00:16:11.046644 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 00:16:11.046698 kernel: ACPI: bus type drm_connector registered
Apr 17 00:16:11.046713 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 00:16:11.046725 kernel: loop: module loaded
Apr 17 00:16:11.046736 kernel: fuse: init (API version 7.41)
Apr 17 00:16:11.046795 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 17 00:16:11.046808 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 00:16:11.046821 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 00:16:11.046834 systemd[1]: Stopped verity-setup.service.
Apr 17 00:16:11.046846 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 00:16:11.046896 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 00:16:11.046909 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 00:16:11.047058 systemd-journald[1218]: Collecting audit messages is disabled.
Apr 17 00:16:11.047091 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 00:16:11.048896 systemd-journald[1218]: Journal started
Apr 17 00:16:11.049056 systemd-journald[1218]: Runtime Journal (/run/log/journal/8cf2f0c99d2f45a1ba352dcea47c6be1) is 6M, max 48.2M, 42.2M free.
Apr 17 00:16:07.958437 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 00:16:08.025420 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 17 00:16:08.055436 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 00:16:08.061432 systemd[1]: systemd-journald.service: Consumed 3.448s CPU time.
Apr 17 00:16:11.060362 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 00:16:11.089052 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 00:16:11.113840 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 00:16:11.139994 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 00:16:11.216884 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 00:16:11.243747 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 00:16:11.264563 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 00:16:11.271828 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 00:16:11.381560 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 00:16:11.387073 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 00:16:11.412757 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 00:16:11.424543 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 00:16:11.462690 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 00:16:11.470716 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 00:16:11.533829 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 00:16:11.576912 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 00:16:11.613783 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 00:16:11.650190 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 00:16:11.703320 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 00:16:11.812486 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 00:16:11.828857 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 00:16:11.885274 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 17 00:16:12.228388 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 00:16:12.249308 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 00:16:12.267923 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 00:16:12.279825 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 00:16:12.289482 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 00:16:12.302384 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 17 00:16:12.429231 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 00:16:12.445846 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 00:16:12.476876 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 00:16:12.515855 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 00:16:12.528744 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 00:16:12.613090 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 00:16:12.620778 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 00:16:12.662485 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 00:16:12.711950 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 00:16:12.737716 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 00:16:12.790435 systemd-journald[1218]: Time spent on flushing to /var/log/journal/8cf2f0c99d2f45a1ba352dcea47c6be1 is 84.825ms for 989 entries.
Apr 17 00:16:12.790435 systemd-journald[1218]: System Journal (/var/log/journal/8cf2f0c99d2f45a1ba352dcea47c6be1) is 8M, max 195.6M, 187.6M free.
Apr 17 00:16:12.931826 systemd-journald[1218]: Received client request to flush runtime journal.
Apr 17 00:16:12.792922 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 00:16:12.805927 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 00:16:12.821317 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 00:16:12.864504 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 00:16:12.892831 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 00:16:12.933011 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 17 00:16:12.992010 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 00:16:13.131116 kernel: loop0: detected capacity change from 0 to 128560
Apr 17 00:16:13.200221 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 00:16:13.284282 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 00:16:13.285193 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 17 00:16:13.324463 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Apr 17 00:16:13.324998 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Apr 17 00:16:13.376090 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 00:16:13.422825 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 00:16:13.472898 kernel: loop1: detected capacity change from 0 to 219192
Apr 17 00:16:13.486299 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 00:16:13.711360 kernel: loop2: detected capacity change from 0 to 110984
Apr 17 00:16:13.863989 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 00:16:13.933289 kernel: loop3: detected capacity change from 0 to 128560
Apr 17 00:16:13.939906 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 00:16:14.434332 kernel: loop4: detected capacity change from 0 to 219192
Apr 17 00:16:14.493663 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Apr 17 00:16:14.493726 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Apr 17 00:16:14.593281 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 00:16:14.689997 kernel: loop5: detected capacity change from 0 to 110984 Apr 17 00:16:14.889794 (sd-merge)[1284]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 17 00:16:14.890522 (sd-merge)[1284]: Merged extensions into '/usr'. Apr 17 00:16:14.972281 systemd[1]: Reload requested from client PID 1260 ('systemd-sysext') (unit systemd-sysext.service)... Apr 17 00:16:14.972517 systemd[1]: Reloading... Apr 17 00:16:18.020923 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 2874204331 wd_nsec: 2874203397 Apr 17 00:16:19.029196 zram_generator::config[1314]: No configuration found. Apr 17 00:16:20.651615 ldconfig[1255]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 17 00:16:23.776633 systemd[1]: Reloading finished in 8802 ms. Apr 17 00:16:23.889050 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 17 00:16:23.914718 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 17 00:16:24.103323 systemd[1]: Starting ensure-sysext.service... Apr 17 00:16:24.133904 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 00:16:24.266038 systemd[1]: Reload requested from client PID 1351 ('systemctl') (unit ensure-sysext.service)... Apr 17 00:16:24.266863 systemd[1]: Reloading... Apr 17 00:16:24.592428 systemd-tmpfiles[1352]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 17 00:16:24.592522 systemd-tmpfiles[1352]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 17 00:16:24.592903 systemd-tmpfiles[1352]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Apr 17 00:16:24.593373 systemd-tmpfiles[1352]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 17 00:16:24.625073 systemd-tmpfiles[1352]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 17 00:16:24.626458 systemd-tmpfiles[1352]: ACLs are not supported, ignoring. Apr 17 00:16:24.626580 systemd-tmpfiles[1352]: ACLs are not supported, ignoring. Apr 17 00:16:24.743773 systemd-tmpfiles[1352]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 00:16:24.743790 systemd-tmpfiles[1352]: Skipping /boot Apr 17 00:16:24.928826 systemd-tmpfiles[1352]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 00:16:24.928845 systemd-tmpfiles[1352]: Skipping /boot Apr 17 00:16:25.064783 zram_generator::config[1378]: No configuration found. Apr 17 00:16:26.588770 systemd[1]: Reloading finished in 2321 ms. Apr 17 00:16:26.623909 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 00:16:26.855408 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 17 00:16:26.874299 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 17 00:16:26.887565 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 17 00:16:26.905330 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 00:16:26.935524 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 17 00:16:26.951306 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 00:16:26.953768 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 00:16:26.974846 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Apr 17 00:16:26.991677 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 00:16:27.012486 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 00:16:27.021536 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 00:16:27.021781 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 17 00:16:27.021887 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 00:16:27.095835 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 17 00:16:27.106482 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 00:16:27.106790 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 00:16:27.164271 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 00:16:27.164650 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 00:16:27.188007 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 00:16:27.205974 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 00:16:27.211052 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Apr 17 00:16:27.211403 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 00:16:27.215797 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 17 00:16:27.228863 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 00:16:27.229322 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 00:16:27.321401 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 00:16:27.321750 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 00:16:27.334399 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 00:16:27.334627 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 00:16:27.371054 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 17 00:16:27.513045 augenrules[1452]: No rules Apr 17 00:16:27.524454 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 17 00:16:27.536897 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 00:16:27.546007 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 17 00:16:27.559342 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 17 00:16:27.673839 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 00:16:27.677051 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 00:16:27.693613 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 00:16:27.718934 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 17 00:16:27.804401 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Apr 17 00:16:27.830633 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 00:16:27.874274 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 00:16:27.874936 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 17 00:16:27.875314 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 17 00:16:27.875345 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 00:16:27.885605 systemd[1]: Finished ensure-sysext.service. Apr 17 00:16:27.912461 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 00:16:27.912989 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 00:16:27.937378 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 17 00:16:27.971823 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 17 00:16:27.987755 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 00:16:27.988308 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 00:16:28.011645 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 00:16:28.012077 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 00:16:28.029033 systemd-resolved[1419]: Positive Trust Anchors: Apr 17 00:16:28.029089 systemd-resolved[1419]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 00:16:28.029235 systemd-resolved[1419]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 00:16:28.041524 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 17 00:16:28.041644 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 17 00:16:28.106747 systemd-resolved[1419]: Defaulting to hostname 'linux'. Apr 17 00:16:28.106763 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 17 00:16:28.125511 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 00:16:28.133278 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 00:16:28.274091 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 17 00:16:28.324351 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 00:16:28.341562 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 17 00:16:28.388553 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 17 00:16:28.564086 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 17 00:16:28.571459 systemd[1]: Reached target time-set.target - System Time Set.
Apr 17 00:16:28.580613 systemd-udevd[1475]: Using default interface naming scheme 'v255'. Apr 17 00:16:28.983798 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 00:16:28.991550 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 00:16:28.999888 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 17 00:16:29.010868 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 17 00:16:29.033825 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Apr 17 00:16:29.090833 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 17 00:16:29.103713 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 17 00:16:29.114529 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 17 00:16:29.123531 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 17 00:16:29.123709 systemd[1]: Reached target paths.target - Path Units. Apr 17 00:16:29.137087 systemd[1]: Reached target timers.target - Timer Units. Apr 17 00:16:29.235539 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 17 00:16:29.289325 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 17 00:16:29.326474 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 17 00:16:29.340987 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 17 00:16:29.414404 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 17 00:16:29.472754 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Apr 17 00:16:29.482621 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 17 00:16:29.524100 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 00:16:29.533872 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 17 00:16:29.562597 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 00:16:29.568753 systemd[1]: Reached target basic.target - Basic System. Apr 17 00:16:29.575754 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 17 00:16:29.575786 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 17 00:16:29.583494 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 17 00:16:29.604039 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 17 00:16:29.632013 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 17 00:16:29.675859 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 17 00:16:29.689951 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 17 00:16:29.693713 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Apr 17 00:16:29.722075 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 17 00:16:29.737664 jq[1512]: false Apr 17 00:16:29.746297 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 17 00:16:29.766531 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 17 00:16:29.788440 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 17 00:16:29.813257 systemd[1]: Starting systemd-logind.service - User Login Management... 
Apr 17 00:16:29.832571 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 17 00:16:29.834638 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 17 00:16:29.899547 systemd[1]: Starting update-engine.service - Update Engine... Apr 17 00:16:29.918666 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Refreshing passwd entry cache Apr 17 00:16:29.916914 oslogin_cache_refresh[1514]: Refreshing passwd entry cache Apr 17 00:16:29.924746 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 17 00:16:29.988071 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Failure getting users, quitting Apr 17 00:16:29.988071 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 17 00:16:29.934900 oslogin_cache_refresh[1514]: Failure getting users, quitting Apr 17 00:16:30.144766 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Refreshing group entry cache Apr 17 00:16:30.144766 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Failure getting groups, quitting Apr 17 00:16:30.144766 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 17 00:16:30.193279 extend-filesystems[1513]: Found /dev/vda6 Apr 17 00:16:30.193279 extend-filesystems[1513]: Found /dev/vda9 Apr 17 00:16:30.072570 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 17 00:16:29.988077 oslogin_cache_refresh[1514]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Apr 17 00:16:30.432931 extend-filesystems[1513]: Checking size of /dev/vda9 Apr 17 00:16:30.085805 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 17 00:16:30.014379 oslogin_cache_refresh[1514]: Refreshing group entry cache Apr 17 00:16:30.480891 jq[1526]: true Apr 17 00:16:30.093334 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 17 00:16:30.020363 oslogin_cache_refresh[1514]: Failure getting groups, quitting Apr 17 00:16:30.094090 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Apr 17 00:16:30.020378 oslogin_cache_refresh[1514]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 17 00:16:30.116876 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Apr 17 00:16:30.181967 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 17 00:16:30.190642 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 17 00:16:30.292921 systemd[1]: motdgen.service: Deactivated successfully. Apr 17 00:16:30.295388 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 17 00:16:30.668754 tar[1535]: linux-amd64/LICENSE Apr 17 00:16:30.670807 extend-filesystems[1513]: Resized partition /dev/vda9 Apr 17 00:16:30.683096 tar[1535]: linux-amd64/helm Apr 17 00:16:30.688089 jq[1537]: true Apr 17 00:16:30.708305 extend-filesystems[1552]: resize2fs 1.47.3 (8-Jul-2025) Apr 17 00:16:30.731245 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 17 00:16:31.180213 update_engine[1524]: I20260417 00:16:31.179008 1524 main.cc:92] Flatcar Update Engine starting Apr 17 00:16:31.189963 systemd-logind[1520]: New seat seat0. 
Apr 17 00:16:31.308770 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 17 00:16:31.309038 update_engine[1524]: I20260417 00:16:31.304972 1524 update_check_scheduler.cc:74] Next update check in 5m47s Apr 17 00:16:31.209770 systemd[1]: Started systemd-logind.service - User Login Management. Apr 17 00:16:31.238859 dbus-daemon[1510]: [system] SELinux support is enabled Apr 17 00:16:31.239521 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 17 00:16:31.279520 dbus-daemon[1510]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 17 00:16:31.251934 systemd-networkd[1508]: lo: Link UP Apr 17 00:16:31.251939 systemd-networkd[1508]: lo: Gained carrier Apr 17 00:16:31.278674 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 17 00:16:31.278783 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 17 00:16:31.304824 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 17 00:16:31.322876 extend-filesystems[1552]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 17 00:16:31.322876 extend-filesystems[1552]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 17 00:16:31.322876 extend-filesystems[1552]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 17 00:16:31.304907 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 17 00:16:31.392032 extend-filesystems[1513]: Resized filesystem in /dev/vda9 Apr 17 00:16:31.318453 systemd[1]: Started update-engine.service - Update Engine. 
Apr 17 00:16:31.322514 systemd-networkd[1508]: Enumeration completed Apr 17 00:16:31.332426 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 00:16:31.397030 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 17 00:16:31.397688 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 17 00:16:31.535916 systemd[1]: Reached target network.target - Network. Apr 17 00:16:31.582613 systemd[1]: Starting containerd.service - containerd container runtime... Apr 17 00:16:31.703020 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 17 00:16:31.725107 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 17 00:16:31.782864 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 17 00:16:31.839532 bash[1573]: Updated "/home/core/.ssh/authorized_keys" Apr 17 00:16:31.914945 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 17 00:16:31.950048 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 17 00:16:32.184981 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 17 00:16:32.297638 (ntainerd)[1581]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 17 00:16:32.332018 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 17 00:16:32.470947 sshd_keygen[1544]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 17 00:16:32.799244 kernel: mousedev: PS/2 mouse device common for all mice Apr 17 00:16:33.287781 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Apr 17 00:16:33.890988 systemd-networkd[1508]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 00:16:33.891037 systemd-networkd[1508]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 00:16:33.917112 systemd-networkd[1508]: eth0: Link UP Apr 17 00:16:33.925300 systemd-networkd[1508]: eth0: Gained carrier Apr 17 00:16:33.925328 systemd-networkd[1508]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 00:16:34.072678 systemd-networkd[1508]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 17 00:16:34.073973 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 17 00:16:34.090073 systemd-timesyncd[1473]: Network configuration changed, trying to establish connection. Apr 17 00:16:35.195461 systemd-timesyncd[1473]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 17 00:16:35.209889 systemd-timesyncd[1473]: Initial clock synchronization to Fri 2026-04-17 00:16:35.194422 UTC. Apr 17 00:16:35.214654 systemd-resolved[1419]: Clock change detected. Flushing caches. Apr 17 00:16:35.251953 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 17 00:16:35.318409 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 17 00:16:35.521656 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 17 00:16:35.601683 systemd[1]: Started sshd@0-10.0.0.6:22-10.0.0.1:56700.service - OpenSSH per-connection server daemon (10.0.0.1:56700). Apr 17 00:16:35.758318 systemd[1]: issuegen.service: Deactivated successfully. Apr 17 00:16:35.758649 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 00:16:35.785781 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Apr 17 00:16:35.950666 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 17 00:16:35.979027 kernel: ACPI: button: Power Button [PWRF] Apr 17 00:16:36.013302 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 17 00:16:36.368376 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 17 00:16:36.378810 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 17 00:16:36.507020 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 17 00:16:36.689842 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 17 00:16:36.777482 locksmithd[1575]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 17 00:16:36.798589 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 17 00:16:36.807655 systemd[1]: Reached target getty.target - Login Prompts. Apr 17 00:16:36.873562 systemd-networkd[1508]: eth0: Gained IPv6LL Apr 17 00:16:37.064669 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 17 00:16:37.091761 systemd[1]: Reached target network-online.target - Network is Online. Apr 17 00:16:37.182593 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 17 00:16:37.254479 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 56700 ssh2: RSA SHA256:MHfIcFfe65TofFgVCIPqFAtVPMQGq/OUEkQWKadPMKg Apr 17 00:16:37.283404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 00:16:37.305197 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 17 00:16:37.317003 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:16:37.535474 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 17 00:16:37.544950 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Apr 17 00:16:37.559747 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 00:16:38.640445 systemd-logind[1520]: New session 1 of user core. Apr 17 00:16:38.763333 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 17 00:16:38.763757 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 17 00:16:38.775930 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 17 00:16:39.481019 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 00:16:39.767278 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 17 00:16:40.088395 (systemd)[1658]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 00:16:40.327782 systemd-logind[1520]: New session c1 of user core. Apr 17 00:16:41.812826 containerd[1581]: time="2026-04-17T00:16:41Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 17 00:16:41.895895 containerd[1581]: time="2026-04-17T00:16:41.893753085Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Apr 17 00:16:42.272739 systemd[1658]: Queued start job for default target default.target. Apr 17 00:16:42.302360 systemd[1658]: Created slice app.slice - User Application Slice. Apr 17 00:16:42.302440 systemd[1658]: Reached target paths.target - Paths. Apr 17 00:16:42.302477 systemd[1658]: Reached target timers.target - Timers. Apr 17 00:16:42.409904 systemd[1658]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 00:16:42.664716 systemd[1658]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 00:16:42.799331 systemd[1658]: Reached target sockets.target - Sockets. 
Apr 17 00:16:42.799887 systemd[1658]: Reached target basic.target - Basic System. Apr 17 00:16:42.799916 systemd[1658]: Reached target default.target - Main User Target. Apr 17 00:16:42.800024 systemd[1658]: Startup finished in 2.217s. Apr 17 00:16:42.820573 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 17 00:16:43.059539 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 17 00:16:43.093562 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 00:16:43.663429 systemd[1]: Started sshd@1-10.0.0.6:22-10.0.0.1:40818.service - OpenSSH per-connection server daemon (10.0.0.1:40818). Apr 17 00:16:45.462181 containerd[1581]: time="2026-04-17T00:16:45.423618618Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="631.135µs" Apr 17 00:16:45.462181 containerd[1581]: time="2026-04-17T00:16:45.451701096Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 17 00:16:45.462181 containerd[1581]: time="2026-04-17T00:16:45.451967287Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 17 00:16:45.502595 systemd-logind[1520]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 17 00:16:45.650649 containerd[1581]: time="2026-04-17T00:16:45.650192411Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 17 00:16:45.690007 containerd[1581]: time="2026-04-17T00:16:45.678804856Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 17 00:16:45.775747 containerd[1581]: time="2026-04-17T00:16:45.767231436Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 17 00:16:45.850693 containerd[1581]: time="2026-04-17T00:16:45.815655348Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 17 00:16:45.909866 containerd[1581]: time="2026-04-17T00:16:45.854606002Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 17 00:16:46.351280 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 40818 ssh2: RSA SHA256:MHfIcFfe65TofFgVCIPqFAtVPMQGq/OUEkQWKadPMKg Apr 17 00:16:46.644916 containerd[1581]: time="2026-04-17T00:16:46.628310708Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 17 00:16:47.259997 containerd[1581]: time="2026-04-17T00:16:47.150960348Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 17 00:16:47.829934 tar[1535]: linux-amd64/README.md Apr 17 00:16:47.853973 systemd-logind[1520]: New session 2 of user core. Apr 17 00:16:48.123942 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 17 00:16:48.907249 containerd[1581]: time="2026-04-17T00:16:48.709289171Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 17 00:16:49.285688 containerd[1581]: time="2026-04-17T00:16:49.014569276Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 17 00:16:49.499194 containerd[1581]: time="2026-04-17T00:16:49.495001766Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 17 00:16:49.682883 sshd[1687]: Connection closed by 10.0.0.1 port 40818 Apr 17 00:16:49.688737 sshd-session[1677]: pam_unix(sshd:session): session closed for user core Apr 17 00:16:50.027297 containerd[1581]: time="2026-04-17T00:16:50.000616497Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 17 00:16:50.114863 systemd-logind[1520]: Watching system buttons on /dev/input/event2 (Power Button) Apr 17 00:16:50.257490 systemd[1]: sshd@1-10.0.0.6:22-10.0.0.1:40818.service: Deactivated successfully. Apr 17 00:16:50.568751 containerd[1581]: time="2026-04-17T00:16:50.145931847Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 17 00:16:50.793869 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 17 00:16:50.817755 containerd[1581]: time="2026-04-17T00:16:50.802179995Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 17 00:16:50.961292 containerd[1581]: time="2026-04-17T00:16:50.829391181Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 17 00:16:50.894983 systemd[1]: session-2.scope: Deactivated successfully. 
Apr 17 00:16:51.778634 systemd-logind[1520]: Session 2 logged out. Waiting for processes to exit. Apr 17 00:16:52.066276 systemd[1]: Started sshd@2-10.0.0.6:22-10.0.0.1:59182.service - OpenSSH per-connection server daemon (10.0.0.1:59182). Apr 17 00:16:52.250729 containerd[1581]: time="2026-04-17T00:16:52.248643071Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 17 00:16:52.301831 containerd[1581]: time="2026-04-17T00:16:52.294882884Z" level=info msg="metadata content store policy set" policy=shared Apr 17 00:16:52.567914 systemd-logind[1520]: Removed session 2. Apr 17 00:16:52.822607 containerd[1581]: time="2026-04-17T00:16:52.805619335Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Apr 17 00:16:52.846611 containerd[1581]: time="2026-04-17T00:16:52.843262360Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 17 00:16:52.846611 containerd[1581]: time="2026-04-17T00:16:52.843740678Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 17 00:16:52.846611 containerd[1581]: time="2026-04-17T00:16:52.843842707Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 17 00:16:52.846611 containerd[1581]: time="2026-04-17T00:16:52.843984753Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 17 00:16:52.846611 containerd[1581]: time="2026-04-17T00:16:52.844004777Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 17 00:16:52.846611 containerd[1581]: time="2026-04-17T00:16:52.844021947Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 17 00:16:52.846611 containerd[1581]: time="2026-04-17T00:16:52.844127425Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 17 00:16:52.846611 containerd[1581]: time="2026-04-17T00:16:52.844218656Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 17 00:16:52.846611 containerd[1581]: time="2026-04-17T00:16:52.844231276Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 17 00:16:52.846611 containerd[1581]: time="2026-04-17T00:16:52.844240825Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 17 00:16:52.846611 containerd[1581]: time="2026-04-17T00:16:52.844251872Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Apr 17 00:16:52.846611 containerd[1581]: time="2026-04-17T00:16:52.844493592Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 17 00:16:52.846611 containerd[1581]: time="2026-04-17T00:16:52.844543374Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 17 00:16:52.846611 containerd[1581]: time="2026-04-17T00:16:52.844558739Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 17 00:16:53.492367 containerd[1581]: time="2026-04-17T00:16:52.844605437Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 17 00:16:53.492367 containerd[1581]: time="2026-04-17T00:16:52.844616376Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 17 00:16:53.492367 containerd[1581]: time="2026-04-17T00:16:52.844626137Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 17 00:16:53.492367 containerd[1581]: time="2026-04-17T00:16:52.844635196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection 
type=io.containerd.grpc.v1 Apr 17 00:16:53.492367 containerd[1581]: time="2026-04-17T00:16:52.844645027Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 17 00:16:53.492367 containerd[1581]: time="2026-04-17T00:16:52.844697481Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 17 00:16:53.492367 containerd[1581]: time="2026-04-17T00:16:52.844709047Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 17 00:16:53.492367 containerd[1581]: time="2026-04-17T00:16:52.844746693Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 17 00:16:53.492367 containerd[1581]: time="2026-04-17T00:16:52.845021132Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 17 00:16:53.492367 containerd[1581]: time="2026-04-17T00:16:52.979900690Z" level=info msg="Start snapshots syncer" Apr 17 00:16:53.492367 containerd[1581]: time="2026-04-17T00:16:53.053954607Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 17 00:16:53.492731 containerd[1581]: time="2026-04-17T00:16:53.067590229Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 17 00:16:53.492731 containerd[1581]: time="2026-04-17T00:16:53.067964511Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 17 00:16:53.558932 containerd[1581]: time="2026-04-17T00:16:53.209280040Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 17 00:16:53.558932 containerd[1581]: time="2026-04-17T00:16:53.477602408Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 17 00:16:53.558932 containerd[1581]: time="2026-04-17T00:16:53.492628496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 17 00:16:53.558932 containerd[1581]: time="2026-04-17T00:16:53.492708407Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 17 00:16:53.558932 containerd[1581]: time="2026-04-17T00:16:53.492720986Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 17 00:16:53.558932 containerd[1581]: time="2026-04-17T00:16:53.492816775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 17 00:16:53.558932 containerd[1581]: time="2026-04-17T00:16:53.492898289Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 17 00:16:53.695567 containerd[1581]: time="2026-04-17T00:16:53.492913937Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 17 00:16:53.839752 containerd[1581]: time="2026-04-17T00:16:53.763516554Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 17 00:16:53.985690 containerd[1581]: time="2026-04-17T00:16:53.874539303Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 17 00:16:54.004281 containerd[1581]: time="2026-04-17T00:16:54.002240757Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 17 00:16:54.004281 containerd[1581]: time="2026-04-17T00:16:54.003829992Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 17 00:16:54.004281 containerd[1581]: time="2026-04-17T00:16:54.003862527Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 17 00:16:54.004281 containerd[1581]: time="2026-04-17T00:16:54.003870653Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 17 00:16:54.004281 containerd[1581]: time="2026-04-17T00:16:54.003879496Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 17 00:16:54.004281 containerd[1581]: time="2026-04-17T00:16:54.003886396Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 17 00:16:54.004281 containerd[1581]: time="2026-04-17T00:16:54.003894161Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 17 00:16:54.004281 containerd[1581]: time="2026-04-17T00:16:54.003920638Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 17 00:16:54.004281 containerd[1581]: time="2026-04-17T00:16:54.004013708Z" level=info msg="runtime interface created" Apr 17 00:16:54.004281 containerd[1581]: time="2026-04-17T00:16:54.004020471Z" level=info msg="created NRI interface" Apr 17 00:16:54.074508 containerd[1581]: time="2026-04-17T00:16:54.051540696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 17 00:16:54.074706 containerd[1581]: time="2026-04-17T00:16:54.074537512Z" level=info msg="Connect containerd service" Apr 17 00:16:54.080825 containerd[1581]: time="2026-04-17T00:16:54.079630291Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 17 00:16:54.297750 
containerd[1581]: time="2026-04-17T00:16:54.296013396Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 00:16:54.305485 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 59182 ssh2: RSA SHA256:MHfIcFfe65TofFgVCIPqFAtVPMQGq/OUEkQWKadPMKg Apr 17 00:16:54.369554 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:16:55.003376 systemd-logind[1520]: New session 3 of user core. Apr 17 00:16:55.424350 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 00:16:56.604997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 00:16:56.678481 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 00:16:56.792830 containerd[1581]: time="2026-04-17T00:16:56.784467948Z" level=info msg="Start subscribing containerd event" Apr 17 00:16:56.792830 containerd[1581]: time="2026-04-17T00:16:56.784994904Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 17 00:16:56.792830 containerd[1581]: time="2026-04-17T00:16:56.788457408Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 17 00:16:56.814643 containerd[1581]: time="2026-04-17T00:16:56.804605658Z" level=info msg="Start recovering state" Apr 17 00:16:56.807768 systemd[1]: Started session-3.scope - Session 3 of User core. 
Apr 17 00:16:56.820686 containerd[1581]: time="2026-04-17T00:16:56.819212629Z" level=info msg="Start event monitor" Apr 17 00:16:56.820686 containerd[1581]: time="2026-04-17T00:16:56.819353827Z" level=info msg="Start cni network conf syncer for default" Apr 17 00:16:56.820686 containerd[1581]: time="2026-04-17T00:16:56.819449400Z" level=info msg="Start streaming server" Apr 17 00:16:56.820686 containerd[1581]: time="2026-04-17T00:16:56.819518315Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 17 00:16:56.820686 containerd[1581]: time="2026-04-17T00:16:56.819524874Z" level=info msg="runtime interface starting up..." Apr 17 00:16:56.820686 containerd[1581]: time="2026-04-17T00:16:56.819530582Z" level=info msg="starting plugins..." Apr 17 00:16:56.820686 containerd[1581]: time="2026-04-17T00:16:56.819547947Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 17 00:16:56.826589 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 00:16:56.827336 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 00:16:56.827681 containerd[1581]: time="2026-04-17T00:16:56.827600772Z" level=info msg="containerd successfully booted in 15.041790s" Apr 17 00:16:56.829791 systemd[1]: Startup finished in 13.502s (kernel) + 1min 15.715s (initrd) + 57.999s (userspace) = 2min 27.217s. Apr 17 00:16:57.112601 sshd[1717]: Connection closed by 10.0.0.1 port 59182 Apr 17 00:16:57.114706 sshd-session[1694]: pam_unix(sshd:session): session closed for user core Apr 17 00:16:57.133998 systemd[1]: sshd@2-10.0.0.6:22-10.0.0.1:59182.service: Deactivated successfully. Apr 17 00:16:57.138263 systemd[1]: session-3.scope: Deactivated successfully. Apr 17 00:16:57.264480 systemd-logind[1520]: Session 3 logged out. Waiting for processes to exit. Apr 17 00:16:57.448882 systemd-logind[1520]: Removed session 3. 
Apr 17 00:17:08.344484 systemd[1]: Started sshd@3-10.0.0.6:22-10.0.0.1:43758.service - OpenSSH per-connection server daemon (10.0.0.1:43758). Apr 17 00:17:09.704009 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 43758 ssh2: RSA SHA256:MHfIcFfe65TofFgVCIPqFAtVPMQGq/OUEkQWKadPMKg Apr 17 00:17:09.706601 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:17:10.075823 systemd-logind[1520]: New session 4 of user core. Apr 17 00:17:10.105656 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 17 00:17:10.501592 sshd[1732]: Connection closed by 10.0.0.1 port 43758 Apr 17 00:17:10.503855 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Apr 17 00:17:10.578339 systemd[1]: sshd@3-10.0.0.6:22-10.0.0.1:43758.service: Deactivated successfully. Apr 17 00:17:10.652292 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 00:17:10.679431 systemd-logind[1520]: Session 4 logged out. Waiting for processes to exit. Apr 17 00:17:10.693785 systemd[1]: Started sshd@4-10.0.0.6:22-10.0.0.1:52416.service - OpenSSH per-connection server daemon (10.0.0.1:52416). Apr 17 00:17:10.703746 systemd-logind[1520]: Removed session 4. Apr 17 00:17:11.566555 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 52416 ssh2: RSA SHA256:MHfIcFfe65TofFgVCIPqFAtVPMQGq/OUEkQWKadPMKg Apr 17 00:17:11.588016 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:17:11.713988 kubelet[1715]: E0417 00:17:11.712810 1715 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 00:17:11.774851 systemd-logind[1520]: New session 5 of user core. 
Apr 17 00:17:11.776908 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 00:17:11.790019 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 00:17:11.793625 systemd[1]: kubelet.service: Consumed 17.949s CPU time, 259.6M memory peak. Apr 17 00:17:11.876791 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 17 00:17:12.393803 sshd[1742]: Connection closed by 10.0.0.1 port 52416 Apr 17 00:17:12.400969 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Apr 17 00:17:12.433934 systemd[1]: sshd@4-10.0.0.6:22-10.0.0.1:52416.service: Deactivated successfully. Apr 17 00:17:12.454432 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 00:17:12.484613 systemd-logind[1520]: Session 5 logged out. Waiting for processes to exit. Apr 17 00:17:12.547371 systemd[1]: Started sshd@5-10.0.0.6:22-10.0.0.1:52426.service - OpenSSH per-connection server daemon (10.0.0.1:52426). Apr 17 00:17:12.565581 systemd-logind[1520]: Removed session 5. Apr 17 00:17:14.764689 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 52426 ssh2: RSA SHA256:MHfIcFfe65TofFgVCIPqFAtVPMQGq/OUEkQWKadPMKg Apr 17 00:17:15.097303 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:17:16.451220 systemd-logind[1520]: New session 6 of user core. Apr 17 00:17:16.559202 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 17 00:17:17.667592 update_engine[1524]: I20260417 00:17:17.663826 1524 update_attempter.cc:509] Updating boot flags... Apr 17 00:17:18.099735 sshd[1751]: Connection closed by 10.0.0.1 port 52426 Apr 17 00:17:18.128473 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Apr 17 00:17:18.341948 systemd[1]: Started sshd@6-10.0.0.6:22-10.0.0.1:52436.service - OpenSSH per-connection server daemon (10.0.0.1:52436). Apr 17 00:17:18.357582 systemd[1]: sshd@5-10.0.0.6:22-10.0.0.1:52426.service: Deactivated successfully. 
Apr 17 00:17:18.357849 systemd[1]: sshd@5-10.0.0.6:22-10.0.0.1:52426.service: Consumed 1.313s CPU time, 3.8M memory peak. Apr 17 00:17:18.439710 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 00:17:18.571630 systemd-logind[1520]: Session 6 logged out. Waiting for processes to exit. Apr 17 00:17:18.593196 systemd-logind[1520]: Removed session 6. Apr 17 00:17:18.965792 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 52436 ssh2: RSA SHA256:MHfIcFfe65TofFgVCIPqFAtVPMQGq/OUEkQWKadPMKg Apr 17 00:17:18.978631 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:17:19.940803 systemd-logind[1520]: New session 7 of user core. Apr 17 00:17:20.372899 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 00:17:21.233443 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 00:17:21.233782 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 00:17:21.861367 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 17 00:17:21.909609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 00:17:25.029127 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 00:17:25.099757 (kubelet)[1803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 00:17:27.681537 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 17 00:17:27.794627 (dockerd)[1813]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 00:17:29.129493 kubelet[1803]: E0417 00:17:29.123892 1803 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 00:17:29.144175 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 00:17:29.173822 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 00:17:29.207432 systemd[1]: kubelet.service: Consumed 5.251s CPU time, 111M memory peak. Apr 17 00:17:30.483295 dockerd[1813]: time="2026-04-17T00:17:30.482206784Z" level=info msg="Starting up" Apr 17 00:17:30.497551 dockerd[1813]: time="2026-04-17T00:17:30.495927462Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 17 00:17:30.817355 dockerd[1813]: time="2026-04-17T00:17:30.813417801Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 17 00:17:31.310529 dockerd[1813]: time="2026-04-17T00:17:31.309023833Z" level=info msg="Loading containers: start." Apr 17 00:17:31.379149 kernel: Initializing XFRM netlink socket Apr 17 00:17:35.407761 systemd-networkd[1508]: docker0: Link UP Apr 17 00:17:35.553812 dockerd[1813]: time="2026-04-17T00:17:35.552301426Z" level=info msg="Loading containers: done." 
Apr 17 00:17:35.817900 dockerd[1813]: time="2026-04-17T00:17:35.817313545Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 00:17:35.825806 dockerd[1813]: time="2026-04-17T00:17:35.820299146Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 17 00:17:35.825806 dockerd[1813]: time="2026-04-17T00:17:35.820598391Z" level=info msg="Initializing buildkit" Apr 17 00:17:36.463310 dockerd[1813]: time="2026-04-17T00:17:36.441901451Z" level=info msg="Completed buildkit initialization" Apr 17 00:17:36.665607 dockerd[1813]: time="2026-04-17T00:17:36.665083902Z" level=info msg="Daemon has completed initialization" Apr 17 00:17:36.665607 dockerd[1813]: time="2026-04-17T00:17:36.665639910Z" level=info msg="API listen on /run/docker.sock" Apr 17 00:17:36.675965 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 17 00:17:39.410931 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 17 00:17:39.889740 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 00:17:42.547235 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 00:17:42.577621 (kubelet)[2039]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 00:17:43.608339 containerd[1581]: time="2026-04-17T00:17:43.607737946Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 17 00:17:44.144103 kubelet[2039]: E0417 00:17:44.143933 2039 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 00:17:44.146416 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 00:17:44.146679 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 00:17:44.147265 systemd[1]: kubelet.service: Consumed 2.663s CPU time, 108.9M memory peak. Apr 17 00:17:47.212122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount170834207.mount: Deactivated successfully. Apr 17 00:17:54.409731 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 17 00:17:54.419024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 00:17:55.431635 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 00:17:55.450744 (kubelet)[2106]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 00:17:59.920009 kubelet[2106]: E0417 00:17:59.919439 2106 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 00:17:59.937245 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 00:17:59.937544 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 00:17:59.947451 systemd[1]: kubelet.service: Consumed 4.317s CPU time, 109.6M memory peak. Apr 17 00:18:06.084191 containerd[1581]: time="2026-04-17T00:18:06.081458185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:18:06.134807 containerd[1581]: time="2026-04-17T00:18:06.084627800Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27099952" Apr 17 00:18:06.315249 containerd[1581]: time="2026-04-17T00:18:06.310763456Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:18:06.689696 containerd[1581]: time="2026-04-17T00:18:06.688634744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:18:06.813453 containerd[1581]: time="2026-04-17T00:18:06.811709975Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id 
\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 23.203674655s" Apr 17 00:18:06.816308 containerd[1581]: time="2026-04-17T00:18:06.814121833Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\"" Apr 17 00:18:06.869656 containerd[1581]: time="2026-04-17T00:18:06.866835175Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 17 00:18:10.599204 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 17 00:18:10.673306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 00:18:11.822432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 00:18:11.844757 (kubelet)[2134]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 00:18:14.154360 kubelet[2134]: E0417 00:18:14.153629 2134 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 00:18:14.166163 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 00:18:14.166411 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 00:18:14.167150 systemd[1]: kubelet.service: Consumed 2.654s CPU time, 114.5M memory peak. 
Apr 17 00:18:16.606517 containerd[1581]: time="2026-04-17T00:18:16.604310747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:18:16.609600 containerd[1581]: time="2026-04-17T00:18:16.609250960Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252670" Apr 17 00:18:16.699289 containerd[1581]: time="2026-04-17T00:18:16.698528373Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:18:16.993554 containerd[1581]: time="2026-04-17T00:18:16.990480893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:18:17.014218 containerd[1581]: time="2026-04-17T00:18:17.013277879Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 10.142251843s" Apr 17 00:18:17.014218 containerd[1581]: time="2026-04-17T00:18:17.013736505Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\"" Apr 17 00:18:17.083902 containerd[1581]: time="2026-04-17T00:18:17.082486891Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 17 00:18:21.241005 containerd[1581]: time="2026-04-17T00:18:21.240138193Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:18:21.244282 containerd[1581]: time="2026-04-17T00:18:21.242398718Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810823" Apr 17 00:18:21.245820 containerd[1581]: time="2026-04-17T00:18:21.245696873Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:18:21.270328 containerd[1581]: time="2026-04-17T00:18:21.268143513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:18:21.273525 containerd[1581]: time="2026-04-17T00:18:21.273322525Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 4.189436146s" Apr 17 00:18:21.273825 containerd[1581]: time="2026-04-17T00:18:21.273633476Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\"" Apr 17 00:18:21.283014 containerd[1581]: time="2026-04-17T00:18:21.282706513Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 17 00:18:24.516867 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 17 00:18:24.592431 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 00:18:26.166988 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 00:18:26.196264 (kubelet)[2158]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 00:18:28.630380 kubelet[2158]: E0417 00:18:28.629917 2158 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 00:18:28.643548 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 00:18:28.643906 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 00:18:28.649574 systemd[1]: kubelet.service: Consumed 3.042s CPU time, 109.4M memory peak.
Apr 17 00:18:39.017899 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Apr 17 00:18:39.099247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 00:18:43.274833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 00:18:43.389164 (kubelet)[2175]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 00:18:50.638322 kubelet[2175]: E0417 00:18:50.637879 2175 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 00:18:50.652283 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 00:18:50.652729 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 00:18:50.653764 systemd[1]: kubelet.service: Consumed 8.588s CPU time, 110.2M memory peak.
Apr 17 00:18:51.593015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2692492194.mount: Deactivated successfully.
Apr 17 00:19:01.114802 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Apr 17 00:19:01.595536 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 00:19:06.982339 containerd[1581]: time="2026-04-17T00:19:06.965615699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:19:06.995562 containerd[1581]: time="2026-04-17T00:19:06.990797256Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972848"
Apr 17 00:19:07.215967 containerd[1581]: time="2026-04-17T00:19:07.215528364Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:19:08.449333 containerd[1581]: time="2026-04-17T00:19:08.448882440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:19:08.857508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 00:19:09.275726 (kubelet)[2194]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 00:19:09.365809 containerd[1581]: time="2026-04-17T00:19:09.279209467Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 47.963503158s"
Apr 17 00:19:09.365809 containerd[1581]: time="2026-04-17T00:19:09.281587563Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\""
Apr 17 00:19:09.682512 containerd[1581]: time="2026-04-17T00:19:09.676018449Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 17 00:19:25.378313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1148390083.mount: Deactivated successfully.
Apr 17 00:19:31.197479 kubelet[2194]: E0417 00:19:31.135975 2194 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 00:19:31.373955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 00:19:31.396791 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 00:19:31.484350 systemd[1]: kubelet.service: Consumed 20.520s CPU time, 110.5M memory peak.
Apr 17 00:19:41.514546 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Apr 17 00:19:41.756142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 00:19:47.698731 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 00:19:47.894543 (kubelet)[2222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 00:19:51.389827 kubelet[2222]: E0417 00:19:51.388402 2222 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 00:19:51.415459 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 00:19:51.415786 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 00:19:51.452850 systemd[1]: kubelet.service: Consumed 6.484s CPU time, 110.6M memory peak.
Apr 17 00:20:01.874854 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Apr 17 00:20:01.990746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 00:20:12.863012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 00:20:13.180143 (kubelet)[2276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 00:20:32.425893 kubelet[2276]: E0417 00:20:32.423409 2276 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 00:20:32.505914 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 00:20:32.507843 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 00:20:32.508992 systemd[1]: kubelet.service: Consumed 20.388s CPU time, 110.4M memory peak.
Apr 17 00:20:34.559551 containerd[1581]: time="2026-04-17T00:20:34.556792434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:20:34.564860 containerd[1581]: time="2026-04-17T00:20:34.563275499Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483"
Apr 17 00:20:34.676503 containerd[1581]: time="2026-04-17T00:20:34.675418808Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:20:35.346872 containerd[1581]: time="2026-04-17T00:20:35.345581376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:20:36.232890 containerd[1581]: time="2026-04-17T00:20:36.221239797Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1m26.538240678s"
Apr 17 00:20:36.325877 containerd[1581]: time="2026-04-17T00:20:36.237303901Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Apr 17 00:20:36.325877 containerd[1581]: time="2026-04-17T00:20:36.307293523Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 17 00:20:42.600423 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Apr 17 00:20:42.617284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 00:20:44.656954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount189174345.mount: Deactivated successfully.
Apr 17 00:20:45.396206 containerd[1581]: time="2026-04-17T00:20:45.394821438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:20:45.451552 containerd[1581]: time="2026-04-17T00:20:45.436539991Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150"
Apr 17 00:20:45.453921 containerd[1581]: time="2026-04-17T00:20:45.453810580Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:20:46.397954 containerd[1581]: time="2026-04-17T00:20:46.395797194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:20:46.416200 containerd[1581]: time="2026-04-17T00:20:46.415631011Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 10.107940207s"
Apr 17 00:20:46.416200 containerd[1581]: time="2026-04-17T00:20:46.415994057Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 17 00:20:46.467821 containerd[1581]: time="2026-04-17T00:20:46.465116826Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Apr 17 00:20:50.723013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 00:20:50.949234 (kubelet)[2299]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 00:20:56.376493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3244205173.mount: Deactivated successfully.
Apr 17 00:21:01.525524 kubelet[2299]: E0417 00:21:01.501982 2299 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 00:21:01.544796 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 00:21:01.564290 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 00:21:01.565772 systemd[1]: kubelet.service: Consumed 12.837s CPU time, 110.6M memory peak.
Apr 17 00:21:11.664856 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Apr 17 00:21:11.734771 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 00:21:16.255762 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 00:21:16.575809 (kubelet)[2329]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 00:21:28.013470 kubelet[2329]: E0417 00:21:28.012961 2329 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 00:21:28.025140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 00:21:28.025502 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 00:21:28.050583 systemd[1]: kubelet.service: Consumed 12.362s CPU time, 110.9M memory peak.
Apr 17 00:21:38.251809 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Apr 17 00:21:38.372755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 00:21:41.271028 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 00:21:41.378400 (kubelet)[2392]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 00:21:44.398320 containerd[1581]: time="2026-04-17T00:21:44.397766064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:21:44.426709 containerd[1581]: time="2026-04-17T00:21:44.406525551Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874255"
Apr 17 00:21:44.427159 kubelet[2392]: E0417 00:21:44.398512 2392 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 00:21:44.427388 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 00:21:44.427816 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 00:21:44.428765 systemd[1]: kubelet.service: Consumed 4.049s CPU time, 111.1M memory peak.
Apr 17 00:21:44.434089 containerd[1581]: time="2026-04-17T00:21:44.433695466Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:21:44.476181 containerd[1581]: time="2026-04-17T00:21:44.475297000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:21:44.502912 containerd[1581]: time="2026-04-17T00:21:44.502057084Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 58.00516375s"
Apr 17 00:21:44.502912 containerd[1581]: time="2026-04-17T00:21:44.502445910Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Apr 17 00:21:54.666739 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Apr 17 00:21:54.977863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 00:21:58.050958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 00:21:58.133620 (kubelet)[2435]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 00:22:04.114369 kubelet[2435]: E0417 00:22:04.107822 2435 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 00:22:04.209490 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 00:22:04.209871 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 00:22:04.243977 systemd[1]: kubelet.service: Consumed 5.988s CPU time, 110.9M memory peak.
Apr 17 00:22:14.375197 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14.
Apr 17 00:22:14.455772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 00:22:19.713930 update_engine[1524]: I20260417 00:22:19.712399 1524 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 17 00:22:19.718018 update_engine[1524]: I20260417 00:22:19.714442 1524 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 17 00:22:19.718018 update_engine[1524]: I20260417 00:22:19.717248 1524 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 17 00:22:19.718018 update_engine[1524]: I20260417 00:22:19.717844 1524 omaha_request_params.cc:62] Current group set to stable
Apr 17 00:22:19.718484 update_engine[1524]: I20260417 00:22:19.718449 1524 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 17 00:22:19.719225 update_engine[1524]: I20260417 00:22:19.719205 1524 update_attempter.cc:643] Scheduling an action processor start.
Apr 17 00:22:19.719501 update_engine[1524]: I20260417 00:22:19.719480 1524 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 17 00:22:19.723836 update_engine[1524]: I20260417 00:22:19.723567 1524 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 17 00:22:19.739744 update_engine[1524]: I20260417 00:22:19.725816 1524 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 17 00:22:19.739744 update_engine[1524]: I20260417 00:22:19.725838 1524 omaha_request_action.cc:272] Request:
Apr 17 00:22:19.739744 update_engine[1524]:
Apr 17 00:22:19.739744 update_engine[1524]:
Apr 17 00:22:19.739744 update_engine[1524]:
Apr 17 00:22:19.739744 update_engine[1524]:
Apr 17 00:22:19.739744 update_engine[1524]:
Apr 17 00:22:19.739744 update_engine[1524]:
Apr 17 00:22:19.739744 update_engine[1524]:
Apr 17 00:22:19.739744 update_engine[1524]:
Apr 17 00:22:19.739744 update_engine[1524]: I20260417 00:22:19.725845 1524 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 17 00:22:19.754021 locksmithd[1575]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 17 00:22:19.812880 update_engine[1524]: I20260417 00:22:19.756062 1524 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 17 00:22:19.925433 update_engine[1524]: I20260417 00:22:19.922981 1524 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 17 00:22:19.927701 update_engine[1524]: E20260417 00:22:19.925851 1524 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 17 00:22:19.931485 update_engine[1524]: I20260417 00:22:19.927315 1524 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 17 00:22:20.497990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 00:22:20.669081 (kubelet)[2455]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 00:22:30.622307 update_engine[1524]: I20260417 00:22:30.615844 1524 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 17 00:22:30.717026 update_engine[1524]: I20260417 00:22:30.667554 1524 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 17 00:22:30.717026 update_engine[1524]: I20260417 00:22:30.682621 1524 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 17 00:22:30.717026 update_engine[1524]: E20260417 00:22:30.697022 1524 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 17 00:22:30.717026 update_engine[1524]: I20260417 00:22:30.711629 1524 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 17 00:22:39.758452 kubelet[2455]: E0417 00:22:39.744691 2455 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 00:22:39.808705 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 00:22:39.809778 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 00:22:39.892475 systemd[1]: kubelet.service: Consumed 17.369s CPU time, 110.9M memory peak.
Apr 17 00:22:40.640687 update_engine[1524]: I20260417 00:22:40.628017 1524 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 17 00:22:40.680203 update_engine[1524]: I20260417 00:22:40.644076 1524 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 17 00:22:40.681334 update_engine[1524]: I20260417 00:22:40.678310 1524 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 17 00:22:40.717627 update_engine[1524]: E20260417 00:22:40.716346 1524 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 17 00:22:40.740430 update_engine[1524]: I20260417 00:22:40.718868 1524 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 17 00:22:49.945221 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15.
Apr 17 00:22:50.193886 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 00:22:50.624752 update_engine[1524]: I20260417 00:22:50.622314 1524 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 17 00:22:50.633411 update_engine[1524]: I20260417 00:22:50.628271 1524 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 17 00:22:50.633411 update_engine[1524]: I20260417 00:22:50.632561 1524 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 17 00:22:50.646564 update_engine[1524]: E20260417 00:22:50.645208 1524 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 17 00:22:50.646564 update_engine[1524]: I20260417 00:22:50.645485 1524 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 17 00:22:50.646564 update_engine[1524]: I20260417 00:22:50.645499 1524 omaha_request_action.cc:617] Omaha request response:
Apr 17 00:22:50.646564 update_engine[1524]: E20260417 00:22:50.645856 1524 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 17 00:22:50.646564 update_engine[1524]: I20260417 00:22:50.646554 1524 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 17 00:22:50.646564 update_engine[1524]: I20260417 00:22:50.646571 1524 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 17 00:22:50.646930 update_engine[1524]: I20260417 00:22:50.646577 1524 update_attempter.cc:306] Processing Done.
Apr 17 00:22:50.646930 update_engine[1524]: E20260417 00:22:50.646592 1524 update_attempter.cc:619] Update failed.
Apr 17 00:22:50.646930 update_engine[1524]: I20260417 00:22:50.646620 1524 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 17 00:22:50.646930 update_engine[1524]: I20260417 00:22:50.646627 1524 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 17 00:22:50.646930 update_engine[1524]: I20260417 00:22:50.646633 1524 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 17 00:22:50.646930 update_engine[1524]: I20260417 00:22:50.646744 1524 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 17 00:22:50.646930 update_engine[1524]: I20260417 00:22:50.646790 1524 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 17 00:22:50.646930 update_engine[1524]: I20260417 00:22:50.646796 1524 omaha_request_action.cc:272] Request:
Apr 17 00:22:50.646930 update_engine[1524]:
Apr 17 00:22:50.646930 update_engine[1524]:
Apr 17 00:22:50.646930 update_engine[1524]:
Apr 17 00:22:50.646930 update_engine[1524]:
Apr 17 00:22:50.646930 update_engine[1524]:
Apr 17 00:22:50.646930 update_engine[1524]:
Apr 17 00:22:50.646930 update_engine[1524]: I20260417 00:22:50.646803 1524 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 17 00:22:50.646930 update_engine[1524]: I20260417 00:22:50.646829 1524 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 17 00:22:50.666682 update_engine[1524]: I20260417 00:22:50.660283 1524 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 17 00:22:50.676930 update_engine[1524]: E20260417 00:22:50.675448 1524 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 17 00:22:50.725324 update_engine[1524]: I20260417 00:22:50.701383 1524 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 17 00:22:50.725324 update_engine[1524]: I20260417 00:22:50.710968 1524 omaha_request_action.cc:617] Omaha request response:
Apr 17 00:22:50.751389 locksmithd[1575]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 17 00:22:50.765904 update_engine[1524]: I20260417 00:22:50.745410 1524 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 17 00:22:50.765904 update_engine[1524]: I20260417 00:22:50.745944 1524 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 17 00:22:50.765904 update_engine[1524]: I20260417 00:22:50.745954 1524 update_attempter.cc:306] Processing Done.
Apr 17 00:22:50.765904 update_engine[1524]: I20260417 00:22:50.746953 1524 update_attempter.cc:310] Error event sent.
Apr 17 00:22:50.765904 update_engine[1524]: I20260417 00:22:50.750613 1524 update_check_scheduler.cc:74] Next update check in 44m24s
Apr 17 00:22:50.766706 locksmithd[1575]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 17 00:22:50.766818 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 17 00:22:50.768905 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 17 00:22:50.778259 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 00:22:51.317466 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 00:22:52.790588 systemd[1]: Reload requested from client PID 2475 ('systemctl') (unit session-7.scope)...
Apr 17 00:22:52.827876 systemd[1]: Reloading...
Apr 17 00:22:58.616216 zram_generator::config[2521]: No configuration found.
Apr 17 00:23:14.021880 systemd[1]: Reloading finished in 21148 ms.
Apr 17 00:23:14.741298 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 00:23:14.777710 (kubelet)[2557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 17 00:23:15.081588 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 00:23:15.115341 systemd[1]: kubelet.service: Deactivated successfully.
Apr 17 00:23:15.327774 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 00:23:15.343798 systemd[1]: kubelet.service: Consumed 2.223s CPU time, 102.1M memory peak.
Apr 17 00:23:15.913068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 00:23:21.138491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 00:23:21.421509 (kubelet)[2574]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 17 00:23:25.997218 kubelet[2574]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 17 00:23:25.997218 kubelet[2574]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 00:23:26.005600 kubelet[2574]: I0417 00:23:25.997985 2574 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 17 00:23:33.195412 kubelet[2574]: I0417 00:23:33.194562 2574 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 17 00:23:33.195412 kubelet[2574]: I0417 00:23:33.195137 2574 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 17 00:23:33.195412 kubelet[2574]: I0417 00:23:33.195486 2574 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 17 00:23:33.195412 kubelet[2574]: I0417 00:23:33.195528 2574 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 17 00:23:33.239673 kubelet[2574]: I0417 00:23:33.196809 2574 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 17 00:23:33.424182 kubelet[2574]: E0417 00:23:33.423536 2574 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 00:23:33.428915 kubelet[2574]: I0417 00:23:33.424925 2574 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 17 00:23:33.719712 kubelet[2574]: I0417 00:23:33.713918 2574 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 17 00:23:33.987086 kubelet[2574]: I0417 00:23:33.982987 2574 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 17 00:23:33.994351 kubelet[2574]: I0417 00:23:33.992231 2574 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 00:23:33.997170 kubelet[2574]: I0417 00:23:33.994517 2574 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 17 00:23:33.997170 kubelet[2574]: I0417 00:23:33.996476 2574 topology_manager.go:138] "Creating topology manager with none policy"
Apr 17 00:23:33.997170 kubelet[2574]: I0417 00:23:33.996539 2574 container_manager_linux.go:306] "Creating device plugin manager"
Apr 17 00:23:33.997638 kubelet[2574]: I0417 00:23:33.997384 2574 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 17 00:23:34.093962 kubelet[2574]: I0417 00:23:34.091919 2574 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 00:23:34.103178 kubelet[2574]: I0417 00:23:34.101671 2574 kubelet.go:475] "Attempting to sync node with API server"
Apr 17 00:23:34.106584 kubelet[2574]: I0417 00:23:34.105517 2574 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 17 00:23:34.106584 kubelet[2574]: I0417 00:23:34.106543 2574 kubelet.go:387] "Adding apiserver pod source"
Apr 17 00:23:34.106777 kubelet[2574]: I0417 00:23:34.106710 2574 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 17 00:23:34.108384 kubelet[2574]: E0417 00:23:34.108196 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 00:23:34.108968 kubelet[2574]: E0417 00:23:34.108901 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 00:23:34.177794 kubelet[2574]: I0417 00:23:34.176971 2574 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Apr 17 00:23:34.199299 kubelet[2574]: I0417 00:23:34.197438 2574 kubelet.go:940] "Not starting ClusterTrustBundle informer because we
are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 00:23:34.199299 kubelet[2574]: I0417 00:23:34.199215 2574 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 00:23:34.204120 kubelet[2574]: W0417 00:23:34.201694 2574 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 17 00:23:34.234379 kubelet[2574]: I0417 00:23:34.232453 2574 server.go:1262] "Started kubelet" Apr 17 00:23:34.234379 kubelet[2574]: I0417 00:23:34.232703 2574 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 00:23:34.242237 kubelet[2574]: I0417 00:23:34.240078 2574 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 00:23:34.242237 kubelet[2574]: I0417 00:23:34.241847 2574 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 17 00:23:34.282464 kubelet[2574]: I0417 00:23:34.277526 2574 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 00:23:34.288884 kubelet[2574]: I0417 00:23:34.288604 2574 server.go:310] "Adding debug handlers to kubelet server" Apr 17 00:23:34.289860 kubelet[2574]: I0417 00:23:34.289844 2574 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 00:23:34.290890 kubelet[2574]: I0417 00:23:34.290814 2574 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 00:23:34.299760 kubelet[2574]: I0417 00:23:34.299304 2574 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 17 00:23:34.301529 kubelet[2574]: E0417 00:23:34.301419 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 00:23:34.304769 
kubelet[2574]: I0417 00:23:34.304687 2574 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 00:23:34.304964 kubelet[2574]: I0417 00:23:34.304933 2574 reconciler.go:29] "Reconciler: start to sync state" Apr 17 00:23:34.305141 kubelet[2574]: I0417 00:23:34.305120 2574 factory.go:223] Registration of the systemd container factory successfully Apr 17 00:23:34.305278 kubelet[2574]: I0417 00:23:34.305239 2574 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 00:23:34.306425 kubelet[2574]: E0417 00:23:34.306320 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 00:23:34.306796 kubelet[2574]: E0417 00:23:34.306697 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="200ms" Apr 17 00:23:34.307711 kubelet[2574]: I0417 00:23:34.307665 2574 factory.go:223] Registration of the containerd container factory successfully Apr 17 00:23:34.411307 kubelet[2574]: E0417 00:23:34.407410 2574 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 00:23:34.432954 kubelet[2574]: E0417 00:23:34.432213 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 00:23:34.432954 kubelet[2574]: E0417 00:23:34.416929 2574 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6fd1f5f7e8331 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 00:23:34.232179505 +0000 UTC m=+12.659881885,LastTimestamp:2026-04-17 00:23:34.232179505 +0000 UTC m=+12.659881885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 00:23:34.512858 kubelet[2574]: E0417 00:23:34.512192 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="400ms" Apr 17 00:23:34.574547 kubelet[2574]: E0417 00:23:34.572954 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 00:23:34.702101 kubelet[2574]: E0417 00:23:34.685486 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 00:23:34.836711 kubelet[2574]: E0417 00:23:34.818991 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 
00:23:35.445933 kubelet[2574]: E0417 00:23:35.445557 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 00:23:35.453692 kubelet[2574]: E0417 00:23:35.450872 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="800ms" Apr 17 00:23:35.643991 kubelet[2574]: E0417 00:23:35.643227 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 00:23:35.648324 kubelet[2574]: E0417 00:23:35.644556 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 00:23:35.654741 kubelet[2574]: E0417 00:23:35.654558 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 00:23:35.656126 kubelet[2574]: E0417 00:23:35.654827 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 00:23:35.656126 kubelet[2574]: E0417 00:23:35.655222 2574 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" 
err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 00:23:35.656313 kubelet[2574]: I0417 00:23:35.655423 2574 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 17 00:23:35.663828 kubelet[2574]: I0417 00:23:35.662744 2574 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 00:23:35.669691 kubelet[2574]: I0417 00:23:35.662689 2574 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 17 00:23:35.669691 kubelet[2574]: I0417 00:23:35.667259 2574 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 00:23:35.669691 kubelet[2574]: I0417 00:23:35.667526 2574 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 17 00:23:35.669691 kubelet[2574]: I0417 00:23:35.668839 2574 kubelet.go:2428] "Starting kubelet main sync loop" Apr 17 00:23:35.669691 kubelet[2574]: E0417 00:23:35.669151 2574 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 00:23:35.684585 kubelet[2574]: I0417 00:23:35.672989 2574 state_mem.go:36] "Initialized new in-memory state store" Apr 17 00:23:35.699077 kubelet[2574]: E0417 00:23:35.697466 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 00:23:35.708397 kubelet[2574]: I0417 00:23:35.707188 2574 policy_none.go:49] "None policy: Start" Apr 17 00:23:35.708397 kubelet[2574]: I0417 00:23:35.708373 2574 memory_manager.go:187] "Starting memorymanager" 
policy="None" Apr 17 00:23:35.708397 kubelet[2574]: I0417 00:23:35.708424 2574 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 00:23:35.764811 kubelet[2574]: E0417 00:23:35.764295 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 00:23:35.785286 kubelet[2574]: E0417 00:23:35.782910 2574 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 00:23:35.796496 kubelet[2574]: I0417 00:23:35.783365 2574 policy_none.go:47] "Start" Apr 17 00:23:35.874167 kubelet[2574]: E0417 00:23:35.873244 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 00:23:36.102135 kubelet[2574]: E0417 00:23:36.012997 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 00:23:36.107872 kubelet[2574]: E0417 00:23:36.013022 2574 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 17 00:23:36.184520 kubelet[2574]: E0417 00:23:36.179016 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 00:23:36.196375 kubelet[2574]: E0417 00:23:36.186474 2574 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6fd1f5f7e8331 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 
00:23:34.232179505 +0000 UTC m=+12.659881885,LastTimestamp:2026-04-17 00:23:34.232179505 +0000 UTC m=+12.659881885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 00:23:36.289683 kubelet[2574]: E0417 00:23:36.287501 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 00:23:36.340278 kubelet[2574]: E0417 00:23:36.339003 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="1.6s" Apr 17 00:23:36.410347 kubelet[2574]: E0417 00:23:36.400158 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 00:23:36.424448 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Apr 17 00:23:36.507373 kubelet[2574]: E0417 00:23:36.506863 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 00:23:36.507373 kubelet[2574]: E0417 00:23:36.506887 2574 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 17 00:23:36.611706 kubelet[2574]: E0417 00:23:36.610434 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 00:23:36.764897 kubelet[2574]: E0417 00:23:36.718615 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 00:23:36.889386 kubelet[2574]: E0417 00:23:36.878886 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 00:23:37.025778 kubelet[2574]: E0417 00:23:37.018136 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 00:23:37.059137 kubelet[2574]: E0417 00:23:37.055148 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 00:23:37.096568 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 17 00:23:37.133558 kubelet[2574]: E0417 00:23:37.132952 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 00:23:37.152987 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 17 00:23:37.210357 kubelet[2574]: E0417 00:23:37.210205 2574 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 00:23:37.216398 kubelet[2574]: I0417 00:23:37.213013 2574 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 00:23:37.216398 kubelet[2574]: I0417 00:23:37.214991 2574 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 00:23:37.271761 kubelet[2574]: I0417 00:23:37.257775 2574 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 00:23:37.273506 kubelet[2574]: E0417 00:23:37.272529 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 00:23:37.284838 kubelet[2574]: E0417 00:23:37.284552 2574 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 00:23:37.285748 kubelet[2574]: E0417 00:23:37.285235 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 00:23:37.511213 kubelet[2574]: I0417 00:23:37.510377 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e54c01254a8f7ce80e4d0140bee4bbdd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e54c01254a8f7ce80e4d0140bee4bbdd\") " pod="kube-system/kube-apiserver-localhost" Apr 17 00:23:37.513983 kubelet[2574]: I0417 00:23:37.513525 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e54c01254a8f7ce80e4d0140bee4bbdd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e54c01254a8f7ce80e4d0140bee4bbdd\") " pod="kube-system/kube-apiserver-localhost" Apr 17 00:23:37.513983 
kubelet[2574]: I0417 00:23:37.513869 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e54c01254a8f7ce80e4d0140bee4bbdd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e54c01254a8f7ce80e4d0140bee4bbdd\") " pod="kube-system/kube-apiserver-localhost" Apr 17 00:23:37.515232 kubelet[2574]: I0417 00:23:37.514195 2574 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 00:23:37.533136 kubelet[2574]: E0417 00:23:37.532783 2574 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 17 00:23:37.577514 kubelet[2574]: E0417 00:23:37.577183 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 00:23:37.619213 kubelet[2574]: I0417 00:23:37.617117 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 17 00:23:37.620592 kubelet[2574]: I0417 00:23:37.620391 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 00:23:37.620592 kubelet[2574]: I0417 
00:23:37.620435 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 00:23:37.620592 kubelet[2574]: I0417 00:23:37.620455 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 00:23:37.620592 kubelet[2574]: I0417 00:23:37.620485 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 00:23:37.620592 kubelet[2574]: I0417 00:23:37.620501 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 00:23:37.643732 systemd[1]: Created slice kubepods-burstable-pode54c01254a8f7ce80e4d0140bee4bbdd.slice - libcontainer container kubepods-burstable-pode54c01254a8f7ce80e4d0140bee4bbdd.slice. 
Apr 17 00:23:37.685983 kubelet[2574]: E0417 00:23:37.684492 2574 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 00:23:37.720724 kubelet[2574]: E0417 00:23:37.719907 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 00:23:37.838258 containerd[1581]: time="2026-04-17T00:23:37.832604580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e54c01254a8f7ce80e4d0140bee4bbdd,Namespace:kube-system,Attempt:0,}" Apr 17 00:23:37.849105 kubelet[2574]: I0417 00:23:37.844554 2574 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 00:23:37.849399 kubelet[2574]: E0417 00:23:37.849113 2574 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 17 00:23:37.858975 systemd[1]: Created slice kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice - libcontainer container kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice. 
Apr 17 00:23:37.925724 kubelet[2574]: E0417 00:23:37.925514 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 00:23:37.938606 kubelet[2574]: E0417 00:23:37.937229 2574 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 00:23:37.959749 kubelet[2574]: E0417 00:23:37.958591 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="3.2s" Apr 17 00:23:37.964922 kubelet[2574]: E0417 00:23:37.963951 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 00:23:37.979294 containerd[1581]: time="2026-04-17T00:23:37.979193294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,}" Apr 17 00:23:37.981292 systemd[1]: Created slice kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice - libcontainer container kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice. 
Apr 17 00:23:38.143124 kubelet[2574]: E0417 00:23:38.142656 2574 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 00:23:38.203891 kubelet[2574]: E0417 00:23:38.203550 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 00:23:38.216476 kubelet[2574]: E0417 00:23:38.215886 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 00:23:38.312869 containerd[1581]: time="2026-04-17T00:23:38.311095218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,}" Apr 17 00:23:38.339541 kubelet[2574]: I0417 00:23:38.338787 2574 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 00:23:38.341828 kubelet[2574]: E0417 00:23:38.341732 2574 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 17 00:23:38.782373 kubelet[2574]: E0417 00:23:38.781337 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 00:23:39.229476 kubelet[2574]: I0417 00:23:39.229125 2574 kubelet_node_status.go:75] "Attempting to register node" 
node="localhost" Apr 17 00:23:39.279495 kubelet[2574]: E0417 00:23:39.278750 2574 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 17 00:23:39.951472 kubelet[2574]: E0417 00:23:39.946858 2574 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 00:23:40.284689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount199039654.mount: Deactivated successfully. Apr 17 00:23:40.510269 containerd[1581]: time="2026-04-17T00:23:40.508386185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 00:23:40.597738 containerd[1581]: time="2026-04-17T00:23:40.591847809Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 00:23:40.621926 containerd[1581]: time="2026-04-17T00:23:40.621608303Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 17 00:23:40.621926 containerd[1581]: time="2026-04-17T00:23:40.621618239Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 17 00:23:40.642782 containerd[1581]: time="2026-04-17T00:23:40.641927648Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 00:23:40.648410 
containerd[1581]: time="2026-04-17T00:23:40.648221844Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Apr 17 00:23:40.839921 containerd[1581]: time="2026-04-17T00:23:40.839233885Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 00:23:40.841811 containerd[1581]: time="2026-04-17T00:23:40.840807989Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.851451207s"
Apr 17 00:23:40.841811 containerd[1581]: time="2026-04-17T00:23:40.841457416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 00:23:40.842518 containerd[1581]: time="2026-04-17T00:23:40.842456792Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.962336694s"
Apr 17 00:23:40.867224 containerd[1581]: time="2026-04-17T00:23:40.865440953Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.487220039s"
Apr 17 00:23:40.909246 kubelet[2574]: I0417 00:23:40.908518 2574 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 17 00:23:40.981514 kubelet[2574]: E0417 00:23:40.980977 2574 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost"
Apr 17 00:23:41.135839 kubelet[2574]: E0417 00:23:41.128849 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 00:23:41.185000 containerd[1581]: time="2026-04-17T00:23:41.176104191Z" level=info msg="connecting to shim 36bba164c7148332d3a8777c2d4ad9319ea5cc96f8e2109e2c56c3aa10d005b6" address="unix:///run/containerd/s/1fd8616eb94714c1343e38db73f43c8dc81fb060bb133b4d75aee2a1063347ae" namespace=k8s.io protocol=ttrpc version=3
Apr 17 00:23:41.197413 kubelet[2574]: E0417 00:23:41.196475 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="6.4s"
Apr 17 00:23:41.208256 containerd[1581]: time="2026-04-17T00:23:41.202609837Z" level=info msg="connecting to shim 430b80e19fea9fc43f85be5069948beafc3ce264a1ea9887c627a3b1baa86f8b" address="unix:///run/containerd/s/86aa2eab26b6677f4e45080413338db26ee4889c5f13e2d0c1dda527286a34b1" namespace=k8s.io protocol=ttrpc version=3
Apr 17 00:23:41.208256 containerd[1581]: time="2026-04-17T00:23:41.202806154Z" level=info msg="connecting to shim b5180993f8b45980db2d929842cd7295fba9194d13d6c0abe9174bb4cb4212c4" address="unix:///run/containerd/s/fbcb292c55b3ea2b85f4523ae2012a577477515f94eaee1c5316e632c16c7f6d" namespace=k8s.io protocol=ttrpc version=3
Apr 17 00:23:42.864264 kubelet[2574]: E0417 00:23:42.863916 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 00:23:43.126447 systemd[1]: Started cri-containerd-36bba164c7148332d3a8777c2d4ad9319ea5cc96f8e2109e2c56c3aa10d005b6.scope - libcontainer container 36bba164c7148332d3a8777c2d4ad9319ea5cc96f8e2109e2c56c3aa10d005b6.
Apr 17 00:23:43.202204 systemd[1]: Started cri-containerd-430b80e19fea9fc43f85be5069948beafc3ce264a1ea9887c627a3b1baa86f8b.scope - libcontainer container 430b80e19fea9fc43f85be5069948beafc3ce264a1ea9887c627a3b1baa86f8b.
Apr 17 00:23:43.293946 systemd[1]: Started cri-containerd-b5180993f8b45980db2d929842cd7295fba9194d13d6c0abe9174bb4cb4212c4.scope - libcontainer container b5180993f8b45980db2d929842cd7295fba9194d13d6c0abe9174bb4cb4212c4.
Apr 17 00:23:43.541389 kubelet[2574]: E0417 00:23:43.537130 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 00:23:44.253025 kubelet[2574]: E0417 00:23:44.252621 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 17 00:23:44.301982 kubelet[2574]: I0417 00:23:44.300505 2574 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 17 00:23:44.430398 kubelet[2574]: E0417 00:23:44.430244 2574 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost"
Apr 17 00:23:44.454998 containerd[1581]: time="2026-04-17T00:23:44.454465451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"36bba164c7148332d3a8777c2d4ad9319ea5cc96f8e2109e2c56c3aa10d005b6\""
Apr 17 00:23:44.648369 kubelet[2574]: E0417 00:23:44.648166 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:23:44.844560 containerd[1581]: time="2026-04-17T00:23:44.841489852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"430b80e19fea9fc43f85be5069948beafc3ce264a1ea9887c627a3b1baa86f8b\""
Apr 17 00:23:44.852873 containerd[1581]: time="2026-04-17T00:23:44.852784786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e54c01254a8f7ce80e4d0140bee4bbdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5180993f8b45980db2d929842cd7295fba9194d13d6c0abe9174bb4cb4212c4\""
Apr 17 00:23:44.902993 kubelet[2574]: E0417 00:23:44.901198 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:23:44.905869 kubelet[2574]: E0417 00:23:44.905293 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:23:45.102984 containerd[1581]: time="2026-04-17T00:23:45.102242836Z" level=info msg="CreateContainer within sandbox \"36bba164c7148332d3a8777c2d4ad9319ea5cc96f8e2109e2c56c3aa10d005b6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 17 00:23:45.290704 containerd[1581]: time="2026-04-17T00:23:45.286585841Z" level=info msg="CreateContainer within sandbox \"b5180993f8b45980db2d929842cd7295fba9194d13d6c0abe9174bb4cb4212c4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 17 00:23:45.401160 containerd[1581]: time="2026-04-17T00:23:45.400858127Z" level=info msg="CreateContainer within sandbox \"430b80e19fea9fc43f85be5069948beafc3ce264a1ea9887c627a3b1baa86f8b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 17 00:23:45.479298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3147803354.mount: Deactivated successfully.
Apr 17 00:23:45.511208 containerd[1581]: time="2026-04-17T00:23:45.510108281Z" level=info msg="Container 14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542: CDI devices from CRI Config.CDIDevices: []"
Apr 17 00:23:45.530886 containerd[1581]: time="2026-04-17T00:23:45.530487721Z" level=info msg="Container 201de210f214b753ecf1fba5fdc228a21cc69aba48050e1e5481bd2f4b241f5c: CDI devices from CRI Config.CDIDevices: []"
Apr 17 00:23:45.677577 containerd[1581]: time="2026-04-17T00:23:45.677192454Z" level=info msg="CreateContainer within sandbox \"b5180993f8b45980db2d929842cd7295fba9194d13d6c0abe9174bb4cb4212c4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"201de210f214b753ecf1fba5fdc228a21cc69aba48050e1e5481bd2f4b241f5c\""
Apr 17 00:23:45.692919 containerd[1581]: time="2026-04-17T00:23:45.692409910Z" level=info msg="StartContainer for \"201de210f214b753ecf1fba5fdc228a21cc69aba48050e1e5481bd2f4b241f5c\""
Apr 17 00:23:45.830441 containerd[1581]: time="2026-04-17T00:23:45.829650275Z" level=info msg="connecting to shim 201de210f214b753ecf1fba5fdc228a21cc69aba48050e1e5481bd2f4b241f5c" address="unix:///run/containerd/s/fbcb292c55b3ea2b85f4523ae2012a577477515f94eaee1c5316e632c16c7f6d" protocol=ttrpc version=3
Apr 17 00:23:46.078286 containerd[1581]: time="2026-04-17T00:23:46.064362005Z" level=info msg="CreateContainer within sandbox \"36bba164c7148332d3a8777c2d4ad9319ea5cc96f8e2109e2c56c3aa10d005b6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542\""
Apr 17 00:23:46.446583 kubelet[2574]: E0417 00:23:46.390492 2574 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6fd1f5f7e8331 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 00:23:34.232179505 +0000 UTC m=+12.659881885,LastTimestamp:2026-04-17 00:23:34.232179505 +0000 UTC m=+12.659881885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 17 00:23:46.469382 containerd[1581]: time="2026-04-17T00:23:46.468686064Z" level=info msg="StartContainer for \"14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542\""
Apr 17 00:23:46.525975 containerd[1581]: time="2026-04-17T00:23:46.519621039Z" level=info msg="Container 87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349: CDI devices from CRI Config.CDIDevices: []"
Apr 17 00:23:46.519989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1495638448.mount: Deactivated successfully.
Apr 17 00:23:46.635162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2657342656.mount: Deactivated successfully.
Apr 17 00:23:46.635919 containerd[1581]: time="2026-04-17T00:23:46.635888174Z" level=info msg="connecting to shim 14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542" address="unix:///run/containerd/s/1fd8616eb94714c1343e38db73f43c8dc81fb060bb133b4d75aee2a1063347ae" protocol=ttrpc version=3
Apr 17 00:23:46.667700 containerd[1581]: time="2026-04-17T00:23:46.667540200Z" level=info msg="CreateContainer within sandbox \"430b80e19fea9fc43f85be5069948beafc3ce264a1ea9887c627a3b1baa86f8b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349\""
Apr 17 00:23:46.671859 containerd[1581]: time="2026-04-17T00:23:46.669896905Z" level=info msg="StartContainer for \"87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349\""
Apr 17 00:23:46.800831 systemd[1]: Started cri-containerd-201de210f214b753ecf1fba5fdc228a21cc69aba48050e1e5481bd2f4b241f5c.scope - libcontainer container 201de210f214b753ecf1fba5fdc228a21cc69aba48050e1e5481bd2f4b241f5c.
Apr 17 00:23:46.804949 containerd[1581]: time="2026-04-17T00:23:46.804916063Z" level=info msg="connecting to shim 87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349" address="unix:///run/containerd/s/86aa2eab26b6677f4e45080413338db26ee4889c5f13e2d0c1dda527286a34b1" protocol=ttrpc version=3
Apr 17 00:23:46.835985 systemd[1]: Started cri-containerd-14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542.scope - libcontainer container 14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542.
Apr 17 00:23:47.045197 systemd[1]: Started cri-containerd-87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349.scope - libcontainer container 87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349.
Apr 17 00:23:47.286479 kubelet[2574]: E0417 00:23:47.285548 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 00:23:47.315200 containerd[1581]: time="2026-04-17T00:23:47.313643563Z" level=info msg="StartContainer for \"201de210f214b753ecf1fba5fdc228a21cc69aba48050e1e5481bd2f4b241f5c\" returns successfully"
Apr 17 00:23:47.894330 kubelet[2574]: E0417 00:23:47.888849 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="7s"
Apr 17 00:23:48.590319 containerd[1581]: time="2026-04-17T00:23:48.589742663Z" level=info msg="StartContainer for \"14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542\" returns successfully"
Apr 17 00:23:50.097724 containerd[1581]: time="2026-04-17T00:23:50.095938911Z" level=info msg="StartContainer for \"87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349\" returns successfully"
Apr 17 00:23:51.367648 kubelet[2574]: I0417 00:23:51.365838 2574 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 17 00:23:51.689780 kubelet[2574]: E0417 00:23:51.663545 2574 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 00:23:52.021297 kubelet[2574]: E0417 00:23:51.889527 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:23:54.350575 kubelet[2574]: E0417 00:23:54.347521 2574 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 00:23:54.514576 kubelet[2574]: E0417 00:23:54.361602 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:23:57.479477 kubelet[2574]: E0417 00:23:57.468864 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 00:23:57.763652 kubelet[2574]: E0417 00:23:57.711015 2574 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 00:23:57.763652 kubelet[2574]: E0417 00:23:57.756539 2574 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 00:23:57.763652 kubelet[2574]: E0417 00:23:57.756571 2574 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 00:23:57.763652 kubelet[2574]: E0417 00:23:57.759400 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:23:57.763652 kubelet[2574]: E0417 00:23:57.759420 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:23:57.763652 kubelet[2574]: E0417 00:23:57.759421 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:24:00.192778 kubelet[2574]: E0417 00:24:00.005245 2574 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 00:24:00.718030 kubelet[2574]: E0417 00:24:00.580650 2574 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 00:24:00.785853 kubelet[2574]: E0417 00:24:00.782463 2574 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 00:24:00.857486 kubelet[2574]: E0417 00:24:00.849005 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:24:01.049385 kubelet[2574]: E0417 00:24:01.018515 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:24:01.834356 kubelet[2574]: E0417 00:24:01.816655 2574 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 17 00:24:01.999454 kubelet[2574]: E0417 00:24:01.992754 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 00:24:03.112398 kubelet[2574]: E0417 00:24:03.108597 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 17 00:24:03.112398 kubelet[2574]: E0417 00:24:03.112288 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 00:24:03.887465 kubelet[2574]: E0417 00:24:03.884329 2574 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 00:24:04.005664 kubelet[2574]: E0417 00:24:03.892352 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:24:04.577839 kubelet[2574]: E0417 00:24:04.575447 2574 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 00:24:04.849279 kubelet[2574]: E0417 00:24:04.817712 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:24:05.318574 kubelet[2574]: E0417 00:24:05.294345 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 17 00:24:05.586479 kubelet[2574]: E0417 00:24:05.573096 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 00:24:06.950895 kubelet[2574]: E0417 00:24:06.792643 2574 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6fd1f5f7e8331 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 00:23:34.232179505 +0000 UTC m=+12.659881885,LastTimestamp:2026-04-17 00:23:34.232179505 +0000 UTC m=+12.659881885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 17 00:24:07.315560 kubelet[2574]: E0417 00:24:07.284692 2574 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 00:24:07.315560 kubelet[2574]: E0417 00:24:07.287390 2574 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 00:24:07.943999 kubelet[2574]: E0417 00:24:07.936553 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 00:24:07.975897 kubelet[2574]: E0417 00:24:07.951227 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:24:08.142457 kubelet[2574]: E0417 00:24:07.984874 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:24:10.308811 kubelet[2574]: I0417 00:24:10.301503 2574 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 17 00:24:11.109294 kubelet[2574]: E0417 00:24:11.103695 2574 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 00:24:11.120455 kubelet[2574]: E0417 00:24:11.110639 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:24:17.970545 kubelet[2574]: E0417 00:24:17.968467 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 00:24:21.135877 kubelet[2574]: E0417 00:24:21.051027 2574 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 17 00:24:22.401537 kubelet[2574]: E0417 00:24:22.399980 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 17 00:24:27.308977 kubelet[2574]: E0417 00:24:27.306936 2574 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6fd1f5f7e8331 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 00:23:34.232179505 +0000 UTC m=+12.659881885,LastTimestamp:2026-04-17 00:23:34.232179505 +0000 UTC m=+12.659881885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 17 00:24:28.014443 kubelet[2574]: E0417 00:24:28.006870 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 00:24:28.119522 kubelet[2574]: E0417 00:24:28.116266 2574 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 00:24:28.243725 kubelet[2574]: E0417 00:24:28.241538 2574 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 00:24:28.576601 kubelet[2574]: I0417 00:24:28.575697 2574 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 17 00:24:31.243820 kubelet[2574]: E0417 00:24:31.241506 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 00:24:31.918552 kubelet[2574]: E0417 00:24:31.917814 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 00:24:32.782598 kubelet[2574]: E0417 00:24:32.770605 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 00:24:38.029031 kubelet[2574]: E0417 00:24:38.011729 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 17 00:24:38.057456 kubelet[2574]: E0417 00:24:38.057336 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 00:24:38.929447 kubelet[2574]: E0417 00:24:38.925313 2574 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 17 00:24:39.735663 kubelet[2574]: E0417 00:24:39.732024 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 17 00:24:46.707234 kubelet[2574]: I0417 00:24:46.706729 2574 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 17 00:24:47.494575 kubelet[2574]: E0417 00:24:47.467250 2574 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6fd1f5f7e8331 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 00:23:34.232179505 +0000 UTC m=+12.659881885,LastTimestamp:2026-04-17 00:23:34.232179505 +0000 UTC m=+12.659881885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 17 00:24:48.066423 kubelet[2574]: E0417 00:24:48.064823 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 00:24:50.546681 kubelet[2574]: E0417 00:24:50.507622 2574 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 17 00:24:54.859459 kubelet[2574]: I0417 00:24:54.515871 2574 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 17 00:24:55.315469 kubelet[2574]: E0417 00:24:54.865903 2574 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 17 00:24:58.177013 kubelet[2574]: E0417 00:24:58.168921 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 00:25:02.549486 kubelet[2574]: E0417 00:25:02.548165 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 00:25:02.815573 kubelet[2574]: E0417 00:25:02.698163 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 00:25:03.089449 kubelet[2574]: E0417 00:25:03.072674 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 00:25:03.298382 kubelet[2574]: E0417 00:25:03.293699 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 00:25:03.790483 kubelet[2574]: E0417 00:25:03.599980 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 00:25:03.987457 kubelet[2574]: E0417 00:25:03.983446 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 00:25:04.305009 kubelet[2574]: E0417 00:25:04.299866 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 00:25:04.549624 kubelet[2574]: E0417 00:25:04.541840 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 00:25:04.780163 kubelet[2574]: E0417 00:25:04.750567 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 00:25:05.335843 kubelet[2574]: E0417 00:25:05.302964 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 00:25:06.349393 kubelet[2574]: E0417 00:25:06.288815 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 00:25:07.652489 kubelet[2574]: E0417 00:25:07.642209 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 00:25:08.302435 kubelet[2574]: E0417 00:25:08.300893 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 00:25:09.000604 kubelet[2574]: E0417 00:25:08.985001 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 00:25:10.457212 kubelet[2574]: E0417 00:25:10.453471 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 00:25:12.126616 kubelet[2574]: E0417 00:25:12.096726 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 00:25:18.701710 kubelet[2574]: E0417 00:25:16.247784 2574 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 17 00:25:29.688263 kubelet[2574]: E0417 00:25:29.592217 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 00:25:53.885762 kubelet[2574]: E0417 00:25:49.564802 2574 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down"
Apr 17 00:26:09.174484 kubelet[2574]: E0417 00:26:04.599860 2574 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down"
Apr 17 00:26:17.509142 kubelet[2574]: E0417 00:26:14.116742 2574 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down"
Apr 17 00:26:19.439955 kubelet[2574]: E0417 00:26:18.251524 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 00:26:33.288343 kubelet[2574]: E0417 00:26:29.445353 2574 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down"
Apr 17 00:26:42.041385 kubelet[2574]: E0417 00:26:35.846598 2574 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 17 00:26:55.995633 kubelet[2574]: E0417 00:26:52.190255 2574 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down"
Apr 17 00:27:05.694772 kubelet[2574]: E0417 00:27:05.573372 2574 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down"
Apr 17 00:27:16.174785 kubelet[2574]: E0417 00:27:16.171765 2574 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.6:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 17 00:27:19.215607 kubelet[2574]: E0417 00:27:19.183272 2574 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down"
Apr 17 00:27:19.978980 kubelet[2574]: E0417 00:27:14.574542 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 00:27:30.292588 kubelet[2574]: E0417 00:27:30.148692 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349"
Apr 17 00:27:31.904531 kubelet[2574]: E0417 00:27:26.877293 2574 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 17 00:27:36.356014 kubelet[2574]: E0417 00:27:36.349169 2574 kubelet.go:2997] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 17 00:27:37.804256 kubelet[2574]: E0417 00:27:32.865828 2574 container_log_manager.go:230] "Failed to get container status" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" worker=1 containerID="87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349"
Apr 17 00:27:40.790475 kubelet[2574]: E0417 00:27:34.469985 2574 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down"
Apr 17 00:27:45.147982 kubelet[2574]: E0417 00:27:45.146439 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 00:27:51.051751 kubelet[2574]: E0417 00:27:50.970946 2574 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down"
Apr 17 00:27:52.902706 kubelet[2574]: E0417 00:27:51.784273 2574 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 17 00:28:04.481851 kubelet[2574]: E0417 00:28:04.479763 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 00:28:08.195183 kubelet[2574]: E0417 00:28:05.696705 2574 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m0.058019417s ago; threshold is 3m0s]"
Apr 17 00:28:08.959197 kubelet[2574]: E0417 00:28:07.919641 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 00:28:09.508610 kubelet[2574]: E0417 00:28:09.380894 2574 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.6:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded"
Apr 17 00:28:11.681794 kubelet[2574]: E0417 00:28:07.186682 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout - error from a previous attempt: http2: client connection lost" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 17 00:28:13.087739 kubelet[2574]: E0417 00:28:13.077505 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout - error from a previous attempt: http2: client connection lost" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 00:28:14.424887 kubelet[2574]: E0417 00:28:04.351902 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout - error from a previous attempt: http2: client connection lost" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 00:28:18.132754 kubelet[2574]: E0417 00:28:16.615411 2574 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m14.405872992s ago; threshold is 3m0s]"
Apr 17 00:28:22.211591 kubelet[2574]: E0417 00:28:22.201162 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 00:28:22.976784 kubelet[2574]: E0417 00:28:02.816821 2574 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.6:6443/api/v1/namespaces/default/events/localhost.18a6fd1fa7d4ec20\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6fd1fa7d4ec20 default 46 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 00:23:35 +0000 UTC,LastTimestamp:2026-04-17 00:23:35.764079581 +0000 UTC
m=+14.191781962,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 00:28:31.400565 kubelet[2574]: E0417 00:28:30.213478 2574 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m15.458269217s ago; threshold is 3m0s]" Apr 17 00:28:36.001376 kubelet[2574]: E0417 00:28:32.043988 2574 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 17 00:28:36.442973 kubelet[2574]: E0417 00:28:36.439449 2574 kubelet.go:2452] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 3m26.237723055s ago; threshold is 3m0s" Apr 17 00:28:37.710862 kubelet[2574]: E0417 00:28:37.697614 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 00:28:44.172624 kubelet[2574]: E0417 00:28:41.482702 2574 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.6:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 17 00:28:44.891463 containerd[1581]: time="2026-04-17T00:28:44.740029761Z" level=warning msg="container event discarded" container=36bba164c7148332d3a8777c2d4ad9319ea5cc96f8e2109e2c56c3aa10d005b6 type=CONTAINER_CREATED_EVENT Apr 17 00:28:45.557695 containerd[1581]: time="2026-04-17T00:28:45.080676075Z" level=warning msg="container event discarded" container=36bba164c7148332d3a8777c2d4ad9319ea5cc96f8e2109e2c56c3aa10d005b6 type=CONTAINER_STARTED_EVENT Apr 17 00:28:47.220649 kubelet[2574]: E0417 00:28:45.285451 2574 kubelet.go:2452] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 3m33.03047501s ago; threshold is 3m0s" Apr 17 
00:28:47.309835 containerd[1581]: time="2026-04-17T00:28:47.277029042Z" level=warning msg="container event discarded" container=430b80e19fea9fc43f85be5069948beafc3ce264a1ea9887c627a3b1baa86f8b type=CONTAINER_CREATED_EVENT Apr 17 00:28:47.309835 containerd[1581]: time="2026-04-17T00:28:47.282709727Z" level=warning msg="container event discarded" container=430b80e19fea9fc43f85be5069948beafc3ce264a1ea9887c627a3b1baa86f8b type=CONTAINER_STARTED_EVENT Apr 17 00:28:47.309835 containerd[1581]: time="2026-04-17T00:28:47.287110966Z" level=warning msg="container event discarded" container=b5180993f8b45980db2d929842cd7295fba9194d13d6c0abe9174bb4cb4212c4 type=CONTAINER_CREATED_EVENT Apr 17 00:28:47.309835 containerd[1581]: time="2026-04-17T00:28:47.287886376Z" level=warning msg="container event discarded" container=b5180993f8b45980db2d929842cd7295fba9194d13d6c0abe9174bb4cb4212c4 type=CONTAINER_STARTED_EVENT Apr 17 00:28:47.309835 containerd[1581]: time="2026-04-17T00:28:47.287902302Z" level=warning msg="container event discarded" container=201de210f214b753ecf1fba5fdc228a21cc69aba48050e1e5481bd2f4b241f5c type=CONTAINER_CREATED_EVENT Apr 17 00:28:47.309835 containerd[1581]: time="2026-04-17T00:28:47.287970896Z" level=warning msg="container event discarded" container=14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542 type=CONTAINER_CREATED_EVENT Apr 17 00:28:47.309835 containerd[1581]: time="2026-04-17T00:28:47.288025875Z" level=warning msg="container event discarded" container=87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349 type=CONTAINER_CREATED_EVENT Apr 17 00:28:48.023009 containerd[1581]: time="2026-04-17T00:28:47.765446672Z" level=warning msg="container event discarded" container=201de210f214b753ecf1fba5fdc228a21cc69aba48050e1e5481bd2f4b241f5c type=CONTAINER_STARTED_EVENT Apr 17 00:28:48.701712 containerd[1581]: time="2026-04-17T00:28:48.483018044Z" level=warning msg="container event discarded" 
container=14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542 type=CONTAINER_STARTED_EVENT Apr 17 00:28:49.176667 kubelet[2574]: E0417 00:28:40.799537 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 00:28:50.191394 containerd[1581]: time="2026-04-17T00:28:49.978590387Z" level=warning msg="container event discarded" container=87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349 type=CONTAINER_STARTED_EVENT Apr 17 00:28:50.913339 kubelet[2574]: E0417 00:28:47.948584 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 00:28:56.811411 kubelet[2574]: E0417 00:28:53.903511 2574 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 00:28:58.744768 kubelet[2574]: I0417 00:28:57.861178 2574 request.go:752] "Waited before sending request" logger="kubernetes.io/kube-apiserver-client-kubelet" delay="1.260911792s" reason="retries: 2, retry-after: 1s - retry-reason: due to retryable error, error: Get \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dcsr-fpdcz&resourceVersion=74&timeout=9m24s&timeoutSeconds=564&watch=true\": net/http: TLS handshake timeout" verb="GET" URL="https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dcsr-fpdcz&resourceVersion=74&timeout=9m24s&timeoutSeconds=564&watch=true" Apr 17 00:29:03.186921 kubelet[2574]: E0417 00:28:58.585634 2574 controller.go:195] "Failed to update lease" err="Put 
\"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 17 00:29:04.610024 kubelet[2574]: E0417 00:29:02.817643 2574 kubelet.go:2452] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 3m47.621871282s ago; threshold is 3m0s" Apr 17 00:29:08.940756 kubelet[2574]: E0417 00:29:08.916199 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 00:29:23.200487 kubelet[2574]: E0417 00:29:23.197643 2574 kubelet.go:2452] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 4m0.454094163s ago; threshold is 3m0s" Apr 17 00:29:30.949847 kubelet[2574]: E0417 00:29:30.877568 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 00:29:35.378224 kubelet[2574]: E0417 00:29:25.214129 2574 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.6:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 17 00:29:40.009931 kubelet[2574]: E0417 00:29:39.982858 2574 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count" Apr 17 00:29:40.533705 kubelet[2574]: E0417 00:29:40.519598 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 00:29:46.908372 kubelet[2574]: E0417 00:29:43.987748 2574 kubelet.go:2452] "Skipping pod synchronization" 
err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m29.588825186s ago; threshold is 3m0s]" Apr 17 00:29:48.283841 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Apr 17 00:29:54.396296 kubelet[2574]: E0417 00:29:21.444805 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 00:29:59.303624 systemd-tmpfiles[2888]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 17 00:29:59.303656 systemd-tmpfiles[2888]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 17 00:29:59.531602 systemd-tmpfiles[2888]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 17 00:30:00.004744 systemd-tmpfiles[2888]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 17 00:30:00.407032 systemd-tmpfiles[2888]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 17 00:30:00.752541 systemd-tmpfiles[2888]: ACLs are not supported, ignoring. Apr 17 00:30:00.875880 systemd-tmpfiles[2888]: ACLs are not supported, ignoring. Apr 17 00:30:03.455516 systemd-tmpfiles[2888]: Detected autofs mount point /boot during canonicalization of boot. 
Apr 17 00:30:03.504657 systemd-tmpfiles[2888]: Skipping /boot Apr 17 00:30:05.028615 kubelet[2574]: E0417 00:30:05.028130 2574 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 17 00:30:05.028630 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Apr 17 00:30:05.058704 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Apr 17 00:30:05.357498 systemd[1]: systemd-tmpfiles-clean.service: Consumed 4.677s CPU time, 4.1M memory peak. Apr 17 00:30:06.515795 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. Apr 17 00:30:13.878929 kubelet[2574]: E0417 00:30:08.593693 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 00:30:16.530632 kubelet[2574]: I0417 00:30:16.506686 2574 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Apr 17 00:30:20.483645 kubelet[2574]: E0417 00:30:20.473431 2574 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 00:30:25.315577 kubelet[2574]: E0417 00:30:22.292868 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 00:30:26.118000 kubelet[2574]: E0417 00:30:10.790541 2574 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.6:6443/api/v1/namespaces/default/events/localhost.18a6fd1fa7d4ec20\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6fd1fa7d4ec20 default 46 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 00:23:35 +0000 UTC,LastTimestamp:2026-04-17 00:23:35.764079581 +0000 UTC m=+14.191781962,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 00:30:27.160801 kubelet[2574]: E0417 00:30:22.593628 2574 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 17 00:30:27.547601 kubelet[2574]: W0417 00:30:25.863705 2574 watcher.go:93] Error while processing event ("/sys/fs/cgroup/system.slice/systemd-tmpfiles-clean.service": 0x40000100 == IN_CREATE|IN_ISDIR): readdirent /sys/fs/cgroup/system.slice/systemd-tmpfiles-clean.service: no such file or directory Apr 17 00:30:29.678715 kubelet[2574]: E0417 00:30:22.751428 2574 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m44.87385154s ago; threshold is 3m0s]" Apr 17 00:30:31.972844 kubelet[2574]: E0417 00:30:30.041228 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 00:30:34.795312 kubelet[2574]: E0417 00:30:34.381784 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 00:30:38.996493 kubelet[2574]: E0417 00:30:35.019162 2574 reflector.go:205] "Failed to watch" err="failed to 
list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 00:30:41.562866 kubelet[2574]: E0417 00:30:39.989757 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 00:30:49.163803 kubelet[2574]: E0417 00:30:46.680135 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="200ms" Apr 17 00:30:50.225768 kubelet[2574]: E0417 00:30:50.201884 2574 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.6:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 17 00:30:53.880802 kubelet[2574]: E0417 00:30:53.865758 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 00:30:55.581626 kubelet[2574]: E0417 00:30:54.978823 2574 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 17 00:30:56.838788 kubelet[2574]: E0417 00:30:56.366465 2574 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 17 00:31:00.069602 kubelet[2574]: E0417 00:31:00.057480 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 00:31:01.416626 kubelet[2574]: E0417 00:31:01.415635 2574 kubelet.go:2452] "Skipping pod synchronization" 
err="container runtime is down" Apr 17 00:31:07.434156 kubelet[2574]: E0417 00:31:07.432605 2574 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.6:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 17 00:31:09.610013 kubelet[2574]: E0417 00:31:09.579381 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="400ms" Apr 17 00:31:11.446628 kubelet[2574]: E0417 00:31:10.480779 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 00:31:14.493400 kubelet[2574]: E0417 00:31:12.783878 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 00:31:20.789805 kubelet[2574]: E0417 00:31:20.765975 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 00:31:23.763507 kubelet[2574]: E0417 00:31:23.754932 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 00:31:27.877134 kubelet[2574]: E0417 00:31:26.556578 2574 kubelet.go:2452] "Skipping pod synchronization" 
err="PLEG is not healthy: pleg was last seen active 3m1.933665335s ago; threshold is 3m0s" Apr 17 00:31:29.689989 kubelet[2574]: E0417 00:31:28.869593 2574 kubelet.go:2452] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 3m6.013792665s ago; threshold is 3m0s" Apr 17 00:31:30.887700 kubelet[2574]: E0417 00:31:28.866749 2574 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.6:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 17 00:31:31.966059 kubelet[2574]: E0417 00:31:31.847672 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 00:31:34.268948 kubelet[2574]: E0417 00:31:33.006328 2574 kubelet.go:2452] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 3m8.412909629s ago; threshold is 3m0s" Apr 17 00:31:34.989305 kubelet[2574]: E0417 00:31:32.789766 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="800ms" Apr 17 00:31:37.884593 kubelet[2574]: E0417 00:31:36.878669 2574 kubelet.go:2452] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 3m12.054693991s ago; threshold is 3m0s" Apr 17 00:31:42.808478 kubelet[2574]: E0417 00:31:40.092647 2574 kubelet.go:2452] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 3m17.212227109s ago; threshold is 3m0s" Apr 17 00:31:46.561563 kubelet[2574]: E0417 00:31:29.094983 2574 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.6:6443/api/v1/namespaces/default/events/localhost.18a6fd1fa7d4ec20\": net/http: TLS handshake 
timeout" event="&Event{ObjectMeta:{localhost.18a6fd1fa7d4ec20 default 46 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 00:23:35 +0000 UTC,LastTimestamp:2026-04-17 00:23:35.764079581 +0000 UTC m=+14.191781962,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 00:31:46.982913 kubelet[2574]: E0417 00:31:46.559882 2574 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.6:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 17 00:31:50.831331 kubelet[2574]: E0417 00:31:48.614889 2574 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count" Apr 17 00:31:53.419662 kubelet[2574]: E0417 00:31:53.274721 2574 kubelet.go:2452] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 3m23.181403121s ago; threshold is 3m0s" Apr 17 00:31:53.665530 kubelet[2574]: E0417 00:31:53.566953 2574 kubelet_node_status.go:398] "Node not becoming ready in time after startup" Apr 17 00:31:58.772921 kubelet[2574]: E0417 00:31:56.805490 2574 kubelet.go:2452] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 3m33.919573453s ago; threshold is 3m0s" Apr 17 00:31:59.816663 kubelet[2574]: E0417 00:31:59.350905 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Apr 17 
00:32:02.154692 kubelet[2574]: E0417 00:32:02.148919 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 00:32:03.351268 kubelet[2574]: E0417 00:32:03.348918 2574 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 17 00:32:05.535271 kubelet[2574]: E0417 00:32:05.529114 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 00:32:07.621919 kubelet[2574]: I0417 00:32:02.411886 2574 request.go:752] "Waited before sending request" delay="2.605854906s" reason="client-side throttling, not priority and fairness" verb="PATCH" URL="https://10.0.0.6:6443/api/v1/namespaces/default/events/localhost.18a6fd1fa7d4ec20" Apr 17 00:32:08.866672 kubelet[2574]: E0417 00:32:00.079801 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 00:32:11.396374 kubelet[2574]: E0417 00:32:08.619295 2574 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 17 00:32:15.059502 kubelet[2574]: E0417 00:32:15.052647 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 00:32:15.295431 kubelet[2574]: E0417 00:32:14.387817 2574 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 
00:32:21.370620 kubelet[2574]: E0417 00:32:19.574531 2574 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 17 00:32:22.688731 kubelet[2574]: E0417 00:32:22.681896 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 00:32:24.012351 kubelet[2574]: E0417 00:32:23.985386 2574 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.6:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 17 00:32:25.024711 kubelet[2574]: E0417 00:32:24.956768 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="3.2s" Apr 17 00:33:06.488473 kubelet[2574]: E0417 00:33:03.687923 2574 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 17 00:33:18.868628 kubelet[2574]: E0417 00:33:15.215781 2574 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 17 00:33:22.517126 kubelet[2574]: E0417 00:33:07.184714 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 00:33:25.954962 kubelet[2574]: E0417 00:33:25.947775 2574 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.6:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 17 00:33:29.010798 kubelet[2574]: E0417 00:33:22.311024 2574 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 17 00:33:33.837732 kubelet[2574]: 
E0417 00:33:29.617948 2574 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 17 00:33:52.046961 kubelet[2574]: E0417 00:33:47.954753 2574 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m0.332229007s ago; threshold is 3m0s]" Apr 17 00:33:56.856029 kubelet[2574]: E0417 00:33:53.919022 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 00:33:58.509404 kubelet[2574]: E0417 00:33:50.551027 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 00:34:06.519649 kubelet[2574]: E0417 00:34:00.706543 2574 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m21.371301189s ago; threshold is 3m0s]" Apr 17 00:34:07.312501 kubelet[2574]: E0417 00:34:07.060942 2574 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.6:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 17 00:34:09.984363 kubelet[2574]: E0417 00:34:09.978856 2574 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 00:34:17.545638 kubelet[2574]: E0417 00:34:16.089830 2574 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m34.758123332s ago; threshold is 3m0s]" Apr 17 00:34:18.665891 containerd[1581]: 
time="2026-04-17T00:34:18.664905177Z" level=info msg="StopContainer for \"14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542\" with timeout 30 (s)" Apr 17 00:34:19.267902 kubelet[2574]: E0417 00:34:17.483559 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 00:34:20.355410 kubelet[2574]: E0417 00:34:05.990853 2574 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.6:6443/api/v1/namespaces/default/events/localhost.18a6fd1fa7d4ec20\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6fd1fa7d4ec20 default 46 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 00:23:35 +0000 UTC,LastTimestamp:2026-04-17 00:23:35.764079581 +0000 UTC m=+14.191781962,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 00:34:21.959967 containerd[1581]: time="2026-04-17T00:34:21.947478257Z" level=info msg="Stop container \"14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542\" with signal terminated" Apr 17 00:34:24.803332 kubelet[2574]: E0417 00:34:24.761650 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 00:34:26.368188 kubelet[2574]: E0417 00:34:24.408949 2574 kubelet.go:2452] "Skipping pod synchronization" err="PLEG is not healthy: pleg 
was last seen active 3m47.240651629s ago; threshold is 3m0s" Apr 17 00:34:31.277012 kubelet[2574]: E0417 00:34:25.076446 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: i/o timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 00:34:39.518234 kubelet[2574]: E0417 00:34:35.800992 2574 kubelet.go:2452] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 3m54.534888247s ago; threshold is 3m0s" Apr 17 00:34:40.878246 kubelet[2574]: E0417 00:34:40.873722 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 00:34:43.091320 kubelet[2574]: E0417 00:34:38.779007 2574 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.6:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 17 00:34:44.802964 kubelet[2574]: E0417 00:34:40.862765 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="6.4s" Apr 17 00:34:48.508262 kubelet[2574]: E0417 00:34:48.464764 2574 kubelet.go:2452] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 4m9.868582494s ago; threshold is 3m0s" Apr 17 00:34:50.375879 kubelet[2574]: E0417 00:34:45.318952 2574 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count" Apr 17 00:34:54.057617 systemd[1]: cri-containerd-14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542.scope: Deactivated successfully. 
Apr 17 00:34:54.086303 systemd[1]: cri-containerd-14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542.scope: Consumed 50.680s CPU time, 23.6M memory peak. Apr 17 00:34:56.915029 kubelet[2574]: E0417 00:34:56.890020 2574 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m18.199881677s ago; threshold is 3m0s]" Apr 17 00:35:02.006666 containerd[1581]: time="2026-04-17T00:35:02.001398811Z" level=info msg="received container exit event container_id:\"14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542\" id:\"14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542\" pid:2797 exited_at:{seconds:1776386096 nanos:485776408}" Apr 17 00:35:05.710994 kubelet[2574]: E0417 00:35:03.157210 2574 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 17 00:35:07.000651 kubelet[2574]: E0417 00:35:06.655736 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 00:35:07.469358 kubelet[2574]: E0417 00:35:05.881636 2574 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m26.251419044s ago; threshold is 3m0s]" Apr 17 00:35:09.813431 kubelet[2574]: E0417 00:35:09.805792 2574 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 00:35:10.392305 kubelet[2574]: E0417 00:35:10.386018 2574 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 00:35:10.711947 kubelet[2574]: E0417 00:35:10.691151 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 00:35:11.115603 kubelet[2574]: E0417 00:35:11.113949 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 00:35:11.314301 kubelet[2574]: E0417 00:35:10.829639 2574 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.6:6443/api/v1/namespaces/default/events/localhost.18a6fd1fa7d4ec20\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6fd1fa7d4ec20 default 46 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 00:23:35 +0000 UTC,LastTimestamp:2026-04-17 00:23:35.764079581 +0000 UTC m=+14.191781962,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 00:35:11.314301 kubelet[2574]: E0417 00:35:11.309214 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 00:35:11.320432 kubelet[2574]: E0417 00:35:11.315452 2574 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 17 00:35:11.711489 kubelet[2574]: E0417 00:35:11.711171 2574 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 00:35:11.818157 kubelet[2574]: E0417 00:35:11.816247 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 00:35:11.825983 containerd[1581]: time="2026-04-17T00:35:11.815743839Z" level=info msg="StopContainer for \"87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349\" with timeout 30 (s)" Apr 17 00:35:11.902980 containerd[1581]: time="2026-04-17T00:35:11.895285057Z" level=info msg="Stop container \"87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349\" with signal terminated" Apr 17 00:35:12.846773 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542-rootfs.mount: Deactivated successfully. Apr 17 00:35:13.068239 containerd[1581]: time="2026-04-17T00:35:13.048358503Z" level=error msg="ttrpc: received message on inactive stream" stream=35 Apr 17 00:35:13.298989 containerd[1581]: time="2026-04-17T00:35:13.264876548Z" level=error msg="failed to handle container TaskExit event container_id:\"14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542\" id:\"14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542\" pid:2797 exited_at:{seconds:1776386096 nanos:485776408}" error="failed to stop container: failed to delete task: context deadline exceeded" Apr 17 00:35:14.417169 systemd[1]: cri-containerd-87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349.scope: Deactivated successfully. 
Apr 17 00:35:14.439525 systemd[1]: cri-containerd-87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349.scope: Consumed 44.389s CPU time, 24.1M memory peak. Apr 17 00:35:14.514364 containerd[1581]: time="2026-04-17T00:35:14.511953273Z" level=info msg="received container exit event container_id:\"87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349\" id:\"87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349\" pid:2814 exit_status:2 exited_at:{seconds:1776386114 nanos:511199420}" Apr 17 00:35:14.897749 kubelet[2574]: I0417 00:35:14.893535 2574 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 00:35:14.922858 containerd[1581]: time="2026-04-17T00:35:14.893954892Z" level=info msg="TaskExit event container_id:\"14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542\" id:\"14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542\" pid:2797 exited_at:{seconds:1776386096 nanos:485776408}" Apr 17 00:35:15.420348 containerd[1581]: time="2026-04-17T00:35:15.417985075Z" level=info msg="Ensure that container 14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542 in task-service has been cleanup successfully" Apr 17 00:35:15.660978 containerd[1581]: time="2026-04-17T00:35:15.621248656Z" level=info msg="StopContainer for \"14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542\" returns successfully" Apr 17 00:35:15.856338 kubelet[2574]: I0417 00:35:15.852190 2574 apiserver.go:52] "Watching apiserver" Apr 17 00:35:15.911237 kubelet[2574]: E0417 00:35:15.910700 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 00:35:17.506974 kubelet[2574]: I0417 00:35:17.431741 2574 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 00:35:18.067876 kubelet[2574]: E0417 
00:35:18.066462 2574 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 00:35:19.005143 kubelet[2574]: I0417 00:35:19.003570 2574 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 00:35:26.224177 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349-rootfs.mount: Deactivated successfully. Apr 17 00:35:26.757481 kubelet[2574]: I0417 00:35:26.011376 2574 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 00:35:26.896883 containerd[1581]: time="2026-04-17T00:35:25.432825021Z" level=error msg="failed to handle container TaskExit event container_id:\"87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349\" id:\"87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349\" pid:2814 exit_status:2 exited_at:{seconds:1776386114 nanos:511199420}" error="failed to stop container: failed to delete task: context deadline exceeded" Apr 17 00:35:27.074931 containerd[1581]: time="2026-04-17T00:35:26.907338855Z" level=error msg="ttrpc: received message on inactive stream" stream=35 Apr 17 00:35:27.701827 kubelet[2574]: E0417 00:35:27.701432 2574 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 00:35:28.301906 containerd[1581]: time="2026-04-17T00:35:28.220911153Z" level=info msg="CreateContainer within sandbox \"36bba164c7148332d3a8777c2d4ad9319ea5cc96f8e2109e2c56c3aa10d005b6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 17 00:35:28.943353 containerd[1581]: time="2026-04-17T00:35:28.942548103Z" level=info msg="TaskExit event 
container_id:\"87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349\" id:\"87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349\" pid:2814 exit_status:2 exited_at:{seconds:1776386114 nanos:511199420}" Apr 17 00:35:39.209919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3582012891.mount: Deactivated successfully. Apr 17 00:35:39.709756 containerd[1581]: time="2026-04-17T00:35:39.708326932Z" level=error msg="Failed to handle backOff event container_id:\"87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349\" id:\"87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349\" pid:2814 exit_status:2 exited_at:{seconds:1776386114 nanos:511199420} for 87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 17 00:35:40.106513 containerd[1581]: time="2026-04-17T00:35:40.089077467Z" level=error msg="ttrpc: received message on inactive stream" stream=49 Apr 17 00:35:40.768324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3919942521.mount: Deactivated successfully. 
Apr 17 00:35:40.881578 containerd[1581]: time="2026-04-17T00:35:40.874692843Z" level=info msg="Container 6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:35:42.272012 containerd[1581]: time="2026-04-17T00:35:42.240862777Z" level=info msg="TaskExit event container_id:\"87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349\" id:\"87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349\" pid:2814 exit_status:2 exited_at:{seconds:1776386114 nanos:511199420}" Apr 17 00:35:44.115727 kubelet[2574]: E0417 00:35:44.114646 2574 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 00:35:44.160548 kubelet[2574]: E0417 00:35:44.142460 2574 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="21.863s" Apr 17 00:35:44.404794 containerd[1581]: time="2026-04-17T00:35:44.400449127Z" level=info msg="CreateContainer within sandbox \"36bba164c7148332d3a8777c2d4ad9319ea5cc96f8e2109e2c56c3aa10d005b6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690\"" Apr 17 00:35:44.453744 containerd[1581]: time="2026-04-17T00:35:44.453403051Z" level=info msg="StartContainer for \"6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690\"" Apr 17 00:35:44.593144 containerd[1581]: time="2026-04-17T00:35:44.584903231Z" level=info msg="Kill container \"87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349\"" Apr 17 00:35:46.103770 kubelet[2574]: E0417 00:35:46.101514 2574 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.905s" Apr 17 00:35:46.514531 containerd[1581]: time="2026-04-17T00:35:46.506313813Z" level=info msg="connecting to shim 
6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690" address="unix:///run/containerd/s/1fd8616eb94714c1343e38db73f43c8dc81fb060bb133b4d75aee2a1063347ae" protocol=ttrpc version=3 Apr 17 00:35:46.847459 kubelet[2574]: E0417 00:35:46.844576 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 00:35:47.482809 kubelet[2574]: I0417 00:35:47.473939 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=32.473300478 podStartE2EDuration="32.473300478s" podCreationTimestamp="2026-04-17 00:35:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 00:35:44.918330329 +0000 UTC m=+743.346032701" watchObservedRunningTime="2026-04-17 00:35:47.473300478 +0000 UTC m=+745.901002871" Apr 17 00:35:47.506453 kubelet[2574]: I0417 00:35:47.485239 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=29.484793453 podStartE2EDuration="29.484793453s" podCreationTimestamp="2026-04-17 00:35:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 00:35:47.472511949 +0000 UTC m=+745.900214335" watchObservedRunningTime="2026-04-17 00:35:47.484793453 +0000 UTC m=+745.912495838" Apr 17 00:35:47.769797 containerd[1581]: time="2026-04-17T00:35:47.757888627Z" level=info msg="StopContainer for \"87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349\" returns successfully" Apr 17 00:35:47.910651 kubelet[2574]: E0417 00:35:47.904424 2574 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.691s" Apr 17 00:35:47.910651 kubelet[2574]: 
E0417 00:35:47.909556 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 00:35:48.692836 systemd[1]: Started cri-containerd-6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690.scope - libcontainer container 6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690. Apr 17 00:35:49.019511 containerd[1581]: time="2026-04-17T00:35:49.014437816Z" level=info msg="CreateContainer within sandbox \"430b80e19fea9fc43f85be5069948beafc3ce264a1ea9887c627a3b1baa86f8b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 17 00:35:49.438615 kubelet[2574]: E0417 00:35:49.428923 2574 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 00:35:49.683618 containerd[1581]: time="2026-04-17T00:35:49.682360786Z" level=info msg="Container 0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:35:49.894637 containerd[1581]: time="2026-04-17T00:35:49.890203802Z" level=info msg="CreateContainer within sandbox \"430b80e19fea9fc43f85be5069948beafc3ce264a1ea9887c627a3b1baa86f8b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221\"" Apr 17 00:35:49.942430 containerd[1581]: time="2026-04-17T00:35:49.939955277Z" level=info msg="StartContainer for \"0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221\"" Apr 17 00:35:50.025538 containerd[1581]: time="2026-04-17T00:35:50.025245875Z" level=info msg="connecting to shim 0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221" address="unix:///run/containerd/s/86aa2eab26b6677f4e45080413338db26ee4889c5f13e2d0c1dda527286a34b1" protocol=ttrpc version=3 Apr 17 
00:35:50.295528 containerd[1581]: time="2026-04-17T00:35:50.292007368Z" level=info msg="StartContainer for \"6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690\" returns successfully" Apr 17 00:35:51.289571 systemd[1]: Started cri-containerd-0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221.scope - libcontainer container 0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221. Apr 17 00:35:51.785018 kubelet[2574]: E0417 00:35:51.782836 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 00:35:52.223374 kubelet[2574]: I0417 00:35:52.222446 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=25.221938251 podStartE2EDuration="25.221938251s" podCreationTimestamp="2026-04-17 00:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 00:35:48.885987248 +0000 UTC m=+747.313689624" watchObservedRunningTime="2026-04-17 00:35:52.221938251 +0000 UTC m=+750.649640628" Apr 17 00:35:52.420290 containerd[1581]: time="2026-04-17T00:35:52.419016018Z" level=info msg="StartContainer for \"0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221\" returns successfully" Apr 17 00:35:53.746626 kubelet[2574]: E0417 00:35:53.746158 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 00:35:53.754643 kubelet[2574]: E0417 00:35:53.751864 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 00:35:54.522308 kubelet[2574]: E0417 00:35:54.521251 2574 kubelet.go:3012] "Container runtime 
network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 00:35:54.918476 kubelet[2574]: E0417 00:35:54.916479 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 00:35:56.153603 kubelet[2574]: E0417 00:35:56.153160 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 00:35:59.660020 kubelet[2574]: E0417 00:35:59.652860 2574 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 00:36:08.608453 systemd[1]: cri-containerd-0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221.scope: Deactivated successfully. Apr 17 00:36:08.762598 systemd[1]: cri-containerd-0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221.scope: Consumed 2.582s CPU time, 17.7M memory peak. 
Apr 17 00:36:09.212221 containerd[1581]: time="2026-04-17T00:36:09.197357984Z" level=info msg="received container exit event container_id:\"0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221\" id:\"0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221\" pid:3022 exit_status:1 exited_at:{seconds:1776386169 nanos:92444178}" Apr 17 00:36:09.306657 kubelet[2574]: E0417 00:36:09.303779 2574 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 00:36:09.386652 kubelet[2574]: E0417 00:36:09.386419 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 00:36:09.712680 kubelet[2574]: E0417 00:36:09.710166 2574 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.206s" Apr 17 00:36:09.737460 kubelet[2574]: E0417 00:36:09.736211 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 00:36:10.458484 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221-rootfs.mount: Deactivated successfully. 
Apr 17 00:36:10.602367 kubelet[2574]: E0417 00:36:10.600711 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 00:36:11.742020 kubelet[2574]: I0417 00:36:11.741626 2574 scope.go:117] "RemoveContainer" containerID="87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349" Apr 17 00:36:11.770367 kubelet[2574]: I0417 00:36:11.766596 2574 scope.go:117] "RemoveContainer" containerID="0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221" Apr 17 00:36:11.782360 kubelet[2574]: E0417 00:36:11.782006 2574 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 00:36:11.915188 containerd[1581]: time="2026-04-17T00:36:11.914488023Z" level=info msg="RemoveContainer for \"87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349\"" Apr 17 00:36:11.977498 containerd[1581]: time="2026-04-17T00:36:11.976668281Z" level=info msg="CreateContainer within sandbox \"430b80e19fea9fc43f85be5069948beafc3ce264a1ea9887c627a3b1baa86f8b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}" Apr 17 00:36:11.990306 containerd[1581]: time="2026-04-17T00:36:11.988540294Z" level=info msg="RemoveContainer for \"87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349\" returns successfully" Apr 17 00:36:12.062933 containerd[1581]: time="2026-04-17T00:36:12.061312095Z" level=info msg="Container 371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:36:12.133927 kubelet[2574]: I0417 00:36:12.132815 2574 scope.go:117] "RemoveContainer" containerID="0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221" Apr 17 00:36:12.151144 containerd[1581]: time="2026-04-17T00:36:12.149946078Z" level=info msg="CreateContainer within 
sandbox \"430b80e19fea9fc43f85be5069948beafc3ce264a1ea9887c627a3b1baa86f8b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\"" Apr 17 00:36:12.186125 containerd[1581]: time="2026-04-17T00:36:12.185453922Z" level=info msg="StartContainer for \"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\"" Apr 17 00:36:12.242873 containerd[1581]: time="2026-04-17T00:36:12.242408010Z" level=info msg="connecting to shim 371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f" address="unix:///run/containerd/s/86aa2eab26b6677f4e45080413338db26ee4889c5f13e2d0c1dda527286a34b1" protocol=ttrpc version=3 Apr 17 00:36:12.245886 containerd[1581]: time="2026-04-17T00:36:12.245789736Z" level=info msg="RemoveContainer for \"0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221\"" Apr 17 00:36:12.506251 systemd[1]: Started cri-containerd-371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f.scope - libcontainer container 371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f. Apr 17 00:36:12.768229 systemd[1]: Reload requested from client PID 3080 ('systemctl') (unit session-7.scope)... Apr 17 00:36:12.773396 systemd[1]: Reloading... 
Apr 17 00:36:13.085945 containerd[1581]: time="2026-04-17T00:36:13.084948063Z" level=error msg="ContainerStatus for \"0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221\": not found" Apr 17 00:36:13.124310 kubelet[2574]: E0417 00:36:13.123916 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221\": not found" containerID="0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221" Apr 17 00:36:13.130404 containerd[1581]: time="2026-04-17T00:36:13.124989400Z" level=info msg="RemoveContainer for \"0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221\" returns successfully" Apr 17 00:36:13.156853 containerd[1581]: time="2026-04-17T00:36:13.156666323Z" level=info msg="StartContainer for \"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\" returns successfully" Apr 17 00:36:13.331341 zram_generator::config[3140]: No configuration found. Apr 17 00:36:14.125224 systemd[1]: Reloading finished in 1348 ms. Apr 17 00:36:14.308632 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 00:36:14.440260 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 00:36:14.441470 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 00:36:14.441874 systemd[1]: kubelet.service: Consumed 6min 52.648s CPU time, 135.4M memory peak. Apr 17 00:36:14.463795 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 00:36:16.090508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 00:36:16.114261 (kubelet)[3181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 00:36:16.554203 kubelet[3181]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 00:36:16.554203 kubelet[3181]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 00:36:16.554203 kubelet[3181]: I0417 00:36:16.553478 3181 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 00:36:16.600136 kubelet[3181]: I0417 00:36:16.599261 3181 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 17 00:36:16.600136 kubelet[3181]: I0417 00:36:16.599501 3181 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 00:36:16.600136 kubelet[3181]: I0417 00:36:16.599982 3181 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 00:36:16.600136 kubelet[3181]: I0417 00:36:16.600119 3181 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 00:36:16.603646 kubelet[3181]: I0417 00:36:16.600831 3181 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 17 00:36:16.604312 kubelet[3181]: I0417 00:36:16.604173 3181 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 17 00:36:16.608243 kubelet[3181]: I0417 00:36:16.608130 3181 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 17 00:36:16.841958 kubelet[3181]: I0417 00:36:16.840193 3181 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 17 00:36:16.857792 kubelet[3181]: I0417 00:36:16.857590 3181 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 17 00:36:16.858158 kubelet[3181]: I0417 00:36:16.858124 3181 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 00:36:16.871376 kubelet[3181]: I0417 00:36:16.860957 3181 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 17 00:36:16.875514 kubelet[3181]: I0417 00:36:16.872648 3181 topology_manager.go:138] "Creating topology manager with none policy"
Apr 17 00:36:16.875514 kubelet[3181]: I0417 00:36:16.873024 3181 container_manager_linux.go:306] "Creating device plugin manager"
Apr 17 00:36:16.875514 kubelet[3181]: I0417 00:36:16.873933 3181 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 17 00:36:16.891769 kubelet[3181]: I0417 00:36:16.890942 3181 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 00:36:16.906782 kubelet[3181]: I0417 00:36:16.905927 3181 kubelet.go:475] "Attempting to sync node with API server"
Apr 17 00:36:16.913151 kubelet[3181]: I0417 00:36:16.909016 3181 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 17 00:36:16.913151 kubelet[3181]: I0417 00:36:16.912935 3181 kubelet.go:387] "Adding apiserver pod source"
Apr 17 00:36:16.913444 kubelet[3181]: I0417 00:36:16.913204 3181 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 17 00:36:16.962696 kubelet[3181]: I0417 00:36:16.961948 3181 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Apr 17 00:36:16.971254 kubelet[3181]: I0417 00:36:16.970876 3181 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 17 00:36:16.971254 kubelet[3181]: I0417 00:36:16.971168 3181 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 17 00:36:16.993228 kubelet[3181]: I0417 00:36:16.991631 3181 server.go:1262] "Started kubelet"
Apr 17 00:36:16.994793 kubelet[3181]: I0417 00:36:16.994775 3181 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 17 00:36:17.002390 kubelet[3181]: I0417 00:36:17.001852 3181 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 17 00:36:17.008235 kubelet[3181]: I0417 00:36:17.008214 3181 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 17 00:36:17.022311 kubelet[3181]: I0417 00:36:17.020302 3181 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 17 00:36:17.107573 kubelet[3181]: I0417 00:36:17.104703 3181 reconciler.go:29] "Reconciler: start to sync state"
Apr 17 00:36:17.107573 kubelet[3181]: I0417 00:36:17.105135 3181 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 17 00:36:17.107573 kubelet[3181]: I0417 00:36:17.105594 3181 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 17 00:36:17.109860 kubelet[3181]: I0417 00:36:17.108455 3181 server.go:310] "Adding debug handlers to kubelet server"
Apr 17 00:36:17.119418 kubelet[3181]: I0417 00:36:17.118010 3181 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 17 00:36:17.126374 kubelet[3181]: I0417 00:36:17.126160 3181 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 17 00:36:17.128008 kubelet[3181]: I0417 00:36:17.127931 3181 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 17 00:36:17.140421 kubelet[3181]: E0417 00:36:17.139965 3181 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 17 00:36:17.141180 kubelet[3181]: I0417 00:36:17.141020 3181 factory.go:223] Registration of the containerd container factory successfully
Apr 17 00:36:17.141256 kubelet[3181]: I0417 00:36:17.141234 3181 factory.go:223] Registration of the systemd container factory successfully
Apr 17 00:36:17.281025 kubelet[3181]: I0417 00:36:17.279837 3181 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 17 00:36:17.287021 kubelet[3181]: I0417 00:36:17.286818 3181 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 17 00:36:17.288147 kubelet[3181]: I0417 00:36:17.287937 3181 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 17 00:36:17.290847 kubelet[3181]: I0417 00:36:17.290787 3181 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 17 00:36:17.291115 kubelet[3181]: E0417 00:36:17.290864 3181 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 17 00:36:17.403354 kubelet[3181]: E0417 00:36:17.394949 3181 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 17 00:36:17.607428 kubelet[3181]: E0417 00:36:17.605789 3181 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 17 00:36:17.733657 kubelet[3181]: I0417 00:36:17.727779 3181 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 17 00:36:17.733657 kubelet[3181]: I0417 00:36:17.731477 3181 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 17 00:36:17.733657 kubelet[3181]: I0417 00:36:17.732119 3181 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 00:36:17.748413 kubelet[3181]: I0417 00:36:17.746910 3181 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 17 00:36:17.748413 kubelet[3181]: I0417 00:36:17.747164 3181 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 17 00:36:17.748413 kubelet[3181]: I0417 00:36:17.747319 3181 policy_none.go:49] "None policy: Start"
Apr 17 00:36:17.748413 kubelet[3181]: I0417 00:36:17.747491 3181 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 17 00:36:17.748413 kubelet[3181]: I0417 00:36:17.747553 3181 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 17 00:36:17.748413 kubelet[3181]: I0417 00:36:17.748216 3181 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 17 00:36:17.748413 kubelet[3181]: I0417 00:36:17.748247 3181 policy_none.go:47] "Start"
Apr 17 00:36:17.919893 kubelet[3181]: I0417 00:36:17.917375 3181 apiserver.go:52] "Watching apiserver"
Apr 17 00:36:18.000965 kubelet[3181]: E0417 00:36:18.000318 3181 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 17 00:36:18.006566 kubelet[3181]: I0417 00:36:18.005801 3181 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 17 00:36:18.006566 kubelet[3181]: I0417 00:36:18.006295 3181 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 17 00:36:18.018887 kubelet[3181]: I0417 00:36:18.015812 3181 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 17 00:36:18.080993 kubelet[3181]: I0417 00:36:18.078869 3181 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 17 00:36:18.181481 kubelet[3181]: I0417 00:36:18.180963 3181 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 17 00:36:18.202616 kubelet[3181]: E0417 00:36:18.186014 3181 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 17 00:36:18.210940 kubelet[3181]: I0417 00:36:18.210337 3181 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 17 00:36:18.285936 kubelet[3181]: I0417 00:36:18.282876 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 00:36:18.285936 kubelet[3181]: I0417 00:36:18.283125 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 00:36:18.285936 kubelet[3181]: I0417 00:36:18.283158 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e54c01254a8f7ce80e4d0140bee4bbdd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e54c01254a8f7ce80e4d0140bee4bbdd\") " pod="kube-system/kube-apiserver-localhost"
Apr 17 00:36:18.285936 kubelet[3181]: I0417 00:36:18.283169 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 00:36:18.285936 kubelet[3181]: I0417 00:36:18.283214 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 00:36:18.291346 kubelet[3181]: I0417 00:36:18.283228 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost"
Apr 17 00:36:18.291346 kubelet[3181]: I0417 00:36:18.283237 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e54c01254a8f7ce80e4d0140bee4bbdd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e54c01254a8f7ce80e4d0140bee4bbdd\") " pod="kube-system/kube-apiserver-localhost"
Apr 17 00:36:18.291346 kubelet[3181]: I0417 00:36:18.283249 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e54c01254a8f7ce80e4d0140bee4bbdd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e54c01254a8f7ce80e4d0140bee4bbdd\") " pod="kube-system/kube-apiserver-localhost"
Apr 17 00:36:18.291346 kubelet[3181]: I0417 00:36:18.283297 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 00:36:18.387503 kubelet[3181]: E0417 00:36:18.387276 3181 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 17 00:36:18.389830 kubelet[3181]: E0417 00:36:18.389236 3181 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:36:18.389830 kubelet[3181]: E0417 00:36:18.387300 3181 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 17 00:36:18.389830 kubelet[3181]: E0417 00:36:18.389446 3181 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:36:18.547922 kubelet[3181]: I0417 00:36:18.546334 3181 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 17 00:36:18.638425 kubelet[3181]: E0417 00:36:18.633489 3181 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:36:18.848352 kubelet[3181]: E0417 00:36:18.848247 3181 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:36:18.848908 kubelet[3181]: E0417 00:36:18.848575 3181 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:36:19.009254 kubelet[3181]: I0417 00:36:19.007234 3181 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Apr 17 00:36:19.009254 kubelet[3181]: I0417 00:36:19.008154 3181 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 17 00:36:19.876550 kubelet[3181]: E0417 00:36:19.873014 3181 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:36:19.876550 kubelet[3181]: E0417 00:36:19.876117 3181 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:36:20.641358 kubelet[3181]: E0417 00:36:20.640671 3181 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:36:20.927272 kubelet[3181]: E0417 00:36:20.925992 3181 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 00:36:23.954895 sudo[1777]: pam_unix(sudo:session): session closed for user root
Apr 17 00:36:23.967511 sshd[1776]: Connection closed by 10.0.0.1 port 52436
Apr 17 00:36:23.981463 sshd-session[1759]: pam_unix(sshd:session): session closed for user core
Apr 17 00:36:24.079613 systemd[1]: sshd@6-10.0.0.6:22-10.0.0.1:52436.service: Deactivated successfully.
Apr 17 00:36:24.336963 systemd[1]: session-7.scope: Deactivated successfully.
Apr 17 00:36:24.342668 systemd[1]: session-7.scope: Consumed 1min 45.768s CPU time, 235.2M memory peak.
Apr 17 00:36:24.670817 systemd-logind[1520]: Session 7 logged out. Waiting for processes to exit.
Apr 17 00:36:24.883870 systemd-logind[1520]: Removed session 7.
Apr 17 00:36:43.030345 kubelet[3181]: E0417 00:36:43.029914 3181 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.72s"
Apr 17 00:37:00.890589 systemd[1]: cri-containerd-371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f.scope: Deactivated successfully.
Apr 17 00:37:00.915192 systemd[1]: cri-containerd-371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f.scope: Consumed 15.027s CPU time, 22.1M memory peak.
Apr 17 00:37:04.485501 containerd[1581]: time="2026-04-17T00:37:04.153362378Z" level=info msg="received container exit event container_id:\"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\" id:\"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\" pid:3071 exit_status:1 exited_at:{seconds:1776386223 nanos:2493625}"
Apr 17 00:37:10.130820 systemd[1]: cri-containerd-6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690.scope: Deactivated successfully.
Apr 17 00:37:10.347356 systemd[1]: cri-containerd-6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690.scope: Consumed 10.305s CPU time, 19M memory peak.
Apr 17 00:37:13.389435 kubelet[3181]: E0417 00:37:13.361250 3181 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 17 00:37:16.868841 containerd[1581]: time="2026-04-17T00:37:16.485624217Z" level=info msg="received container exit event container_id:\"6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690\" id:\"6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690\" pid:2987 exit_status:1 exited_at:{seconds:1776386233 nanos:369559965}"
Apr 17 00:37:17.334564 containerd[1581]: time="2026-04-17T00:37:17.316167526Z" level=error msg="get state for 430b80e19fea9fc43f85be5069948beafc3ce264a1ea9887c627a3b1baa86f8b" error="context deadline exceeded"
Apr 17 00:37:17.453411 containerd[1581]: time="2026-04-17T00:37:17.060821666Z" level=error msg="ttrpc: received message on inactive stream" stream=57
Apr 17 00:37:17.578014 containerd[1581]: time="2026-04-17T00:37:17.567466591Z" level=warning msg="unknown status" status=0
Apr 17 00:37:18.964221 containerd[1581]: time="2026-04-17T00:37:18.907692258Z" level=error msg="ttrpc: received message on inactive stream" stream=33
Apr 17 00:37:19.492712 containerd[1581]: time="2026-04-17T00:37:19.180779332Z" level=error msg="ttrpc: received message on inactive stream" stream=31
Apr 17 00:37:19.668626 containerd[1581]: time="2026-04-17T00:37:19.665696860Z" level=error msg="failed to handle container TaskExit event container_id:\"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\" id:\"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\" pid:3071 exit_status:1 exited_at:{seconds:1776386223 nanos:2493625}" error="failed to stop container: context deadline exceeded"
Apr 17 00:37:22.360523 containerd[1581]: time="2026-04-17T00:37:22.356503729Z" level=info msg="TaskExit event container_id:\"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\" id:\"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\" pid:3071 exit_status:1 exited_at:{seconds:1776386223 nanos:2493625}"
Apr 17 00:37:27.569292 containerd[1581]: time="2026-04-17T00:37:27.568597795Z" level=error msg="failed to handle container TaskExit event container_id:\"6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690\" id:\"6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690\" pid:2987 exit_status:1 exited_at:{seconds:1776386233 nanos:369559965}" error="failed to stop container: context deadline exceeded"
Apr 17 00:37:29.217164 containerd[1581]: time="2026-04-17T00:37:29.092965091Z" level=error msg="ttrpc: received message on inactive stream" stream=39
Apr 17 00:37:29.836761 containerd[1581]: time="2026-04-17T00:37:29.699516873Z" level=error msg="ttrpc: received message on inactive stream" stream=37
Apr 17 00:37:33.377616 containerd[1581]: time="2026-04-17T00:37:33.375725374Z" level=error msg="Failed to handle backOff event container_id:\"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\" id:\"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\" pid:3071 exit_status:1 exited_at:{seconds:1776386223 nanos:2493625} for 371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 17 00:37:33.377616 containerd[1581]: time="2026-04-17T00:37:33.376029705Z" level=info msg="TaskExit event container_id:\"6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690\" id:\"6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690\" pid:2987 exit_status:1 exited_at:{seconds:1776386233 nanos:369559965}"
Apr 17 00:37:33.377616 containerd[1581]: time="2026-04-17T00:37:33.377397973Z" level=error msg="ttrpc: received message on inactive stream" stream=39
Apr 17 00:37:33.377616 containerd[1581]: time="2026-04-17T00:37:33.377418904Z" level=error msg="ttrpc: received message on inactive stream" stream=41
Apr 17 00:37:33.603183 kubelet[3181]: E0417 00:37:33.602221 3181 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 17 00:37:34.265494 kubelet[3181]: E0417 00:37:34.264986 3181 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 17 00:37:40.417665 kubelet[3181]: E0417 00:37:39.835563 3181 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="50.31s"
Apr 17 00:37:47.351806 containerd[1581]: time="2026-04-17T00:37:46.972884046Z" level=error msg="ttrpc: received message on inactive stream" stream=49
Apr 17 00:37:48.030718 containerd[1581]: time="2026-04-17T00:37:48.001785831Z" level=error msg="ttrpc: received message on inactive stream" stream=45
Apr 17 00:37:48.402770 containerd[1581]: time="2026-04-17T00:37:48.027658146Z" level=error msg="Failed to handle backOff event container_id:\"6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690\" id:\"6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690\" pid:2987 exit_status:1 exited_at:{seconds:1776386233 nanos:369559965} for 6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 17 00:37:49.277020 containerd[1581]: time="2026-04-17T00:37:48.884314124Z" level=info msg="TaskExit event container_id:\"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\" id:\"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\" pid:3071 exit_status:1 exited_at:{seconds:1776386223 nanos:2493625}"
Apr 17 00:37:53.354840 containerd[1581]: time="2026-04-17T00:37:53.350140409Z" level=error msg="get state for 371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f" error="context deadline exceeded"
Apr 17 00:37:53.354840 containerd[1581]: time="2026-04-17T00:37:53.351352968Z" level=warning msg="unknown status" status=0
Apr 17 00:37:54.350392 containerd[1581]: time="2026-04-17T00:37:53.350805569Z" level=error msg="ttrpc: received message on inactive stream" stream=45
Apr 17 00:37:56.926693 kubelet[3181]: E0417 00:37:56.923231 3181 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 17 00:37:59.735339 containerd[1581]: time="2026-04-17T00:37:59.664635093Z" level=error msg="Failed to handle backOff event container_id:\"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\" id:\"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\" pid:3071 exit_status:1 exited_at:{seconds:1776386223 nanos:2493625} for 371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 17 00:38:00.317135 containerd[1581]: time="2026-04-17T00:38:00.316297419Z" level=info msg="TaskExit event container_id:\"6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690\" id:\"6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690\" pid:2987 exit_status:1 exited_at:{seconds:1776386233 nanos:369559965}"
Apr 17 00:38:06.369936 containerd[1581]: time="2026-04-17T00:38:06.364917684Z" level=error msg="ttrpc: received message on inactive stream" stream=51
Apr 17 00:38:06.506651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f-rootfs.mount: Deactivated successfully.
Apr 17 00:38:13.981559 containerd[1581]: time="2026-04-17T00:38:13.874532421Z" level=error msg="ttrpc: received message on inactive stream" stream=57
Apr 17 00:38:14.482896 containerd[1581]: time="2026-04-17T00:38:14.479962878Z" level=error msg="Failed to handle backOff event container_id:\"6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690\" id:\"6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690\" pid:2987 exit_status:1 exited_at:{seconds:1776386233 nanos:369559965} for 6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 17 00:38:14.956878 containerd[1581]: time="2026-04-17T00:38:14.646871727Z" level=error msg="ttrpc: received message on inactive stream" stream=59
Apr 17 00:38:15.510786 containerd[1581]: time="2026-04-17T00:38:15.174521259Z" level=info msg="TaskExit event container_id:\"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\" id:\"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\" pid:3071 exit_status:1 exited_at:{seconds:1776386223 nanos:2493625}"
Apr 17 00:38:15.701266 kubelet[3181]: E0417 00:38:15.680822 3181 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 17 00:38:15.867675 kubelet[3181]: I0417 00:38:15.691920 3181 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Apr 17 00:38:25.643392 containerd[1581]: time="2026-04-17T00:38:25.626948231Z" level=error msg="ttrpc: received message on inactive stream" stream=61
Apr 17 00:38:26.340534 containerd[1581]: time="2026-04-17T00:38:26.224627641Z" level=error msg="Failed to handle backOff event container_id:\"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\" id:\"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\" pid:3071 exit_status:1 exited_at:{seconds:1776386223 nanos:2493625} for 371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 17 00:38:26.558218 containerd[1581]: time="2026-04-17T00:38:26.549867253Z" level=info msg="TaskExit event container_id:\"6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690\" id:\"6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690\" pid:2987 exit_status:1 exited_at:{seconds:1776386233 nanos:369559965}"
Apr 17 00:38:34.433500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690-rootfs.mount: Deactivated successfully.
Apr 17 00:38:39.038538 containerd[1581]: time="2026-04-17T00:38:39.037982477Z" level=info msg="TaskExit event container_id:\"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\" id:\"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\" pid:3071 exit_status:1 exited_at:{seconds:1776386223 nanos:2493625}"
Apr 17 00:38:39.497475 kubelet[3181]: E0417 00:38:39.495758 3181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="200ms"
Apr 17 00:38:39.764890 kubelet[3181]: I0417 00:38:39.756843 3181 scope.go:117] "RemoveContainer" containerID="14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542"
Apr 17 00:38:40.245834 kubelet[3181]: E0417 00:38:40.243930 3181 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="57.359s"
Apr 17 00:38:42.103982 containerd[1581]: time="2026-04-17T00:38:42.088392345Z" level=info msg="RemoveContainer for \"14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542\""
Apr 17 00:38:43.650989 containerd[1581]: time="2026-04-17T00:38:43.619653705Z" level=info msg="StopContainer for \"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\" with timeout 30 (s)"
Apr 17 00:38:43.742796 kubelet[3181]: E0417 00:38:43.741992 3181 kubelet_node_status.go:398] "Node not becoming ready in time after startup"
Apr 17 00:38:43.889459 containerd[1581]: time="2026-04-17T00:38:43.744494477Z" level=info msg="Container to stop \"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 00:38:44.793031 containerd[1581]: time="2026-04-17T00:38:44.792713917Z" level=info msg="StopContainer for \"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\" returns successfully"
Apr 17 00:38:45.136406 containerd[1581]: time="2026-04-17T00:38:45.124714693Z" level=info msg="RemoveContainer for \"14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542\" returns successfully"
Apr 17 00:38:45.176811 kubelet[3181]: E0417 00:38:45.163426 3181 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 00:38:46.318625 kubelet[3181]: I0417 00:38:46.317710 3181 scope.go:117] "RemoveContainer" containerID="6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690"
Apr 17 00:38:46.469514 kubelet[3181]: E0417 00:38:46.468439 3181 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.982s"
Apr 17 00:38:46.543770 containerd[1581]: time="2026-04-17T00:38:46.542931381Z" level=info msg="CreateContainer within sandbox \"430b80e19fea9fc43f85be5069948beafc3ce264a1ea9887c627a3b1baa86f8b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:3,}"
Apr 17 00:38:46.579458 kubelet[3181]: I0417 00:38:46.574864 3181 scope.go:117] "RemoveContainer" containerID="14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542"
Apr 17 00:38:46.603889 containerd[1581]: time="2026-04-17T00:38:46.597316238Z" level=info msg="CreateContainer within sandbox \"36bba164c7148332d3a8777c2d4ad9319ea5cc96f8e2109e2c56c3aa10d005b6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}"
Apr 17 00:38:46.615565 containerd[1581]: time="2026-04-17T00:38:46.597337702Z" level=error msg="ContainerStatus for \"14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542\": not found"
Apr 17 00:38:46.703534 kubelet[3181]: E0417 00:38:46.701469 3181 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542\": not found" containerID="14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542"
Apr 17 00:38:46.738031 kubelet[3181]: I0417 00:38:46.711874 3181 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542"} err="failed to get container status \"14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542\": rpc error: code = NotFound desc = an error occurred when try to find container \"14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542\": not found"
Apr 17 00:38:46.919501 containerd[1581]: time="2026-04-17T00:38:46.918869808Z" level=info msg="Container d12a789cdc3f09ec95a3c813361ab2e9ad890a6b8b2cde21e5d2b3da6fce00ab: CDI devices from CRI Config.CDIDevices: []"
Apr 17 00:38:46.971779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3248773097.mount: Deactivated successfully.
Apr 17 00:38:47.215023 containerd[1581]: time="2026-04-17T00:38:47.206750570Z" level=info msg="Container 8b38101fcd7c1361e15e9610443baec5f063659ff48118dffb2d0f5028acce9a: CDI devices from CRI Config.CDIDevices: []"
Apr 17 00:38:47.550292 containerd[1581]: time="2026-04-17T00:38:47.544991564Z" level=info msg="CreateContainer within sandbox \"430b80e19fea9fc43f85be5069948beafc3ce264a1ea9887c627a3b1baa86f8b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:3,} returns container id \"d12a789cdc3f09ec95a3c813361ab2e9ad890a6b8b2cde21e5d2b3da6fce00ab\""
Apr 17 00:38:47.580804 containerd[1581]: time="2026-04-17T00:38:47.579548863Z" level=info msg="CreateContainer within sandbox \"36bba164c7148332d3a8777c2d4ad9319ea5cc96f8e2109e2c56c3aa10d005b6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"8b38101fcd7c1361e15e9610443baec5f063659ff48118dffb2d0f5028acce9a\""
Apr 17 00:38:47.628330 containerd[1581]: time="2026-04-17T00:38:47.627531153Z" level=info msg="StartContainer for \"d12a789cdc3f09ec95a3c813361ab2e9ad890a6b8b2cde21e5d2b3da6fce00ab\""
Apr 17 00:38:47.641300 containerd[1581]: time="2026-04-17T00:38:47.640817900Z" level=info msg="StartContainer for \"8b38101fcd7c1361e15e9610443baec5f063659ff48118dffb2d0f5028acce9a\""
Apr 17 00:38:47.653781 containerd[1581]: time="2026-04-17T00:38:47.646014792Z" level=info msg="connecting to shim d12a789cdc3f09ec95a3c813361ab2e9ad890a6b8b2cde21e5d2b3da6fce00ab" address="unix:///run/containerd/s/86aa2eab26b6677f4e45080413338db26ee4889c5f13e2d0c1dda527286a34b1" protocol=ttrpc version=3
Apr 17 00:38:47.767626 containerd[1581]: time="2026-04-17T00:38:47.762795654Z" level=info msg="connecting to shim 8b38101fcd7c1361e15e9610443baec5f063659ff48118dffb2d0f5028acce9a" address="unix:///run/containerd/s/1fd8616eb94714c1343e38db73f43c8dc81fb060bb133b4d75aee2a1063347ae" protocol=ttrpc version=3
Apr 17 00:38:49.379491 systemd[1]: Started cri-containerd-8b38101fcd7c1361e15e9610443baec5f063659ff48118dffb2d0f5028acce9a.scope - libcontainer container 8b38101fcd7c1361e15e9610443baec5f063659ff48118dffb2d0f5028acce9a.
Apr 17 00:38:49.471257 systemd[1]: Started cri-containerd-d12a789cdc3f09ec95a3c813361ab2e9ad890a6b8b2cde21e5d2b3da6fce00ab.scope - libcontainer container d12a789cdc3f09ec95a3c813361ab2e9ad890a6b8b2cde21e5d2b3da6fce00ab.
Apr 17 00:38:52.020810 kubelet[3181]: E0417 00:38:52.019818 3181 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 00:38:52.583692 kubelet[3181]: E0417 00:38:52.570992 3181 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.269s"
Apr 17 00:38:59.032782 containerd[1581]: time="2026-04-17T00:38:58.688825199Z" level=error msg="ttrpc: received message on inactive stream" stream=53
Apr 17 00:38:59.968006 containerd[1581]: time="2026-04-17T00:38:59.018972959Z" level=error msg="get state for 36bba164c7148332d3a8777c2d4ad9319ea5cc96f8e2109e2c56c3aa10d005b6" error="context deadline exceeded"
Apr 17 00:39:00.307685 containerd[1581]: time="2026-04-17T00:39:00.017801602Z" level=warning msg="unknown status" status=0
Apr 17 00:39:42.852641 kubelet[3181]: E0417 00:39:38.121962 3181 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 17 00:39:51.453453 kubelet[3181]: E0417 00:39:39.471764 3181 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 00:40:07.278006 kubelet[3181]: I0417 00:40:07.277890 3181 scope.go:117] "RemoveContainer" containerID="371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f"
Apr 17 00:40:22.773956 containerd[1581]: time="2026-04-17T00:40:19.399619783Z" level=warning msg="container event discarded" container=14ebdc6b318b8801735ab32709e07d0d5c2b50ff1ddd16f45cb08f0a8bbcd542 type=CONTAINER_STOPPED_EVENT
Apr 17 00:40:28.148632 kubelet[3181]: E0417 00:40:24.495995 3181 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 17 00:40:30.604013 containerd[1581]: time="2026-04-17T00:40:30.380697487Z" level=info msg="StartContainer for \"8b38101fcd7c1361e15e9610443baec5f063659ff48118dffb2d0f5028acce9a\" returns successfully"
Apr 17 00:40:42.484988 containerd[1581]: time="2026-04-17T00:40:41.570825059Z" level=info msg="StartContainer for \"d12a789cdc3f09ec95a3c813361ab2e9ad890a6b8b2cde21e5d2b3da6fce00ab\" returns successfully"
Apr 17 00:40:44.795875 kubelet[3181]: E0417 00:40:41.914427 3181 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 00:40:45.039401 containerd[1581]: time="2026-04-17T00:40:44.203588187Z" level=warning msg="container event discarded" container=6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690 type=CONTAINER_CREATED_EVENT
Apr 17 00:40:48.710271 containerd[1581]: time="2026-04-17T00:40:48.550960053Z" level=warning msg="container event discarded" container=87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349 type=CONTAINER_STOPPED_EVENT
Apr 17 00:40:49.995341 containerd[1581]: time="2026-04-17T00:40:49.946503382Z" level=warning msg="container event discarded" container=0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221 type=CONTAINER_CREATED_EVENT
Apr 17 00:40:50.326806 containerd[1581]:
time="2026-04-17T00:40:50.261341457Z" level=warning msg="container event discarded" container=6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690 type=CONTAINER_STARTED_EVENT Apr 17 00:40:52.274651 kubelet[3181]: E0417 00:40:52.255288 3181 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 17 00:40:52.753345 containerd[1581]: time="2026-04-17T00:40:52.640845050Z" level=warning msg="container event discarded" container=0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221 type=CONTAINER_STARTED_EVENT Apr 17 00:40:57.737162 kubelet[3181]: E0417 00:40:57.437484 3181 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 00:40:58.874460 kubelet[3181]: E0417 00:40:58.869668 3181 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2m5.818s" Apr 17 00:41:01.146999 containerd[1581]: time="2026-04-17T00:41:01.100944504Z" level=info msg="RemoveContainer for \"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\"" Apr 17 00:41:04.231575 kubelet[3181]: E0417 00:41:01.311276 3181 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a6fde27c95c2f3 kube-system 533 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:824fd89300514e351ed3b68d82c665c6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection 
refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 00:37:32 +0000 UTC,LastTimestamp:2026-04-17 00:38:15.339976174 +0000 UTC m=+119.165588384,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 00:41:11.744340 containerd[1581]: time="2026-04-17T00:41:11.553978022Z" level=warning msg="container event discarded" container=0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221 type=CONTAINER_STOPPED_EVENT Apr 17 00:41:15.381120 containerd[1581]: time="2026-04-17T00:41:14.184984102Z" level=warning msg="container event discarded" container=87ffcc891c8ab5e97b3b81d5bbbb630f573c84a03fa3338178c3fed2017b4349 type=CONTAINER_DELETED_EVENT Apr 17 00:41:16.250850 containerd[1581]: time="2026-04-17T00:41:16.248160067Z" level=warning msg="container event discarded" container=371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f type=CONTAINER_CREATED_EVENT Apr 17 00:41:16.490116 kubelet[3181]: E0417 00:41:16.467240 3181 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 17 00:41:19.513958 kubelet[3181]: E0417 00:41:15.058198 3181 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-17T00:40:12Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-17T00:40:14Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-17T00:40:14Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-17T00:40:14Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.6:6443/api/v1/nodes/localhost/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 17 00:41:21.939976 containerd[1581]: time="2026-04-17T00:41:21.622921850Z" level=warning msg="container event discarded" container=0ee9bd4c15969416f75b88c1c0134470d61d36693664e7e0b0386f22fa296221 type=CONTAINER_DELETED_EVENT Apr 17 00:41:22.464679 containerd[1581]: time="2026-04-17T00:41:21.960633760Z" level=warning msg="container event discarded" container=371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f type=CONTAINER_STARTED_EVENT Apr 17 00:41:34.593281 kubelet[3181]: E0417 00:41:29.424021 3181 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 17 00:41:43.047948 kubelet[3181]: I0417 00:41:37.895435 3181 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Apr 17 00:41:55.797305 containerd[1581]: time="2026-04-17T00:41:55.795577847Z" level=info msg="RemoveContainer for \"371fe13fce91bb911e80c414bee87f30dbd3eaae3d1911c578c26f9e9994bd4f\" returns successfully" Apr 17 00:42:03.109240 kubelet[3181]: E0417 00:42:03.108597 3181 
kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.6:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 17 00:42:04.519555 kubelet[3181]: E0417 00:42:03.938392 3181 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 00:42:07.189903 kubelet[3181]: E0417 00:42:07.055524 3181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="200ms" Apr 17 00:42:08.783128 kubelet[3181]: E0417 00:42:08.683735 3181 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 00:42:26.390665 kubelet[3181]: E0417 00:42:26.248362 3181 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.6:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 17 00:42:37.669023 kubelet[3181]: E0417 00:42:35.844263 3181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="400ms" Apr 17 00:43:17.349381 kubelet[3181]: E0417 00:43:14.282110 3181 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.6:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 17 00:43:19.170421 kubelet[3181]: E0417 
00:43:08.319441 3181 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 00:43:40.820682 containerd[1581]: time="2026-04-17T00:43:40.089300278Z" level=warning msg="container event discarded" container=6568876d250f6a69b263bbdc68f8e0ba791a5d046a5d96530c81a598a6a02690 type=CONTAINER_STOPPED_EVENT