Apr 28 00:19:17.409133 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 27 22:40:10 -00 2026
Apr 28 00:19:17.409153 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec
Apr 28 00:19:17.409163 kernel: BIOS-provided physical RAM map:
Apr 28 00:19:17.409169 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 28 00:19:17.409174 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 28 00:19:17.409179 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 28 00:19:17.409185 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 28 00:19:17.409191 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 28 00:19:17.409196 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 28 00:19:17.409203 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 28 00:19:17.409208 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 28 00:19:17.409213 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 28 00:19:17.409230 kernel: NX (Execute Disable) protection: active
Apr 28 00:19:17.409236 kernel: APIC: Static calls initialized
Apr 28 00:19:17.409242 kernel: SMBIOS 2.8 present.
Apr 28 00:19:17.409258 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 28 00:19:17.409265 kernel: Hypervisor detected: KVM
Apr 28 00:19:17.409270 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 28 00:19:17.409276 kernel: kvm-clock: using sched offset of 9560423468 cycles
Apr 28 00:19:17.409282 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 28 00:19:17.409288 kernel: tsc: Detected 2793.438 MHz processor
Apr 28 00:19:17.409294 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 28 00:19:17.409300 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 28 00:19:17.409328 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 28 00:19:17.409337 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 28 00:19:17.409342 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 28 00:19:17.409348 kernel: Using GB pages for direct mapping
Apr 28 00:19:17.409354 kernel: ACPI: Early table checksum verification disabled
Apr 28 00:19:17.409360 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 28 00:19:17.409366 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:19:17.409371 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:19:17.409378 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:19:17.409383 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 28 00:19:17.409391 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:19:17.409397 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:19:17.409403 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:19:17.409409 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:19:17.409415 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 28 00:19:17.409420 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 28 00:19:17.409426 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 28 00:19:17.409435 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 28 00:19:17.409442 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 28 00:19:17.409449 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 28 00:19:17.409455 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 28 00:19:17.409460 kernel: No NUMA configuration found
Apr 28 00:19:17.409466 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 28 00:19:17.409472 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 28 00:19:17.409480 kernel: Zone ranges:
Apr 28 00:19:17.409486 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 28 00:19:17.409492 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 28 00:19:17.409498 kernel: Normal empty
Apr 28 00:19:17.409504 kernel: Movable zone start for each node
Apr 28 00:19:17.409510 kernel: Early memory node ranges
Apr 28 00:19:17.409516 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 28 00:19:17.409522 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 28 00:19:17.409528 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 28 00:19:17.409536 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 28 00:19:17.409542 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 28 00:19:17.409557 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 28 00:19:17.409563 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 28 00:19:17.409569 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 28 00:19:17.409575 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 28 00:19:17.409581 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 28 00:19:17.409587 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 28 00:19:17.409593 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 28 00:19:17.409601 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 28 00:19:17.409607 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 28 00:19:17.409613 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 28 00:19:17.409619 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 28 00:19:17.409625 kernel: TSC deadline timer available
Apr 28 00:19:17.409630 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 28 00:19:17.409635 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 28 00:19:17.409640 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 28 00:19:17.409646 kernel: kvm-guest: setup PV sched yield
Apr 28 00:19:17.409659 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 28 00:19:17.409666 kernel: Booting paravirtualized kernel on KVM
Apr 28 00:19:17.409671 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 28 00:19:17.409677 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 28 00:19:17.409682 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 28 00:19:17.409687 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 28 00:19:17.409691 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 28 00:19:17.409696 kernel: kvm-guest: PV spinlocks enabled
Apr 28 00:19:17.409701 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 28 00:19:17.409707 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec
Apr 28 00:19:17.409714 kernel: random: crng init done
Apr 28 00:19:17.409719 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 28 00:19:17.409724 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 28 00:19:17.409729 kernel: Fallback order for Node 0: 0
Apr 28 00:19:17.409734 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 28 00:19:17.409739 kernel: Policy zone: DMA32
Apr 28 00:19:17.409744 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 28 00:19:17.409750 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 137896K reserved, 0K cma-reserved)
Apr 28 00:19:17.409756 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 28 00:19:17.409761 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 28 00:19:17.409766 kernel: ftrace: allocated 149 pages with 4 groups
Apr 28 00:19:17.409771 kernel: Dynamic Preempt: voluntary
Apr 28 00:19:17.409776 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 28 00:19:17.409782 kernel: rcu: RCU event tracing is enabled.
Apr 28 00:19:17.409787 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 28 00:19:17.409792 kernel: Trampoline variant of Tasks RCU enabled.
Apr 28 00:19:17.409797 kernel: Rude variant of Tasks RCU enabled.
Apr 28 00:19:17.409804 kernel: Tracing variant of Tasks RCU enabled.
Apr 28 00:19:17.409809 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 28 00:19:17.409814 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 28 00:19:17.410109 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 28 00:19:17.410187 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 28 00:19:17.410193 kernel: Console: colour VGA+ 80x25
Apr 28 00:19:17.410198 kernel: printk: console [ttyS0] enabled
Apr 28 00:19:17.410203 kernel: ACPI: Core revision 20230628
Apr 28 00:19:17.410209 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 28 00:19:17.410218 kernel: APIC: Switch to symmetric I/O mode setup
Apr 28 00:19:17.410224 kernel: x2apic enabled
Apr 28 00:19:17.410229 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 28 00:19:17.410234 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 28 00:19:17.410239 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 28 00:19:17.410244 kernel: kvm-guest: setup PV IPIs
Apr 28 00:19:17.410249 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 28 00:19:17.410255 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 28 00:19:17.410268 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 28 00:19:17.410274 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 28 00:19:17.410279 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 28 00:19:17.410285 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 28 00:19:17.410293 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 28 00:19:17.410298 kernel: Spectre V2 : Mitigation: Retpolines
Apr 28 00:19:17.410325 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 28 00:19:17.410332 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 28 00:19:17.410340 kernel: RETBleed: Vulnerable
Apr 28 00:19:17.410346 kernel: Speculative Store Bypass: Vulnerable
Apr 28 00:19:17.410352 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 28 00:19:17.410366 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 28 00:19:17.410372 kernel: active return thunk: its_return_thunk
Apr 28 00:19:17.410377 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 28 00:19:17.410383 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 28 00:19:17.410389 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 28 00:19:17.410394 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 28 00:19:17.410402 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 28 00:19:17.410407 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 28 00:19:17.410413 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 28 00:19:17.410419 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 28 00:19:17.410424 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 28 00:19:17.410430 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 28 00:19:17.410436 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 28 00:19:17.410441 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 28 00:19:17.410447 kernel: Freeing SMP alternatives memory: 32K
Apr 28 00:19:17.410455 kernel: pid_max: default: 32768 minimum: 301
Apr 28 00:19:17.410461 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 28 00:19:17.410466 kernel: landlock: Up and running.
Apr 28 00:19:17.410472 kernel: SELinux: Initializing.
Apr 28 00:19:17.410477 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 28 00:19:17.410483 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 28 00:19:17.410489 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 28 00:19:17.410502 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 28 00:19:17.410508 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 28 00:19:17.410516 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 28 00:19:17.410522 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 28 00:19:17.410528 kernel: signal: max sigframe size: 3632
Apr 28 00:19:17.410533 kernel: rcu: Hierarchical SRCU implementation.
Apr 28 00:19:17.410539 kernel: rcu: Max phase no-delay instances is 400.
Apr 28 00:19:17.410545 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 28 00:19:17.410550 kernel: smp: Bringing up secondary CPUs ...
Apr 28 00:19:17.410556 kernel: smpboot: x86: Booting SMP configuration:
Apr 28 00:19:17.410561 kernel: .... node #0, CPUs: #1 #2 #3
Apr 28 00:19:17.410569 kernel: smp: Brought up 1 node, 4 CPUs
Apr 28 00:19:17.410574 kernel: smpboot: Max logical packages: 1
Apr 28 00:19:17.410580 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 28 00:19:17.410585 kernel: devtmpfs: initialized
Apr 28 00:19:17.410591 kernel: x86/mm: Memory block size: 128MB
Apr 28 00:19:17.410596 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 28 00:19:17.410602 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 28 00:19:17.410608 kernel: pinctrl core: initialized pinctrl subsystem
Apr 28 00:19:17.410613 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 28 00:19:17.410620 kernel: audit: initializing netlink subsys (disabled)
Apr 28 00:19:17.410626 kernel: audit: type=2000 audit(1777335555.146:1): state=initialized audit_enabled=0 res=1
Apr 28 00:19:17.410632 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 28 00:19:17.410637 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 28 00:19:17.410643 kernel: cpuidle: using governor menu
Apr 28 00:19:17.410648 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 28 00:19:17.410654 kernel: dca service started, version 1.12.1
Apr 28 00:19:17.410659 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 28 00:19:17.410665 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 28 00:19:17.410672 kernel: PCI: Using configuration type 1 for base access
Apr 28 00:19:17.410678 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 28 00:19:17.410683 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 28 00:19:17.410689 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 28 00:19:17.410695 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 28 00:19:17.410700 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 28 00:19:17.410706 kernel: ACPI: Added _OSI(Module Device)
Apr 28 00:19:17.410711 kernel: ACPI: Added _OSI(Processor Device)
Apr 28 00:19:17.410717 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 28 00:19:17.410724 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 28 00:19:17.410730 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 28 00:19:17.410735 kernel: ACPI: Interpreter enabled
Apr 28 00:19:17.410741 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 28 00:19:17.410746 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 28 00:19:17.410752 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 28 00:19:17.410758 kernel: PCI: Using E820 reservations for host bridge windows
Apr 28 00:19:17.410763 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 28 00:19:17.410769 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 28 00:19:17.411362 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 28 00:19:17.411445 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 28 00:19:17.411507 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 28 00:19:17.411515 kernel: PCI host bridge to bus 0000:00
Apr 28 00:19:17.411639 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 28 00:19:17.411696 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 28 00:19:17.411756 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 28 00:19:17.411810 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 28 00:19:17.411908 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 28 00:19:17.411963 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 28 00:19:17.412018 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 28 00:19:17.412137 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 28 00:19:17.412279 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 28 00:19:17.412395 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 28 00:19:17.412459 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 28 00:19:17.412520 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 28 00:19:17.412581 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 28 00:19:17.413841 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 28 00:19:17.413935 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 28 00:19:17.414020 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 28 00:19:17.414117 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 28 00:19:17.414707 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 28 00:19:17.414778 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 28 00:19:17.414876 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 28 00:19:17.414967 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 28 00:19:17.416300 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 28 00:19:17.416421 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 28 00:19:17.416483 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 28 00:19:17.416544 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 28 00:19:17.416605 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 28 00:19:17.416720 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 28 00:19:17.416783 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 28 00:19:17.417431 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 28 00:19:17.417529 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 28 00:19:17.417590 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 28 00:19:17.417710 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 28 00:19:17.417775 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 28 00:19:17.417783 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 28 00:19:17.417789 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 28 00:19:17.417796 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 28 00:19:17.417802 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 28 00:19:17.417815 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 28 00:19:17.417842 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 28 00:19:17.417851 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 28 00:19:17.417858 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 28 00:19:17.417866 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 28 00:19:17.417874 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 28 00:19:17.417883 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 28 00:19:17.417892 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 28 00:19:17.417901 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 28 00:19:17.417912 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 28 00:19:17.417921 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 28 00:19:17.417930 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 28 00:19:17.417938 kernel: iommu: Default domain type: Translated
Apr 28 00:19:17.417944 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 28 00:19:17.417950 kernel: PCI: Using ACPI for IRQ routing
Apr 28 00:19:17.417956 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 28 00:19:17.417962 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 28 00:19:17.417968 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 28 00:19:17.418043 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 28 00:19:17.418104 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 28 00:19:17.418163 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 28 00:19:17.418171 kernel: vgaarb: loaded
Apr 28 00:19:17.418177 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 28 00:19:17.418183 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 28 00:19:17.418189 kernel: clocksource: Switched to clocksource kvm-clock
Apr 28 00:19:17.418194 kernel: VFS: Disk quotas dquot_6.6.0
Apr 28 00:19:17.418202 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 28 00:19:17.418208 kernel: pnp: PnP ACPI init
Apr 28 00:19:17.420852 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 28 00:19:17.420875 kernel: pnp: PnP ACPI: found 6 devices
Apr 28 00:19:17.420886 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 28 00:19:17.420895 kernel: NET: Registered PF_INET protocol family
Apr 28 00:19:17.420904 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 28 00:19:17.420914 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 28 00:19:17.420930 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 28 00:19:17.420938 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 28 00:19:17.420948 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 28 00:19:17.420959 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 28 00:19:17.420968 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 28 00:19:17.420978 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 28 00:19:17.420987 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 28 00:19:17.420997 kernel: NET: Registered PF_XDP protocol family
Apr 28 00:19:17.421112 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 28 00:19:17.421176 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 28 00:19:17.421230 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 28 00:19:17.421284 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 28 00:19:17.421371 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 28 00:19:17.421427 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 28 00:19:17.421434 kernel: PCI: CLS 0 bytes, default 64
Apr 28 00:19:17.421441 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 28 00:19:17.421447 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 28 00:19:17.421456 kernel: Initialise system trusted keyrings
Apr 28 00:19:17.421462 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 28 00:19:17.421468 kernel: Key type asymmetric registered
Apr 28 00:19:17.421474 kernel: Asymmetric key parser 'x509' registered
Apr 28 00:19:17.421479 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 28 00:19:17.421485 kernel: io scheduler mq-deadline registered
Apr 28 00:19:17.421491 kernel: io scheduler kyber registered
Apr 28 00:19:17.421497 kernel: io scheduler bfq registered
Apr 28 00:19:17.421502 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 28 00:19:17.421511 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 28 00:19:17.421517 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 28 00:19:17.421523 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 28 00:19:17.421528 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 28 00:19:17.421534 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 28 00:19:17.421540 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 28 00:19:17.421546 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 28 00:19:17.421551 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 28 00:19:17.421703 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 28 00:19:17.421714 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 28 00:19:17.421772 kernel: rtc_cmos 00:04: registered as rtc0
Apr 28 00:19:17.421863 kernel: rtc_cmos 00:04: setting system clock to 2026-04-28T00:19:16 UTC (1777335556)
Apr 28 00:19:17.421959 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 28 00:19:17.421968 kernel: intel_pstate: CPU model not supported
Apr 28 00:19:17.421974 kernel: NET: Registered PF_INET6 protocol family
Apr 28 00:19:17.421979 kernel: Segment Routing with IPv6
Apr 28 00:19:17.421985 kernel: In-situ OAM (IOAM) with IPv6
Apr 28 00:19:17.421998 kernel: NET: Registered PF_PACKET protocol family
Apr 28 00:19:17.422007 kernel: Key type dns_resolver registered
Apr 28 00:19:17.422015 kernel: IPI shorthand broadcast: enabled
Apr 28 00:19:17.422023 kernel: sched_clock: Marking stable (1751016327, 335808095)->(2286489855, -199665433)
Apr 28 00:19:17.422031 kernel: registered taskstats version 1
Apr 28 00:19:17.422040 kernel: Loading compiled-in X.509 certificates
Apr 28 00:19:17.422049 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 40b5c5a01382737457e1eae3e889ae587960eb18'
Apr 28 00:19:17.422058 kernel: Key type .fscrypt registered
Apr 28 00:19:17.422066 kernel: Key type fscrypt-provisioning registered
Apr 28 00:19:17.422077 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 28 00:19:17.422086 kernel: ima: Allocated hash algorithm: sha1
Apr 28 00:19:17.422096 kernel: ima: No architecture policies found
Apr 28 00:19:17.422106 kernel: clk: Disabling unused clocks
Apr 28 00:19:17.422116 kernel: Freeing unused kernel image (initmem) memory: 42884K
Apr 28 00:19:17.422127 kernel: Write protecting the kernel read-only data: 36864k
Apr 28 00:19:17.422137 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 28 00:19:17.422147 kernel: Run /init as init process
Apr 28 00:19:17.422156 kernel: with arguments:
Apr 28 00:19:17.422166 kernel: /init
Apr 28 00:19:17.422177 kernel: with environment:
Apr 28 00:19:17.422187 kernel: HOME=/
Apr 28 00:19:17.422196 kernel: TERM=linux
Apr 28 00:19:17.422207 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 28 00:19:17.422218 systemd[1]: Detected virtualization kvm.
Apr 28 00:19:17.422229 systemd[1]: Detected architecture x86-64.
Apr 28 00:19:17.422238 systemd[1]: Running in initrd.
Apr 28 00:19:17.422250 systemd[1]: No hostname configured, using default hostname.
Apr 28 00:19:17.422260 systemd[1]: Hostname set to .
Apr 28 00:19:17.422271 systemd[1]: Initializing machine ID from VM UUID.
Apr 28 00:19:17.422281 systemd[1]: Queued start job for default target initrd.target.
Apr 28 00:19:17.422289 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 00:19:17.422295 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 00:19:17.422302 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 28 00:19:17.422688 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 28 00:19:17.422717 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 28 00:19:17.422723 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 28 00:19:17.422742 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 28 00:19:17.422749 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 28 00:19:17.422755 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 00:19:17.422763 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 28 00:19:17.422770 systemd[1]: Reached target paths.target - Path Units.
Apr 28 00:19:17.422776 systemd[1]: Reached target slices.target - Slice Units.
Apr 28 00:19:17.422782 systemd[1]: Reached target swap.target - Swaps.
Apr 28 00:19:17.422788 systemd[1]: Reached target timers.target - Timer Units.
Apr 28 00:19:17.422795 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 28 00:19:17.422801 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 28 00:19:17.422811 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 28 00:19:17.422843 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 28 00:19:17.422852 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 00:19:17.422861 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 28 00:19:17.422869 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 28 00:19:17.422878 systemd[1]: Reached target sockets.target - Socket Units.
Apr 28 00:19:17.422902 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 28 00:19:17.422913 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 28 00:19:17.422923 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 28 00:19:17.422946 systemd[1]: Starting systemd-fsck-usr.service...
Apr 28 00:19:17.422955 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 28 00:19:17.422971 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 28 00:19:17.422978 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 00:19:17.422984 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 28 00:19:17.423000 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 00:19:17.423031 systemd-journald[194]: Collecting audit messages is disabled.
Apr 28 00:19:17.423076 systemd[1]: Finished systemd-fsck-usr.service.
Apr 28 00:19:17.423109 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 28 00:19:17.423125 systemd-journald[194]: Journal started
Apr 28 00:19:17.423156 systemd-journald[194]: Runtime Journal (/run/log/journal/c6ba4616a75645c7bc8a6dcdf2df3289) is 6.0M, max 48.4M, 42.3M free.
Apr 28 00:19:17.426498 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 28 00:19:17.428011 systemd-modules-load[195]: Inserted module 'overlay'
Apr 28 00:19:17.428627 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 28 00:19:17.476578 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 28 00:19:17.609044 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 28 00:19:17.617712 kernel: Bridge firewalling registered
Apr 28 00:19:17.494647 systemd-modules-load[195]: Inserted module 'br_netfilter'
Apr 28 00:19:17.628940 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 28 00:19:17.661879 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 28 00:19:17.667963 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 00:19:17.705034 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 28 00:19:17.718596 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 28 00:19:17.721969 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 00:19:17.733206 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 00:19:17.761788 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 28 00:19:17.763327 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 28 00:19:17.769772 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 00:19:17.772491 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 28 00:19:17.806219 dracut-cmdline[231]: dracut-dracut-053
Apr 28 00:19:17.807232 systemd-resolved[225]: Positive Trust Anchors:
Apr 28 00:19:17.807239 systemd-resolved[225]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 28 00:19:17.807266 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 28 00:19:17.824226 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec Apr 28 00:19:17.809484 systemd-resolved[225]: Defaulting to hostname 'linux'. Apr 28 00:19:17.810679 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 28 00:19:17.814594 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 28 00:19:18.045698 kernel: SCSI subsystem initialized Apr 28 00:19:18.055366 kernel: Loading iSCSI transport class v2.0-870. Apr 28 00:19:18.077507 kernel: iscsi: registered transport (tcp) Apr 28 00:19:18.112901 kernel: iscsi: registered transport (qla4xxx) Apr 28 00:19:18.113116 kernel: QLogic iSCSI HBA Driver Apr 28 00:19:18.206660 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 28 00:19:18.228534 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 28 00:19:18.381777 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Apr 28 00:19:18.382051 kernel: device-mapper: uevent: version 1.0.3 Apr 28 00:19:18.382061 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 28 00:19:18.587725 kernel: raid6: avx512x4 gen() 32362 MB/s Apr 28 00:19:18.606898 kernel: raid6: avx512x2 gen() 28855 MB/s Apr 28 00:19:18.623202 kernel: raid6: avx512x1 gen() 31596 MB/s Apr 28 00:19:18.640693 kernel: raid6: avx2x4 gen() 30394 MB/s Apr 28 00:19:18.658996 kernel: raid6: avx2x2 gen() 29488 MB/s Apr 28 00:19:18.677278 kernel: raid6: avx2x1 gen() 20343 MB/s Apr 28 00:19:18.677620 kernel: raid6: using algorithm avx512x4 gen() 32362 MB/s Apr 28 00:19:18.699244 kernel: raid6: .... xor() 8378 MB/s, rmw enabled Apr 28 00:19:18.699688 kernel: raid6: using avx512x2 recovery algorithm Apr 28 00:19:18.814579 kernel: hrtimer: interrupt took 27684381 ns Apr 28 00:19:18.831693 kernel: xor: automatically using best checksumming function avx Apr 28 00:19:19.172600 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 28 00:19:19.193800 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 28 00:19:19.218802 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 28 00:19:19.342049 systemd-udevd[414]: Using default interface naming scheme 'v255'. Apr 28 00:19:19.353831 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 28 00:19:19.377891 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 28 00:19:19.461054 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Apr 28 00:19:19.596364 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 28 00:19:19.628581 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 28 00:19:19.844487 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Apr 28 00:19:19.863541 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 28 00:19:19.879536 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 28 00:19:19.886424 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 28 00:19:19.890189 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 28 00:19:19.903196 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 28 00:19:19.919976 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 28 00:19:19.923449 kernel: cryptd: max_cpu_qlen set to 1000 Apr 28 00:19:19.997474 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 28 00:19:20.004934 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 28 00:19:20.035046 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 28 00:19:20.035152 kernel: GPT:9289727 != 19775487 Apr 28 00:19:20.035168 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 28 00:19:20.035182 kernel: GPT:9289727 != 19775487 Apr 28 00:19:20.035196 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 28 00:19:20.035209 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 00:19:20.031450 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 28 00:19:20.033035 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 00:19:20.043477 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 28 00:19:20.046865 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 28 00:19:20.047001 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:19:20.051881 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 00:19:20.066881 kernel: AVX2 version of gcm_enc/dec engaged. 
Apr 28 00:19:20.066980 kernel: libata version 3.00 loaded. Apr 28 00:19:20.066995 kernel: AES CTR mode by8 optimization enabled Apr 28 00:19:20.068612 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 00:19:20.072779 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 28 00:19:20.083950 kernel: ahci 0000:00:1f.2: version 3.0 Apr 28 00:19:20.084155 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 28 00:19:20.088242 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 28 00:19:20.088523 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 28 00:19:20.093530 kernel: scsi host0: ahci Apr 28 00:19:20.093719 kernel: scsi host1: ahci Apr 28 00:19:20.095351 kernel: scsi host2: ahci Apr 28 00:19:20.097360 kernel: scsi host3: ahci Apr 28 00:19:20.101450 kernel: scsi host4: ahci Apr 28 00:19:20.259435 kernel: scsi host5: ahci Apr 28 00:19:20.267923 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Apr 28 00:19:20.267989 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Apr 28 00:19:20.268002 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Apr 28 00:19:20.269595 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Apr 28 00:19:20.274918 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Apr 28 00:19:20.275061 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Apr 28 00:19:20.275075 kernel: BTRFS: device fsid c393bc7b-9362-4bef-afe6-6491ed4d6c93 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (457) Apr 28 00:19:20.285361 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (477) Apr 28 00:19:20.285452 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Apr 28 00:19:20.494040 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:19:20.506831 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 28 00:19:20.507146 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 28 00:19:20.529034 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 28 00:19:20.551607 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 28 00:19:20.568466 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 28 00:19:20.580131 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 28 00:19:20.598674 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 28 00:19:20.601039 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 28 00:19:20.601063 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 28 00:19:20.605117 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 28 00:19:20.605786 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 28 00:19:20.608584 kernel: ata3.00: applying bridge limits Apr 28 00:19:20.612775 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 28 00:19:20.612928 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 28 00:19:20.615506 kernel: ata3.00: configured for UDMA/100 Apr 28 00:19:20.619335 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 28 00:19:20.685086 disk-uuid[554]: Primary Header is updated. Apr 28 00:19:20.685086 disk-uuid[554]: Secondary Entries is updated. Apr 28 00:19:20.685086 disk-uuid[554]: Secondary Header is updated. 
Apr 28 00:19:20.696352 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 00:19:20.709416 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 00:19:20.709528 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 00:19:20.770688 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 28 00:19:20.771074 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 28 00:19:20.783515 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 28 00:19:21.715837 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 00:19:21.717388 disk-uuid[561]: The operation has completed successfully. Apr 28 00:19:21.760042 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 28 00:19:21.760143 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 28 00:19:21.813104 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 28 00:19:21.822431 sh[590]: Success Apr 28 00:19:21.862481 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 28 00:19:22.016753 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 28 00:19:22.043964 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 28 00:19:22.054561 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 28 00:19:22.085354 kernel: BTRFS info (device dm-0): first mount of filesystem c393bc7b-9362-4bef-afe6-6491ed4d6c93 Apr 28 00:19:22.085413 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 28 00:19:22.087707 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 28 00:19:22.087730 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 28 00:19:22.088988 kernel: BTRFS info (device dm-0): using free space tree Apr 28 00:19:22.126715 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Apr 28 00:19:22.188693 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 28 00:19:22.206785 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 28 00:19:22.217538 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 28 00:19:22.240742 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 00:19:22.240994 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 28 00:19:22.241008 kernel: BTRFS info (device vda6): using free space tree Apr 28 00:19:22.246919 kernel: BTRFS info (device vda6): auto enabling async discard Apr 28 00:19:22.266751 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 28 00:19:22.272205 kernel: BTRFS info (device vda6): last unmount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 00:19:22.286119 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 28 00:19:22.295779 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 28 00:19:22.551028 ignition[671]: Ignition 2.19.0 Apr 28 00:19:22.551050 ignition[671]: Stage: fetch-offline Apr 28 00:19:22.551087 ignition[671]: no configs at "/usr/lib/ignition/base.d" Apr 28 00:19:22.551094 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:19:22.551249 ignition[671]: parsed url from cmdline: "" Apr 28 00:19:22.551252 ignition[671]: no config URL provided Apr 28 00:19:22.551256 ignition[671]: reading system config file "/usr/lib/ignition/user.ign" Apr 28 00:19:22.551264 ignition[671]: no config at "/usr/lib/ignition/user.ign" Apr 28 00:19:22.551301 ignition[671]: op(1): [started] loading QEMU firmware config module Apr 28 00:19:22.551340 ignition[671]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 28 00:19:22.568531 ignition[671]: op(1): [finished] loading QEMU firmware config module Apr 28 00:19:22.595976 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 28 00:19:22.608432 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 28 00:19:22.709515 systemd-networkd[779]: lo: Link UP Apr 28 00:19:22.709533 systemd-networkd[779]: lo: Gained carrier Apr 28 00:19:22.711217 systemd-networkd[779]: Enumeration completed Apr 28 00:19:22.717416 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 28 00:19:22.717968 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:19:22.717972 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 28 00:19:22.723791 systemd-networkd[779]: eth0: Link UP Apr 28 00:19:22.793923 ignition[671]: parsing config with SHA512: b788828ceca3b3ba59632c2d7e05d860947cf4af0b1657d4843c30f31013882147740438ae4b004c3a96c1442ad3c5a616665ab5eac8a7d82c3caef5af3c0778 Apr 28 00:19:22.723796 systemd-networkd[779]: eth0: Gained carrier Apr 28 00:19:22.723808 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:19:22.729798 systemd[1]: Reached target network.target - Network. Apr 28 00:19:22.803620 ignition[671]: fetch-offline: fetch-offline passed Apr 28 00:19:22.794011 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 28 00:19:22.803729 ignition[671]: Ignition finished successfully Apr 28 00:19:22.802774 unknown[671]: fetched base config from "system" Apr 28 00:19:22.802785 unknown[671]: fetched user config from "qemu" Apr 28 00:19:22.805518 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 28 00:19:22.810627 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 28 00:19:22.826538 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 28 00:19:22.924957 ignition[784]: Ignition 2.19.0 Apr 28 00:19:22.924976 ignition[784]: Stage: kargs Apr 28 00:19:22.925181 ignition[784]: no configs at "/usr/lib/ignition/base.d" Apr 28 00:19:22.925191 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:19:22.931429 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 28 00:19:22.926473 ignition[784]: kargs: kargs passed Apr 28 00:19:22.926530 ignition[784]: Ignition finished successfully Apr 28 00:19:22.942675 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 28 00:19:23.022842 ignition[792]: Ignition 2.19.0 Apr 28 00:19:23.022879 ignition[792]: Stage: disks Apr 28 00:19:23.023089 ignition[792]: no configs at "/usr/lib/ignition/base.d" Apr 28 00:19:23.023101 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:19:23.024073 ignition[792]: disks: disks passed Apr 28 00:19:23.024117 ignition[792]: Ignition finished successfully Apr 28 00:19:23.080192 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 28 00:19:23.083688 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 28 00:19:23.087678 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 28 00:19:23.092164 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 28 00:19:23.099853 systemd[1]: Reached target sysinit.target - System Initialization. Apr 28 00:19:23.103648 systemd[1]: Reached target basic.target - Basic System. Apr 28 00:19:23.127380 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 28 00:19:23.154420 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 28 00:19:23.167693 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 28 00:19:23.196233 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 28 00:19:23.401047 kernel: EXT4-fs (vda9): mounted filesystem f590d1f8-5181-4682-9e04-fe65400dca5c r/w with ordered data mode. Quota mode: none. Apr 28 00:19:23.406025 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 28 00:19:23.408088 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 28 00:19:23.438740 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 28 00:19:23.441105 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 28 00:19:23.444956 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Apr 28 00:19:23.445043 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 28 00:19:23.445123 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 28 00:19:23.462246 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 28 00:19:23.467619 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810) Apr 28 00:19:23.472049 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 00:19:23.472669 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 28 00:19:23.472699 kernel: BTRFS info (device vda6): using free space tree Apr 28 00:19:23.479075 kernel: BTRFS info (device vda6): auto enabling async discard Apr 28 00:19:23.482226 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 28 00:19:23.493181 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 28 00:19:23.581904 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Apr 28 00:19:23.588163 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Apr 28 00:19:23.606608 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory Apr 28 00:19:23.679673 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory Apr 28 00:19:24.080526 systemd-networkd[779]: eth0: Gained IPv6LL Apr 28 00:19:24.267038 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 28 00:19:24.297105 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 28 00:19:24.327756 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 28 00:19:24.378986 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Apr 28 00:19:24.381665 kernel: BTRFS info (device vda6): last unmount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 00:19:24.429270 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 28 00:19:24.475659 ignition[924]: INFO : Ignition 2.19.0 Apr 28 00:19:24.475659 ignition[924]: INFO : Stage: mount Apr 28 00:19:24.480703 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 28 00:19:24.480703 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:19:24.480703 ignition[924]: INFO : mount: mount passed Apr 28 00:19:24.480703 ignition[924]: INFO : Ignition finished successfully Apr 28 00:19:24.481672 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 28 00:19:24.500842 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 28 00:19:24.595709 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 28 00:19:24.623699 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937) Apr 28 00:19:24.623893 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 00:19:24.627737 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 28 00:19:24.629193 kernel: BTRFS info (device vda6): using free space tree Apr 28 00:19:24.634669 kernel: BTRFS info (device vda6): auto enabling async discard Apr 28 00:19:24.637781 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 28 00:19:24.704618 ignition[954]: INFO : Ignition 2.19.0 Apr 28 00:19:24.704618 ignition[954]: INFO : Stage: files Apr 28 00:19:24.710425 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 28 00:19:24.710425 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:19:24.718054 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Apr 28 00:19:24.725183 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 28 00:19:24.725183 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 28 00:19:24.811694 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 28 00:19:24.815564 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 28 00:19:24.815564 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 28 00:19:24.814761 unknown[954]: wrote ssh authorized keys file for user: core Apr 28 00:19:24.822435 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 28 00:19:24.822435 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 28 00:19:24.923220 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 28 00:19:25.171237 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 28 00:19:25.171237 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 28 00:19:25.171237 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Apr 28 00:19:25.727789 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 28 00:19:26.095460 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 28 00:19:26.095460 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 28 00:19:26.105399 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 28 00:19:26.105399 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 28 00:19:26.111611 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 28 00:19:26.116808 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 28 00:19:26.122426 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 28 00:19:26.122426 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 28 00:19:26.134121 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 28 00:19:26.134121 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 28 00:19:26.134121 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 28 00:19:26.134121 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 28 00:19:26.134121 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 28 00:19:26.134121 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 28 00:19:26.134121 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 28 00:19:26.393981 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 28 00:19:28.987087 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 28 00:19:28.987087 ignition[954]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Apr 28 00:19:29.006739 ignition[954]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 28 00:19:29.013612 ignition[954]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 28 00:19:29.013612 ignition[954]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Apr 28 00:19:29.013612 ignition[954]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Apr 28 00:19:29.013612 ignition[954]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 28 00:19:29.013612 ignition[954]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Apr 28 00:19:29.013612 ignition[954]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Apr 28 00:19:29.013612 ignition[954]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Apr 28 00:19:29.329224 ignition[954]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 28 00:19:29.345708 ignition[954]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 28 00:19:29.352710 ignition[954]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Apr 28 00:19:29.352710 ignition[954]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Apr 28 00:19:29.361469 ignition[954]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Apr 28 00:19:29.365053 ignition[954]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 28 00:19:29.368150 ignition[954]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 28 00:19:29.368150 ignition[954]: INFO : files: files passed Apr 28 00:19:29.376278 ignition[954]: INFO : Ignition finished successfully Apr 28 00:19:29.381625 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 28 00:19:29.433569 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 28 00:19:29.487518 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 28 00:19:29.512500 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 28 00:19:29.512691 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Apr 28 00:19:29.566037 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 28 00:19:29.583429 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 00:19:29.583429 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 00:19:29.599174 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 00:19:29.605524 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 28 00:19:29.605830 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 28 00:19:29.637958 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 28 00:19:29.793399 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 28 00:19:29.793648 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 28 00:19:29.803021 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 28 00:19:29.805480 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 28 00:19:29.811173 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 28 00:19:29.832187 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 28 00:19:30.045761 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 28 00:19:30.073893 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 28 00:19:30.128282 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 28 00:19:30.132465 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 28 00:19:30.145125 systemd[1]: Stopped target timers.target - Timer Units.
Apr 28 00:19:30.150727 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 28 00:19:30.151075 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 28 00:19:30.159834 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 28 00:19:30.162987 systemd[1]: Stopped target basic.target - Basic System.
Apr 28 00:19:30.168152 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 28 00:19:30.173645 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 28 00:19:30.177494 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 28 00:19:30.183559 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 28 00:19:30.185982 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 28 00:19:30.191902 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 28 00:19:30.195084 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 28 00:19:30.201119 systemd[1]: Stopped target swap.target - Swaps.
Apr 28 00:19:30.201286 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 28 00:19:30.201538 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 28 00:19:30.206113 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 28 00:19:30.209653 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 00:19:30.215077 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 28 00:19:30.216840 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 00:19:30.223180 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 28 00:19:30.223652 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 28 00:19:30.227811 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 28 00:19:30.227953 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 28 00:19:30.231929 systemd[1]: Stopped target paths.target - Path Units.
Apr 28 00:19:30.239669 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 28 00:19:30.244629 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 00:19:30.245789 systemd[1]: Stopped target slices.target - Slice Units.
Apr 28 00:19:30.250265 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 28 00:19:30.254800 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 28 00:19:30.255020 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 28 00:19:30.256827 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 28 00:19:30.256900 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 28 00:19:30.259188 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 28 00:19:30.259433 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 28 00:19:30.265188 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 28 00:19:30.265476 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 28 00:19:30.302224 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 28 00:19:30.303688 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 28 00:19:30.303853 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 00:19:30.307826 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 28 00:19:30.312338 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 28 00:19:30.312474 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 28 00:19:30.314789 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 28 00:19:30.314934 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 28 00:19:30.318256 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 28 00:19:30.318400 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 28 00:19:30.389285 ignition[1008]: INFO : Ignition 2.19.0
Apr 28 00:19:30.389285 ignition[1008]: INFO : Stage: umount
Apr 28 00:19:30.396663 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 28 00:19:30.396663 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 00:19:30.396663 ignition[1008]: INFO : umount: umount passed
Apr 28 00:19:30.396663 ignition[1008]: INFO : Ignition finished successfully
Apr 28 00:19:30.394989 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 28 00:19:30.396701 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 28 00:19:30.396798 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 28 00:19:30.402767 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 28 00:19:30.402855 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 28 00:19:30.407218 systemd[1]: Stopped target network.target - Network.
Apr 28 00:19:30.411185 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 28 00:19:30.411288 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 28 00:19:30.414986 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 28 00:19:30.415064 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 28 00:19:30.418248 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 28 00:19:30.418341 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 28 00:19:30.420837 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 28 00:19:30.420877 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 28 00:19:30.425301 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 28 00:19:30.425452 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 28 00:19:30.437578 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 28 00:19:30.439387 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 28 00:19:30.448161 systemd-networkd[779]: eth0: DHCPv6 lease lost
Apr 28 00:19:30.451711 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 28 00:19:30.451876 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 28 00:19:30.454718 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 28 00:19:30.454748 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 00:19:30.462940 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 28 00:19:30.466823 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 28 00:19:30.467012 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 28 00:19:30.472413 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 28 00:19:30.479591 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 28 00:19:30.479778 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 28 00:19:30.519432 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 28 00:19:30.519648 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 28 00:19:30.532891 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 28 00:19:30.533153 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 28 00:19:30.543119 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 28 00:19:30.543280 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 28 00:19:30.543441 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 28 00:19:30.543468 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 28 00:19:30.554749 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 28 00:19:30.554873 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 28 00:19:30.561520 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 28 00:19:30.561602 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 28 00:19:30.567160 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 28 00:19:30.567273 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 00:19:30.605702 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 28 00:19:30.617630 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 28 00:19:30.618053 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 28 00:19:30.623585 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 28 00:19:30.623797 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 28 00:19:30.627208 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 28 00:19:30.628196 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 00:19:30.685660 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 28 00:19:30.685819 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 00:19:30.693934 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 28 00:19:30.696839 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 00:19:30.725578 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 28 00:19:30.725726 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 28 00:19:30.734856 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 28 00:19:30.750036 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 28 00:19:30.782110 systemd[1]: Switching root.
Apr 28 00:19:30.916580 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 28 00:19:30.916693 systemd-journald[194]: Journal stopped
Apr 28 00:19:33.682221 kernel: SELinux: policy capability network_peer_controls=1
Apr 28 00:19:33.682301 kernel: SELinux: policy capability open_perms=1
Apr 28 00:19:33.682343 kernel: SELinux: policy capability extended_socket_class=1
Apr 28 00:19:33.682353 kernel: SELinux: policy capability always_check_network=0
Apr 28 00:19:33.682363 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 28 00:19:33.682372 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 28 00:19:33.682383 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 28 00:19:33.682393 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 28 00:19:33.682404 kernel: audit: type=1403 audit(1777335571.333:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 28 00:19:33.682419 systemd[1]: Successfully loaded SELinux policy in 147.103ms.
Apr 28 00:19:33.682451 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 72.311ms.
Apr 28 00:19:33.682473 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 28 00:19:33.682485 systemd[1]: Detected virtualization kvm.
Apr 28 00:19:33.682495 systemd[1]: Detected architecture x86-64.
Apr 28 00:19:33.682506 systemd[1]: Detected first boot.
Apr 28 00:19:33.682534 systemd[1]: Initializing machine ID from VM UUID.
Apr 28 00:19:33.682545 zram_generator::config[1053]: No configuration found.
Apr 28 00:19:33.682557 systemd[1]: Populated /etc with preset unit settings.
Apr 28 00:19:33.682570 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 28 00:19:33.682580 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 28 00:19:33.682591 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 28 00:19:33.682603 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 28 00:19:33.682613 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 28 00:19:33.682623 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 28 00:19:33.682634 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 28 00:19:33.682644 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 28 00:19:33.682655 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 28 00:19:33.682668 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 28 00:19:33.682677 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 28 00:19:33.682686 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 00:19:33.682694 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 00:19:33.682702 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 28 00:19:33.682711 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 28 00:19:33.682719 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 28 00:19:33.682727 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 28 00:19:33.682735 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 28 00:19:33.682756 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 00:19:33.682765 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 28 00:19:33.682773 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 28 00:19:33.682781 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 28 00:19:33.682790 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 28 00:19:33.682798 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 28 00:19:33.682806 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 28 00:19:33.682816 systemd[1]: Reached target slices.target - Slice Units.
Apr 28 00:19:33.682824 systemd[1]: Reached target swap.target - Swaps.
Apr 28 00:19:33.682832 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 28 00:19:33.682840 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 28 00:19:33.682848 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 00:19:33.682857 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 28 00:19:33.682865 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 28 00:19:33.682873 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 28 00:19:33.682881 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 28 00:19:33.682889 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 28 00:19:33.682899 systemd[1]: Mounting media.mount - External Media Directory...
Apr 28 00:19:33.682909 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 00:19:33.682921 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 28 00:19:33.682942 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 28 00:19:33.682975 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 28 00:19:33.682984 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 28 00:19:33.682992 systemd[1]: Reached target machines.target - Containers.
Apr 28 00:19:33.683000 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 28 00:19:33.683011 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 28 00:19:33.683019 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 28 00:19:33.683027 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 28 00:19:33.683035 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 28 00:19:33.683043 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 28 00:19:33.683051 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 28 00:19:33.683059 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 28 00:19:33.683068 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 28 00:19:33.683078 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 28 00:19:33.683086 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 28 00:19:33.683095 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 28 00:19:33.683102 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 28 00:19:33.683111 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 28 00:19:33.683119 kernel: loop: module loaded
Apr 28 00:19:33.683126 kernel: fuse: init (API version 7.39)
Apr 28 00:19:33.683134 kernel: ACPI: bus type drm_connector registered
Apr 28 00:19:33.683141 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 28 00:19:33.683162 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 28 00:19:33.683178 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 28 00:19:33.683187 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 28 00:19:33.683194 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 28 00:19:33.683203 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 28 00:19:33.683211 systemd[1]: Stopped verity-setup.service.
Apr 28 00:19:33.683238 systemd-journald[1130]: Collecting audit messages is disabled.
Apr 28 00:19:33.683260 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 00:19:33.683268 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 28 00:19:33.683277 systemd-journald[1130]: Journal started
Apr 28 00:19:33.683294 systemd-journald[1130]: Runtime Journal (/run/log/journal/c6ba4616a75645c7bc8a6dcdf2df3289) is 6.0M, max 48.4M, 42.3M free.
Apr 28 00:19:33.223047 systemd[1]: Queued start job for default target multi-user.target.
Apr 28 00:19:33.253971 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 28 00:19:33.254445 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 28 00:19:33.687360 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 28 00:19:33.689781 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 28 00:19:33.691569 systemd[1]: Mounted media.mount - External Media Directory.
Apr 28 00:19:33.693103 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 28 00:19:33.695102 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 28 00:19:33.697366 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 28 00:19:33.703022 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 28 00:19:33.705066 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 00:19:33.708799 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 28 00:19:33.708973 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 28 00:19:33.712907 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 28 00:19:33.713060 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 28 00:19:33.718188 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 28 00:19:33.718379 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 28 00:19:33.720979 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 28 00:19:33.721105 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 28 00:19:33.723637 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 28 00:19:33.723776 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 28 00:19:33.725585 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 28 00:19:33.725703 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 28 00:19:33.727442 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 28 00:19:33.729729 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 28 00:19:33.732134 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 28 00:19:33.747510 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 28 00:19:33.756575 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 28 00:19:33.767648 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 28 00:19:33.771696 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 28 00:19:33.771749 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 28 00:19:33.780562 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 28 00:19:33.802122 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 28 00:19:33.807149 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 28 00:19:33.808688 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 28 00:19:33.810506 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 28 00:19:33.814438 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 28 00:19:33.816203 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 28 00:19:33.817609 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 28 00:19:33.819242 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 28 00:19:33.821506 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 28 00:19:33.824741 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 28 00:19:33.827655 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 28 00:19:33.834414 systemd-journald[1130]: Time spent on flushing to /var/log/journal/c6ba4616a75645c7bc8a6dcdf2df3289 is 60.324ms for 955 entries.
Apr 28 00:19:33.834414 systemd-journald[1130]: System Journal (/var/log/journal/c6ba4616a75645c7bc8a6dcdf2df3289) is 8.0M, max 195.6M, 187.6M free.
Apr 28 00:19:33.993844 systemd-journald[1130]: Received client request to flush runtime journal.
Apr 28 00:19:33.993896 kernel: loop0: detected capacity change from 0 to 142488
Apr 28 00:19:33.993915 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 28 00:19:33.831733 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 28 00:19:33.833586 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 28 00:19:33.833741 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 28 00:19:33.840198 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 28 00:19:33.853584 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 28 00:19:33.878916 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 28 00:19:33.887491 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 28 00:19:33.895205 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 28 00:19:33.914676 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 28 00:19:33.917573 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 28 00:19:33.995538 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 28 00:19:34.008626 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 28 00:19:34.009240 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 28 00:19:34.012788 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 28 00:19:34.025969 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 28 00:19:34.029331 kernel: loop1: detected capacity change from 0 to 140768
Apr 28 00:19:34.054229 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Apr 28 00:19:34.054565 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Apr 28 00:19:34.060102 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 00:19:34.081400 kernel: loop2: detected capacity change from 0 to 228704
Apr 28 00:19:34.168409 kernel: loop3: detected capacity change from 0 to 142488
Apr 28 00:19:34.188349 kernel: loop4: detected capacity change from 0 to 140768
Apr 28 00:19:34.204345 kernel: loop5: detected capacity change from 0 to 228704
Apr 28 00:19:34.216746 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 28 00:19:34.217261 (sd-merge)[1195]: Merged extensions into '/usr'.
Apr 28 00:19:34.220436 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 28 00:19:34.220456 systemd[1]: Reloading...
Apr 28 00:19:34.437371 zram_generator::config[1222]: No configuration found.
Apr 28 00:19:34.646341 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 28 00:19:34.666839 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 00:19:34.708811 systemd[1]: Reloading finished in 487 ms.
Apr 28 00:19:34.796515 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 28 00:19:34.799248 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 28 00:19:34.924452 systemd[1]: Starting ensure-sysext.service...
Apr 28 00:19:34.930693 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 28 00:19:34.951574 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)...
Apr 28 00:19:34.951832 systemd[1]: Reloading...
Apr 28 00:19:35.128133 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 28 00:19:35.138864 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 28 00:19:35.140666 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 28 00:19:35.140862 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Apr 28 00:19:35.140903 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Apr 28 00:19:35.146397 zram_generator::config[1284]: No configuration found.
Apr 28 00:19:35.148888 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Apr 28 00:19:35.148906 systemd-tmpfiles[1259]: Skipping /boot
Apr 28 00:19:35.174786 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Apr 28 00:19:35.179294 systemd-tmpfiles[1259]: Skipping /boot
Apr 28 00:19:35.486984 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 00:19:35.595038 systemd[1]: Reloading finished in 642 ms.
Apr 28 00:19:35.643172 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 28 00:19:35.696730 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 00:19:35.797064 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 28 00:19:35.850992 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 28 00:19:35.855343 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 28 00:19:35.861921 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 28 00:19:35.871405 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 28 00:19:35.877521 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 28 00:19:35.889893 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 00:19:35.896530 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 28 00:19:35.909466 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 28 00:19:35.929038 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 28 00:19:36.007734 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 28 00:19:36.010212 augenrules[1348]: No rules
Apr 28 00:19:36.010235 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 28 00:19:36.020844 systemd-udevd[1335]: Using default interface naming scheme 'v255'.
Apr 28 00:19:36.022051 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 28 00:19:36.026508 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 00:19:36.029335 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 28 00:19:36.032769 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 28 00:19:36.034981 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 28 00:19:36.035188 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 28 00:19:36.047786 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 28 00:19:36.047927 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 28 00:19:36.052581 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 28 00:19:36.052720 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 28 00:19:36.073295 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 28 00:19:36.080020 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 28 00:19:36.091934 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 28 00:19:36.169764 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 00:19:36.169916 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 28 00:19:36.226099 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 28 00:19:36.305387 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 28 00:19:36.308213 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 28 00:19:36.311102 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 28 00:19:36.317815 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 28 00:19:36.335165 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 28 00:19:36.356642 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 28 00:19:36.359886 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 28 00:19:36.360081 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 00:19:36.361060 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 28 00:19:36.363747 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 28 00:19:36.363921 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 28 00:19:36.368511 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 28 00:19:36.368648 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 28 00:19:36.370569 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 28 00:19:36.370702 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 28 00:19:36.372736 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 28 00:19:36.372866 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 28 00:19:36.380181 systemd[1]: Finished ensure-sysext.service.
Apr 28 00:19:36.392672 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 28 00:19:36.392743 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 28 00:19:36.395559 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 28 00:19:36.489419 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 28 00:19:36.496552 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 28 00:19:36.627336 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1364)
Apr 28 00:19:36.626871 systemd-networkd[1388]: lo: Link UP
Apr 28 00:19:36.626874 systemd-networkd[1388]: lo: Gained carrier
Apr 28 00:19:36.626906 systemd-resolved[1329]: Positive Trust Anchors:
Apr 28 00:19:36.626934 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 28 00:19:36.626987 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 28 00:19:36.627523 systemd-networkd[1388]: Enumeration completed
Apr 28 00:19:36.627675 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 28 00:19:36.689613 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 28 00:19:36.699557 systemd-resolved[1329]: Defaulting to hostname 'linux'.
Apr 28 00:19:36.706744 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 28 00:19:36.709582 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 28 00:19:36.718055 systemd[1]: Reached target network.target - Network.
Apr 28 00:19:36.719566 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 28 00:19:36.721338 systemd[1]: Reached target time-set.target - System Time Set.
Apr 28 00:19:36.740689 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 28 00:19:36.740706 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 28 00:19:36.743869 systemd-networkd[1388]: eth0: Link UP
Apr 28 00:19:36.743878 systemd-networkd[1388]: eth0: Gained carrier
Apr 28 00:19:36.743897 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 28 00:19:36.761778 systemd-networkd[1388]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 28 00:19:36.768586 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection.
Apr 28 00:19:37.346569 systemd-resolved[1329]: Clock change detected. Flushing caches.
Apr 28 00:19:37.347069 systemd-timesyncd[1397]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 28 00:19:37.347106 systemd-timesyncd[1397]: Initial clock synchronization to Tue 2026-04-28 00:19:37.346403 UTC.
Apr 28 00:19:37.356346 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 28 00:19:37.363441 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 28 00:19:37.366257 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 28 00:19:37.376143 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 28 00:19:37.376403 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 28 00:19:37.378518 kernel: ACPI: button: Power Button [PWRF]
Apr 28 00:19:37.378534 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 28 00:19:37.400263 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 28 00:19:37.408327 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 28 00:19:37.569986 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 00:19:37.696567 kernel: mousedev: PS/2 mouse device common for all mice
Apr 28 00:19:37.865075 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 28 00:19:37.897772 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 28 00:19:37.965150 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 28 00:19:38.061330 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 28 00:19:38.063911 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 00:19:38.090113 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 28 00:19:38.097635 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 28 00:19:38.122336 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 28 00:19:38.141534 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 28 00:19:38.167046 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 28 00:19:38.173394 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 28 00:19:38.175323 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 28 00:19:38.206114 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 28 00:19:38.208860 systemd[1]: Reached target paths.target - Path Units.
Apr 28 00:19:38.210644 systemd[1]: Reached target timers.target - Timer Units.
Apr 28 00:19:38.220132 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 28 00:19:38.224987 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 28 00:19:38.246210 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 28 00:19:38.249905 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 28 00:19:38.251950 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 28 00:19:38.253574 systemd[1]: Reached target sockets.target - Socket Units.
Apr 28 00:19:38.254922 systemd[1]: Reached target basic.target - Basic System.
Apr 28 00:19:38.255041 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 28 00:19:38.255066 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 28 00:19:38.256468 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 28 00:19:38.263464 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 28 00:19:38.270431 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 28 00:19:38.275532 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 28 00:19:38.278883 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 28 00:19:38.282741 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 28 00:19:38.283470 jq[1429]: false
Apr 28 00:19:38.286871 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 28 00:19:38.302830 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 28 00:19:38.303636 extend-filesystems[1430]: Found loop3
Apr 28 00:19:38.303636 extend-filesystems[1430]: Found loop4
Apr 28 00:19:38.309349 extend-filesystems[1430]: Found loop5
Apr 28 00:19:38.309349 extend-filesystems[1430]: Found sr0
Apr 28 00:19:38.309349 extend-filesystems[1430]: Found vda
Apr 28 00:19:38.309349 extend-filesystems[1430]: Found vda1
Apr 28 00:19:38.309349 extend-filesystems[1430]: Found vda2
Apr 28 00:19:38.309349 extend-filesystems[1430]: Found vda3
Apr 28 00:19:38.309349 extend-filesystems[1430]: Found usr
Apr 28 00:19:38.309349 extend-filesystems[1430]: Found vda4
Apr 28 00:19:38.309349 extend-filesystems[1430]: Found vda6
Apr 28 00:19:38.309349 extend-filesystems[1430]: Found vda7
Apr 28 00:19:38.309349 extend-filesystems[1430]: Found vda9
Apr 28 00:19:38.309349 extend-filesystems[1430]: Checking size of /dev/vda9
Apr 28 00:19:38.306406 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 28 00:19:38.314695 dbus-daemon[1428]: [system] SELinux support is enabled
Apr 28 00:19:38.333089 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 28 00:19:38.342954 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 28 00:19:38.344977 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 28 00:19:38.345820 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 28 00:19:38.346870 systemd[1]: Starting update-engine.service - Update Engine...
Apr 28 00:19:38.352857 extend-filesystems[1430]: Resized partition /dev/vda9
Apr 28 00:19:38.355458 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 28 00:19:38.358433 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 28 00:19:38.365707 extend-filesystems[1450]: resize2fs 1.47.1 (20-May-2024)
Apr 28 00:19:38.379877 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1366)
Apr 28 00:19:38.371208 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 28 00:19:38.383188 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 28 00:19:38.383374 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 28 00:19:38.383603 systemd[1]: motdgen.service: Deactivated successfully.
Apr 28 00:19:38.383693 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 28 00:19:38.383772 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 28 00:19:38.392579 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 28 00:19:38.392773 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 28 00:19:38.397975 jq[1449]: true
Apr 28 00:19:38.407738 (ntainerd)[1454]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 28 00:19:38.425809 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 28 00:19:38.425915 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 28 00:19:38.427835 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 28 00:19:38.427852 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 28 00:19:38.434682 update_engine[1448]: I20260428 00:19:38.433508 1448 main.cc:92] Flatcar Update Engine starting
Apr 28 00:19:38.582119 tar[1453]: linux-amd64/LICENSE
Apr 28 00:19:38.582119 tar[1453]: linux-amd64/helm
Apr 28 00:19:38.604414 jq[1456]: true
Apr 28 00:19:38.617983 update_engine[1448]: I20260428 00:19:38.615971 1448 update_check_scheduler.cc:74] Next update check in 8m50s
Apr 28 00:19:38.619802 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 28 00:19:38.656537 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 28 00:19:38.656712 extend-filesystems[1450]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 28 00:19:38.656712 extend-filesystems[1450]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 28 00:19:38.656712 extend-filesystems[1450]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 28 00:19:38.622135 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 28 00:19:38.681927 extend-filesystems[1430]: Resized filesystem in /dev/vda9
Apr 28 00:19:38.626771 systemd[1]: Started update-engine.service - Update Engine.
Apr 28 00:19:38.633728 systemd-logind[1444]: New seat seat0.
Apr 28 00:19:38.656268 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 28 00:19:38.658084 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 28 00:19:38.658321 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 28 00:19:38.689354 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 28 00:19:38.710118 sshd_keygen[1451]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 28 00:19:38.744042 bash[1488]: Updated "/home/core/.ssh/authorized_keys"
Apr 28 00:19:38.745617 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 28 00:19:38.754624 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 28 00:19:38.804067 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 28 00:19:38.885918 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 28 00:19:38.900097 systemd-networkd[1388]: eth0: Gained IPv6LL
Apr 28 00:19:38.908592 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 28 00:19:38.962593 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 28 00:19:38.969972 systemd[1]: Reached target network-online.target - Network is Online.
Apr 28 00:19:38.994345 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 28 00:19:39.145028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:19:39.152221 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 28 00:19:39.157203 systemd[1]: issuegen.service: Deactivated successfully.
Apr 28 00:19:39.157481 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 28 00:19:39.216274 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 28 00:19:39.241593 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 28 00:19:39.262140 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 28 00:19:39.264823 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 28 00:19:39.335542 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 28 00:19:39.344546 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 28 00:19:39.365968 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 28 00:19:39.433036 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 28 00:19:39.445180 systemd[1]: Reached target getty.target - Login Prompts.
Apr 28 00:19:39.472028 containerd[1454]: time="2026-04-28T00:19:39.471758093Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 28 00:19:39.883675 containerd[1454]: time="2026-04-28T00:19:39.883007478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 28 00:19:39.887037 containerd[1454]: time="2026-04-28T00:19:39.886956718Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 28 00:19:39.887037 containerd[1454]: time="2026-04-28T00:19:39.887014507Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 28 00:19:39.887135 containerd[1454]: time="2026-04-28T00:19:39.887076238Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 28 00:19:39.887476 containerd[1454]: time="2026-04-28T00:19:39.887431238Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 28 00:19:39.887476 containerd[1454]: time="2026-04-28T00:19:39.887459314Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 28 00:19:39.887557 containerd[1454]: time="2026-04-28T00:19:39.887519644Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 00:19:39.887557 containerd[1454]: time="2026-04-28T00:19:39.887539363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 28 00:19:39.887801 containerd[1454]: time="2026-04-28T00:19:39.887759169Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 00:19:39.887821 containerd[1454]: time="2026-04-28T00:19:39.887811463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 28 00:19:39.887845 containerd[1454]: time="2026-04-28T00:19:39.887823507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 00:19:39.887845 containerd[1454]: time="2026-04-28T00:19:39.887831029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 28 00:19:39.887933 containerd[1454]: time="2026-04-28T00:19:39.887903774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 28 00:19:39.888204 containerd[1454]: time="2026-04-28T00:19:39.888174924Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 28 00:19:39.888294 containerd[1454]: time="2026-04-28T00:19:39.888276085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 00:19:39.888324 containerd[1454]: time="2026-04-28T00:19:39.888294678Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 28 00:19:39.888393 containerd[1454]: time="2026-04-28T00:19:39.888377193Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 28 00:19:39.888437 containerd[1454]: time="2026-04-28T00:19:39.888421989Z" level=info msg="metadata content store policy set" policy=shared
Apr 28 00:19:39.909042 containerd[1454]: time="2026-04-28T00:19:39.906892647Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 28 00:19:39.909042 containerd[1454]: time="2026-04-28T00:19:39.907232048Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 28 00:19:39.909042 containerd[1454]: time="2026-04-28T00:19:39.907336081Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 28 00:19:39.909042 containerd[1454]: time="2026-04-28T00:19:39.907393090Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 28 00:19:39.909042 containerd[1454]: time="2026-04-28T00:19:39.907414662Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 28 00:19:39.909042 containerd[1454]: time="2026-04-28T00:19:39.908245506Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 28 00:19:39.909042 containerd[1454]: time="2026-04-28T00:19:39.908799337Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 28 00:19:39.909042 containerd[1454]: time="2026-04-28T00:19:39.909002364Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 28 00:19:39.909042 containerd[1454]: time="2026-04-28T00:19:39.909017762Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 28 00:19:39.909042 containerd[1454]: time="2026-04-28T00:19:39.909027193Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 28 00:19:39.909042 containerd[1454]: time="2026-04-28T00:19:39.909039014Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 28 00:19:39.909042 containerd[1454]: time="2026-04-28T00:19:39.909063408Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 28 00:19:39.909042 containerd[1454]: time="2026-04-28T00:19:39.909085795Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 28 00:19:39.909042 containerd[1454]: time="2026-04-28T00:19:39.909115582Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 28 00:19:39.914149 containerd[1454]: time="2026-04-28T00:19:39.909137841Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 28 00:19:39.914149 containerd[1454]: time="2026-04-28T00:19:39.909149980Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 28 00:19:39.914149 containerd[1454]: time="2026-04-28T00:19:39.909166483Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 28 00:19:39.914149 containerd[1454]: time="2026-04-28T00:19:39.909184489Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 28 00:19:39.914149 containerd[1454]: time="2026-04-28T00:19:39.909202454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 28 00:19:39.914149 containerd[1454]: time="2026-04-28T00:19:39.909294437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 28 00:19:39.914149 containerd[1454]: time="2026-04-28T00:19:39.909376348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 28 00:19:39.914149 containerd[1454]: time="2026-04-28T00:19:39.909401754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 28 00:19:39.914149 containerd[1454]: time="2026-04-28T00:19:39.909418999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 28 00:19:39.914149 containerd[1454]: time="2026-04-28T00:19:39.909443365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 28 00:19:39.914149 containerd[1454]: time="2026-04-28T00:19:39.909469251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 28 00:19:39.914149 containerd[1454]: time="2026-04-28T00:19:39.909495304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 28 00:19:39.914149 containerd[1454]: time="2026-04-28T00:19:39.909506549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 28 00:19:39.914149 containerd[1454]: time="2026-04-28T00:19:39.909524517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 28 00:19:39.914433 containerd[1454]: time="2026-04-28T00:19:39.909535091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 28 00:19:39.914433 containerd[1454]: time="2026-04-28T00:19:39.909550455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 28 00:19:39.914433 containerd[1454]: time="2026-04-28T00:19:39.909566791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 28 00:19:39.914433 containerd[1454]: time="2026-04-28T00:19:39.909581872Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 28 00:19:39.914433 containerd[1454]: time="2026-04-28T00:19:39.909610556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 28 00:19:39.914433 containerd[1454]: time="2026-04-28T00:19:39.909621533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 28 00:19:39.914433 containerd[1454]: time="2026-04-28T00:19:39.909629609Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 28 00:19:39.914433 containerd[1454]: time="2026-04-28T00:19:39.909802171Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 28 00:19:39.914433 containerd[1454]: time="2026-04-28T00:19:39.909819697Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 28 00:19:39.914433 containerd[1454]: time="2026-04-28T00:19:39.909829300Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 28 00:19:39.914433 containerd[1454]: time="2026-04-28T00:19:39.909838264Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 28 00:19:39.914433 containerd[1454]: time="2026-04-28T00:19:39.909845471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 28 00:19:39.914433 containerd[1454]: time="2026-04-28T00:19:39.909862667Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 28 00:19:39.914433 containerd[1454]: time="2026-04-28T00:19:39.909871838Z" level=info msg="NRI interface is disabled by configuration."
Apr 28 00:19:39.914773 containerd[1454]: time="2026-04-28T00:19:39.909889341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 28 00:19:39.914794 containerd[1454]: time="2026-04-28T00:19:39.913483652Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 28 00:19:39.914794 containerd[1454]: time="2026-04-28T00:19:39.913813096Z" level=info msg="Connect containerd service"
Apr 28 00:19:39.914794 containerd[1454]: time="2026-04-28T00:19:39.913918548Z" level=info msg="using legacy CRI server"
Apr 28 00:19:39.914794 containerd[1454]: time="2026-04-28T00:19:39.913925470Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 28 00:19:39.914794 containerd[1454]: time="2026-04-28T00:19:39.914178513Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 28 00:19:39.916044 containerd[1454]: time="2026-04-28T00:19:39.915956706Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 28 00:19:39.916175 containerd[1454]: time="2026-04-28T00:19:39.916111645Z" level=info msg="Start subscribing containerd event"
Apr 28 00:19:39.916901 containerd[1454]: time="2026-04-28T00:19:39.916202699Z" level=info msg="Start recovering state"
Apr 28 00:19:39.917160 containerd[1454]: time="2026-04-28T00:19:39.917070467Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 28 00:19:39.917160 containerd[1454]: time="2026-04-28T00:19:39.917122780Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 28 00:19:39.928110 containerd[1454]: time="2026-04-28T00:19:39.925848241Z" level=info msg="Start event monitor"
Apr 28 00:19:39.932448 containerd[1454]: time="2026-04-28T00:19:39.925885488Z" level=info msg="Start snapshots syncer"
Apr 28 00:19:39.935851 containerd[1454]: time="2026-04-28T00:19:39.932555001Z" level=info msg="Start cni network conf syncer for default"
Apr 28 00:19:39.935965 containerd[1454]: time="2026-04-28T00:19:39.935952437Z" level=info msg="Start streaming server"
Apr 28 00:19:39.936168 containerd[1454]: time="2026-04-28T00:19:39.936157793Z" level=info msg="containerd successfully booted in 0.466255s"
Apr 28 00:19:39.936845 systemd[1]: Started containerd.service - containerd container runtime.
Apr 28 00:19:40.906618 tar[1453]: linux-amd64/README.md
Apr 28 00:19:41.098464 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 28 00:19:46.555951 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:19:46.564900 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 28 00:19:46.566004 systemd[1]: Startup finished in 1.967s (kernel) + 14.399s (initrd) + 14.815s (userspace) = 31.183s.
Apr 28 00:19:46.611941 (kubelet)[1545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:19:47.485924 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 28 00:19:47.501521 systemd[1]: Started sshd@0-10.0.0.14:22-10.0.0.1:47772.service - OpenSSH per-connection server daemon (10.0.0.1:47772).
Apr 28 00:19:48.139716 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 47772 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 00:19:48.153938 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:19:48.688255 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 28 00:19:48.781271 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 28 00:19:49.036060 systemd-logind[1444]: New session 1 of user core.
Apr 28 00:19:49.263983 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 28 00:19:49.385838 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 28 00:19:49.656997 (systemd)[1556]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 28 00:19:50.803145 systemd[1556]: Queued start job for default target default.target.
Apr 28 00:19:50.882369 systemd[1556]: Created slice app.slice - User Application Slice.
Apr 28 00:19:50.882473 systemd[1556]: Reached target paths.target - Paths.
Apr 28 00:19:50.882494 systemd[1556]: Reached target timers.target - Timers.
Apr 28 00:19:50.953702 systemd[1556]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 28 00:19:52.037446 systemd[1556]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 28 00:19:52.037588 systemd[1556]: Reached target sockets.target - Sockets.
Apr 28 00:19:52.037603 systemd[1556]: Reached target basic.target - Basic System.
Apr 28 00:19:52.037644 systemd[1556]: Reached target default.target - Main User Target.
Apr 28 00:19:52.037713 systemd[1556]: Startup finished in 2.217s.
Apr 28 00:19:52.038296 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 28 00:19:52.073903 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 28 00:19:52.311920 systemd[1]: Started sshd@1-10.0.0.14:22-10.0.0.1:35052.service - OpenSSH per-connection server daemon (10.0.0.1:35052).
Apr 28 00:19:52.821549 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 35052 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 00:19:52.859376 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:19:52.943494 systemd-logind[1444]: New session 2 of user core.
Apr 28 00:19:52.983019 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 28 00:19:53.449906 sshd[1568]: pam_unix(sshd:session): session closed for user core
Apr 28 00:19:53.646570 systemd[1]: sshd@1-10.0.0.14:22-10.0.0.1:35052.service: Deactivated successfully.
Apr 28 00:19:53.813035 systemd[1]: session-2.scope: Deactivated successfully.
Apr 28 00:19:53.815883 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit.
Apr 28 00:19:53.840715 systemd[1]: Started sshd@2-10.0.0.14:22-10.0.0.1:35058.service - OpenSSH per-connection server daemon (10.0.0.1:35058).
Apr 28 00:19:53.842951 systemd-logind[1444]: Removed session 2.
Apr 28 00:19:53.851449 kubelet[1545]: E0428 00:19:53.848116 1545 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:19:53.858031 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:19:53.858214 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:19:53.858621 systemd[1]: kubelet.service: Consumed 9.657s CPU time.
Apr 28 00:19:54.240847 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 35058 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 00:19:54.254810 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:19:54.444085 systemd-logind[1444]: New session 3 of user core.
Apr 28 00:19:54.471147 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 28 00:19:54.702728 sshd[1575]: pam_unix(sshd:session): session closed for user core
Apr 28 00:19:54.797112 systemd[1]: sshd@2-10.0.0.14:22-10.0.0.1:35058.service: Deactivated successfully.
Apr 28 00:19:54.822562 systemd[1]: session-3.scope: Deactivated successfully.
Apr 28 00:19:54.997375 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit.
Apr 28 00:19:55.216188 systemd[1]: Started sshd@3-10.0.0.14:22-10.0.0.1:35064.service - OpenSSH per-connection server daemon (10.0.0.1:35064).
Apr 28 00:19:55.226156 systemd-logind[1444]: Removed session 3.
Apr 28 00:19:55.670695 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 35064 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 00:19:55.796445 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:19:55.900599 systemd-logind[1444]: New session 4 of user core.
Apr 28 00:19:55.978328 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 28 00:19:56.241047 sshd[1584]: pam_unix(sshd:session): session closed for user core
Apr 28 00:19:56.374478 systemd[1]: sshd@3-10.0.0.14:22-10.0.0.1:35064.service: Deactivated successfully.
Apr 28 00:19:56.415309 systemd[1]: session-4.scope: Deactivated successfully.
Apr 28 00:19:56.437939 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit.
Apr 28 00:19:56.475156 systemd[1]: Started sshd@4-10.0.0.14:22-10.0.0.1:35070.service - OpenSSH per-connection server daemon (10.0.0.1:35070).
Apr 28 00:19:56.482023 systemd-logind[1444]: Removed session 4.
Apr 28 00:19:56.918045 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 35070 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 00:19:57.014249 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:19:57.232620 systemd-logind[1444]: New session 5 of user core.
Apr 28 00:19:57.289719 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 28 00:19:57.451153 sudo[1594]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 28 00:19:57.453367 sudo[1594]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 00:19:57.696212 sudo[1594]: pam_unix(sudo:session): session closed for user root
Apr 28 00:19:57.780846 sshd[1591]: pam_unix(sshd:session): session closed for user core
Apr 28 00:19:57.912368 systemd[1]: sshd@4-10.0.0.14:22-10.0.0.1:35070.service: Deactivated successfully.
Apr 28 00:19:57.914922 systemd[1]: session-5.scope: Deactivated successfully.
Apr 28 00:19:57.915863 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit.
Apr 28 00:19:57.940705 systemd[1]: Started sshd@5-10.0.0.14:22-10.0.0.1:35086.service - OpenSSH per-connection server daemon (10.0.0.1:35086).
Apr 28 00:19:57.942723 systemd-logind[1444]: Removed session 5.
Apr 28 00:19:58.154578 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 35086 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 00:19:58.165093 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:19:58.448834 systemd-logind[1444]: New session 6 of user core.
Apr 28 00:19:58.463745 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 28 00:19:59.720105 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 28 00:19:59.720393 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 00:20:00.065208 sudo[1603]: pam_unix(sudo:session): session closed for user root
Apr 28 00:20:00.221518 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 28 00:20:00.221802 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 00:20:00.501156 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 28 00:20:00.652744 auditctl[1606]: No rules
Apr 28 00:20:00.686875 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 28 00:20:00.687722 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 28 00:20:00.733895 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 28 00:20:01.081359 augenrules[1624]: No rules
Apr 28 00:20:01.128966 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 28 00:20:01.131282 sudo[1602]: pam_unix(sudo:session): session closed for user root
Apr 28 00:20:01.143115 sshd[1599]: pam_unix(sshd:session): session closed for user core
Apr 28 00:20:01.171989 systemd[1]: sshd@5-10.0.0.14:22-10.0.0.1:35086.service: Deactivated successfully.
Apr 28 00:20:01.174004 systemd[1]: session-6.scope: Deactivated successfully.
Apr 28 00:20:01.201037 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit.
Apr 28 00:20:01.219257 systemd[1]: Started sshd@6-10.0.0.14:22-10.0.0.1:34538.service - OpenSSH per-connection server daemon (10.0.0.1:34538).
Apr 28 00:20:01.225100 systemd-logind[1444]: Removed session 6.
Apr 28 00:20:01.591695 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 34538 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 00:20:01.661503 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:20:01.925969 systemd-logind[1444]: New session 7 of user core.
Apr 28 00:20:01.948429 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 28 00:20:02.125349 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 28 00:20:02.125686 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 00:20:04.267247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 28 00:20:04.402878 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:20:06.255029 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 28 00:20:06.275262 (dockerd)[1656]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 28 00:20:07.150229 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:20:07.157180 (kubelet)[1661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:20:09.878089 kubelet[1661]: E0428 00:20:09.877635 1661 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:20:09.955862 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:20:09.959608 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:20:10.069256 systemd[1]: kubelet.service: Consumed 3.513s CPU time.
Apr 28 00:20:10.488319 dockerd[1656]: time="2026-04-28T00:20:10.487757977Z" level=info msg="Starting up"
Apr 28 00:20:12.805754 dockerd[1656]: time="2026-04-28T00:20:12.803424135Z" level=info msg="Loading containers: start."
Apr 28 00:20:16.228949 kernel: Initializing XFRM netlink socket
Apr 28 00:20:18.789692 systemd-networkd[1388]: docker0: Link UP
Apr 28 00:20:20.185750 dockerd[1656]: time="2026-04-28T00:20:20.183220719Z" level=info msg="Loading containers: done."
Apr 28 00:20:20.376375 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 28 00:20:20.673140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:20:20.739981 dockerd[1656]: time="2026-04-28T00:20:20.738750267Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 28 00:20:20.781858 dockerd[1656]: time="2026-04-28T00:20:20.777192367Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 28 00:20:20.784337 dockerd[1656]: time="2026-04-28T00:20:20.783327292Z" level=info msg="Daemon has completed initialization"
Apr 28 00:20:22.673577 dockerd[1656]: time="2026-04-28T00:20:22.671531379Z" level=info msg="API listen on /run/docker.sock"
Apr 28 00:20:22.676644 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 28 00:20:23.396410 update_engine[1448]: I20260428 00:20:23.392062 1448 update_attempter.cc:509] Updating boot flags...
Apr 28 00:20:23.848807 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1820)
Apr 28 00:20:24.594512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:20:24.775574 (kubelet)[1833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:20:24.911396 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1818)
Apr 28 00:20:25.773208 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1818)
Apr 28 00:20:30.356820 kubelet[1833]: E0428 00:20:30.356532 1833 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:20:30.399820 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:20:30.400161 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:20:30.400864 systemd[1]: kubelet.service: Consumed 5.317s CPU time.
Apr 28 00:20:36.615056 containerd[1454]: time="2026-04-28T00:20:36.614545906Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 28 00:20:40.489881 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 28 00:20:40.746953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:20:42.604940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3488928655.mount: Deactivated successfully.
Apr 28 00:20:43.422946 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:20:43.448276 (kubelet)[1867]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:20:48.171808 kubelet[1867]: E0428 00:20:48.166397 1867 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:20:48.197007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:20:48.200064 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:20:48.203006 systemd[1]: kubelet.service: Consumed 4.983s CPU time.
Apr 28 00:20:58.450587 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 28 00:20:58.635346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:21:01.502189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:21:01.620358 (kubelet)[1931]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:21:04.936320 kubelet[1931]: E0428 00:21:04.907757 1931 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:21:04.941972 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:21:04.942195 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:21:04.942714 systemd[1]: kubelet.service: Consumed 4.276s CPU time.
Apr 28 00:21:05.673547 containerd[1454]: time="2026-04-28T00:21:05.672777000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:21:05.706882 containerd[1454]: time="2026-04-28T00:21:05.706123732Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427"
Apr 28 00:21:05.805433 containerd[1454]: time="2026-04-28T00:21:05.804556723Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:21:06.073906 containerd[1454]: time="2026-04-28T00:21:06.062634879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:21:06.265912 containerd[1454]: time="2026-04-28T00:21:06.262221454Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 29.64720437s"
Apr 28 00:21:06.265912 containerd[1454]: time="2026-04-28T00:21:06.266212907Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 28 00:21:06.298592 containerd[1454]: time="2026-04-28T00:21:06.298262150Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 28 00:21:16.006588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 28 00:21:16.685988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:21:20.582640 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:21:20.609415 (kubelet)[1950]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:21:24.237595 containerd[1454]: time="2026-04-28T00:21:24.236484433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:21:24.336285 containerd[1454]: time="2026-04-28T00:21:24.255939239Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379"
Apr 28 00:21:24.473298 containerd[1454]: time="2026-04-28T00:21:24.466411411Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:21:25.689933 containerd[1454]: time="2026-04-28T00:21:25.685580045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:21:26.150929 containerd[1454]: time="2026-04-28T00:21:26.142126737Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 19.843566501s"
Apr 28 00:21:26.150929 containerd[1454]: time="2026-04-28T00:21:26.144898883Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\""
Apr 28 00:21:26.230188 containerd[1454]: time="2026-04-28T00:21:26.229039168Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 28 00:21:30.792780 kubelet[1950]: E0428 00:21:30.791482 1950 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:21:30.821853 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:21:30.823514 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:21:30.824735 systemd[1]: kubelet.service: Consumed 9.015s CPU time.
Apr 28 00:21:35.539249 containerd[1454]: time="2026-04-28T00:21:35.536635859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:21:35.550288 containerd[1454]: time="2026-04-28T00:21:35.543907913Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688"
Apr 28 00:21:35.564922 containerd[1454]: time="2026-04-28T00:21:35.563387947Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:21:35.838413 containerd[1454]: time="2026-04-28T00:21:35.835402696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:21:35.978082 containerd[1454]: time="2026-04-28T00:21:35.975286410Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 9.743929074s"
Apr 28 00:21:35.986035 containerd[1454]: time="2026-04-28T00:21:35.978399524Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\""
Apr 28 00:21:36.042731 containerd[1454]: time="2026-04-28T00:21:36.042318425Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 28 00:21:40.919386 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Apr 28 00:21:40.935226 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:21:42.870626 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:21:42.894645 (kubelet)[1975]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:21:44.264959 kubelet[1975]: E0428 00:21:44.264334 1975 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:21:44.406387 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:21:44.409005 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:21:44.425031 systemd[1]: kubelet.service: Consumed 2.181s CPU time.
Apr 28 00:21:54.616585 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Apr 28 00:21:54.802485 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:21:58.193992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:21:58.213895 (kubelet)[1991]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:21:58.897480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3753909429.mount: Deactivated successfully.
Apr 28 00:22:02.453422 kubelet[1991]: E0428 00:22:02.453072 1991 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:22:02.544165 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:22:02.544638 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:22:02.558105 systemd[1]: kubelet.service: Consumed 4.922s CPU time.
Apr 28 00:22:07.308268 containerd[1454]: time="2026-04-28T00:22:07.300526505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:22:07.364818 containerd[1454]: time="2026-04-28T00:22:07.336394158Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605"
Apr 28 00:22:07.366015 containerd[1454]: time="2026-04-28T00:22:07.365836508Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:22:08.592897 containerd[1454]: time="2026-04-28T00:22:08.591920924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:22:08.742060 containerd[1454]: time="2026-04-28T00:22:08.739813029Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 32.697252614s"
Apr 28 00:22:08.742060 containerd[1454]: time="2026-04-28T00:22:08.739938758Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\""
Apr 28 00:22:08.806599 containerd[1454]: time="2026-04-28T00:22:08.805986019Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 28 00:22:12.774288 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Apr 28 00:22:12.959475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:22:16.119241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount842588704.mount: Deactivated successfully.
Apr 28 00:22:17.830908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:22:17.978987 (kubelet)[2017]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:22:31.440000 kubelet[2017]: E0428 00:22:31.439442 2017 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:22:31.544734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:22:31.584929 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:22:31.682870 systemd[1]: kubelet.service: Consumed 11.743s CPU time.
Apr 28 00:22:41.804376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Apr 28 00:22:41.843108 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:22:46.483851 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:22:46.526159 (kubelet)[2065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:22:51.863690 kubelet[2065]: E0428 00:22:51.863285 2065 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:22:51.895078 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:22:51.897207 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:22:51.917307 systemd[1]: kubelet.service: Consumed 6.014s CPU time. Apr 28 00:22:56.928281 containerd[1454]: time="2026-04-28T00:22:56.927413476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:22:56.949076 containerd[1454]: time="2026-04-28T00:22:56.935416218Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 28 00:22:57.317255 containerd[1454]: time="2026-04-28T00:22:57.311299463Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:22:58.201529 containerd[1454]: time="2026-04-28T00:22:58.198238136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:22:58.464998 containerd[1454]: time="2026-04-28T00:22:58.454182039Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id 
\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 49.647877088s" Apr 28 00:22:58.464998 containerd[1454]: time="2026-04-28T00:22:58.455568485Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 28 00:22:58.578517 containerd[1454]: time="2026-04-28T00:22:58.578154110Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 28 00:23:02.222604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Apr 28 00:23:02.535018 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:23:04.870145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1366794152.mount: Deactivated successfully. 
Apr 28 00:23:05.039367 containerd[1454]: time="2026-04-28T00:23:04.985219440Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 28 00:23:05.039367 containerd[1454]: time="2026-04-28T00:23:05.035443620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:23:05.132629 containerd[1454]: time="2026-04-28T00:23:05.131624496Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:23:06.406056 containerd[1454]: time="2026-04-28T00:23:06.404121974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:23:06.809479 containerd[1454]: time="2026-04-28T00:23:06.804475277Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 8.223532351s" Apr 28 00:23:06.826921 containerd[1454]: time="2026-04-28T00:23:06.810593996Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 28 00:23:06.867637 containerd[1454]: time="2026-04-28T00:23:06.867481009Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 28 00:23:09.085040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 00:23:09.280295 (kubelet)[2101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:23:11.810854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3487234687.mount: Deactivated successfully. Apr 28 00:23:13.832028 kubelet[2101]: E0428 00:23:13.827398 2101 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:23:13.844742 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:23:13.844982 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:23:13.968278 systemd[1]: kubelet.service: Consumed 6.380s CPU time. Apr 28 00:23:24.042540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Apr 28 00:23:24.184080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:23:26.216290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 00:23:26.251357 (kubelet)[2171]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:23:30.699266 containerd[1454]: time="2026-04-28T00:23:30.697835410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:23:30.721554 containerd[1454]: time="2026-04-28T00:23:30.719450528Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826" Apr 28 00:23:30.722154 kubelet[2171]: E0428 00:23:30.722071 2171 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:23:30.722887 containerd[1454]: time="2026-04-28T00:23:30.722822275Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:23:30.745235 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:23:30.811534 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:23:30.872190 systemd[1]: kubelet.service: Consumed 4.559s CPU time. 
Apr 28 00:23:30.997544 containerd[1454]: time="2026-04-28T00:23:30.997046841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:23:30.997974 containerd[1454]: time="2026-04-28T00:23:30.997811326Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 24.130234232s" Apr 28 00:23:30.997974 containerd[1454]: time="2026-04-28T00:23:30.997920753Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 28 00:23:42.185706 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Apr 28 00:23:43.100588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:24:00.223783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:24:00.265867 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:24:16.564563 kubelet[2209]: E0428 00:24:16.490257 2209 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:24:16.703224 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:24:16.723985 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 28 00:24:16.745527 systemd[1]: kubelet.service: Consumed 17.298s CPU time. Apr 28 00:24:27.249141 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Apr 28 00:24:27.973269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:24:39.656499 containerd[1454]: time="2026-04-28T00:24:39.651746794Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.8\"" Apr 28 00:24:43.900410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:24:44.000090 (kubelet)[2243]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:25:18.252403 kubelet[2243]: E0428 00:25:18.242378 2243 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:25:18.306508 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:25:18.307633 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:25:18.412603 systemd[1]: kubelet.service: Consumed 30.182s CPU time, 5.5M memory peak, 0B memory swap peak. 
Apr 28 00:25:24.674202 containerd[1454]: time="2026-04-28T00:25:24.660133810Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.8: active requests=0, bytes read=29285913" Apr 28 00:25:24.688034 containerd[1454]: time="2026-04-28T00:25:24.660151656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:25:24.834888 containerd[1454]: time="2026-04-28T00:25:24.833175134Z" level=info msg="ImageCreate event name:\"sha256:dc64713f4ac867ea18e11a58b9d7919f5636e80c652734e5aaba316218bdbbdb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:25:26.525078 containerd[1454]: time="2026-04-28T00:25:26.522328453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d1f1afdd389ba0b99233830af563d7da79484b8bae6ff905d6edbcb419127bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:25:27.366340 containerd[1454]: time="2026-04-28T00:25:27.364240753Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.8\" with image id \"sha256:dc64713f4ac867ea18e11a58b9d7919f5636e80c652734e5aaba316218bdbbdb\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d1f1afdd389ba0b99233830af563d7da79484b8bae6ff905d6edbcb419127bd\", size \"30111158\" in 47.7120327s" Apr 28 00:25:27.368997 containerd[1454]: time="2026-04-28T00:25:27.367613275Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.8\" returns image reference \"sha256:dc64713f4ac867ea18e11a58b9d7919f5636e80c652734e5aaba316218bdbbdb\"" Apr 28 00:25:27.891060 containerd[1454]: time="2026-04-28T00:25:27.890550089Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.8\"" Apr 28 00:25:28.668105 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. 
Apr 28 00:25:28.945884 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:25:34.200599 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:25:34.387202 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:25:50.180331 kubelet[2281]: E0428 00:25:50.173339 2281 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:25:50.273354 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:25:50.279927 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:25:50.307605 systemd[1]: kubelet.service: Consumed 13.980s CPU time. 
Apr 28 00:25:55.680527 containerd[1454]: time="2026-04-28T00:25:55.679071257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:25:55.806136 containerd[1454]: time="2026-04-28T00:25:55.765255276Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.8: active requests=0, bytes read=26021560" Apr 28 00:25:56.163417 containerd[1454]: time="2026-04-28T00:25:56.158843942Z" level=info msg="ImageCreate event name:\"sha256:d6c80027e9465615ba510d0c5f3a98ff50a8cd7eaf378b3aaa107f6c9a92216c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:25:59.485574 containerd[1454]: time="2026-04-28T00:25:59.467344860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4b93c08a1d78c2065518e8bbcad3132beafab937a9fd0771c82cdb63d2a050b8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:26:00.040775 containerd[1454]: time="2026-04-28T00:26:00.039878531Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.8\" with image id \"sha256:d6c80027e9465615ba510d0c5f3a98ff50a8cd7eaf378b3aaa107f6c9a92216c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4b93c08a1d78c2065518e8bbcad3132beafab937a9fd0771c82cdb63d2a050b8\", size \"27678578\" in 32.12910245s" Apr 28 00:26:00.040775 containerd[1454]: time="2026-04-28T00:26:00.040720069Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.8\" returns image reference \"sha256:d6c80027e9465615ba510d0c5f3a98ff50a8cd7eaf378b3aaa107f6c9a92216c\"" Apr 28 00:26:01.232847 containerd[1454]: time="2026-04-28T00:26:01.205637356Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.8\"" Apr 28 00:26:01.418455 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15. 
Apr 28 00:26:02.251113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:26:10.506367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:26:10.677824 (kubelet)[2302]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:26:21.093128 kubelet[2302]: E0428 00:26:21.078821 2302 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:26:21.271606 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:26:21.272070 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:26:21.280393 systemd[1]: kubelet.service: Consumed 11.281s CPU time. 
Apr 28 00:26:22.055295 containerd[1454]: time="2026-04-28T00:26:22.051249123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:26:22.059158 containerd[1454]: time="2026-04-28T00:26:22.058037229Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.8: active requests=0, bytes read=20160949" Apr 28 00:26:22.092707 containerd[1454]: time="2026-04-28T00:26:22.090370147Z" level=info msg="ImageCreate event name:\"sha256:94ca5455c32fc8639aa2138e77a382b04bb32cd3477d3dcfced2fd2dfe4427b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:26:22.550079 containerd[1454]: time="2026-04-28T00:26:22.548936805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f6c5eae3f9f702a0c00e5c52aa040b2c685acfc9fd8d2646f150a183de36e72f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:26:22.796130 containerd[1454]: time="2026-04-28T00:26:22.793840907Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.8\" with image id \"sha256:94ca5455c32fc8639aa2138e77a382b04bb32cd3477d3dcfced2fd2dfe4427b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f6c5eae3f9f702a0c00e5c52aa040b2c685acfc9fd8d2646f150a183de36e72f\", size \"21817985\" in 21.561011762s" Apr 28 00:26:22.842492 containerd[1454]: time="2026-04-28T00:26:22.809095846Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.8\" returns image reference \"sha256:94ca5455c32fc8639aa2138e77a382b04bb32cd3477d3dcfced2fd2dfe4427b7\"" Apr 28 00:26:23.353836 containerd[1454]: time="2026-04-28T00:26:23.350937088Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.8\"" Apr 28 00:26:31.749261 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16. Apr 28 00:26:36.672966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 28 00:26:44.341897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:26:44.345623 (kubelet)[2326]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:26:50.843986 kubelet[2326]: E0428 00:26:50.842348 2326 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:26:50.922420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:26:50.927851 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:26:50.929350 systemd[1]: kubelet.service: Consumed 8.970s CPU time. Apr 28 00:27:01.373430 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17. Apr 28 00:27:01.593004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:27:07.693418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:27:07.972646 (kubelet)[2345]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:27:14.022083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4247990263.mount: Deactivated successfully. 
Apr 28 00:27:17.395365 kubelet[2345]: E0428 00:27:17.385128 2345 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:27:17.439007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:27:17.440555 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:27:17.498635 systemd[1]: kubelet.service: Consumed 10.169s CPU time. Apr 28 00:27:22.839856 containerd[1454]: time="2026-04-28T00:27:22.836491402Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.8: active requests=0, bytes read=31828042" Apr 28 00:27:22.839856 containerd[1454]: time="2026-04-28T00:27:22.838879340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:27:22.966115 containerd[1454]: time="2026-04-28T00:27:22.956904599Z" level=info msg="ImageCreate event name:\"sha256:85ec3b545d037f93f83e44b07f146127cbabe79932928142521ca2d14f41d608\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:27:23.588475 containerd[1454]: time="2026-04-28T00:27:23.582353820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:00c5df7707d5fc1f8b2c95cf71ec8ea82fd27a01af1b720e1f252ece4f71b17c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:27:24.238037 containerd[1454]: time="2026-04-28T00:27:24.237515081Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.8\" with image id \"sha256:85ec3b545d037f93f83e44b07f146127cbabe79932928142521ca2d14f41d608\", repo tag \"registry.k8s.io/kube-proxy:v1.33.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:00c5df7707d5fc1f8b2c95cf71ec8ea82fd27a01af1b720e1f252ece4f71b17c\", 
size \"31827167\" in 1m0.883165758s" Apr 28 00:27:24.238037 containerd[1454]: time="2026-04-28T00:27:24.237777408Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.8\" returns image reference \"sha256:85ec3b545d037f93f83e44b07f146127cbabe79932928142521ca2d14f41d608\"" Apr 28 00:27:27.852402 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18. Apr 28 00:27:28.093617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:27:33.155692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:27:33.859929 (kubelet)[2365]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:27:48.253476 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:27:48.539901 systemd[1]: kubelet.service: Deactivated successfully. Apr 28 00:27:48.632796 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:27:48.655489 systemd[1]: kubelet.service: Consumed 13.589s CPU time, 112.9M memory peak, 0B memory swap peak. Apr 28 00:27:49.879696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:28:03.409037 systemd[1]: Reloading requested from client PID 2382 ('systemctl') (unit session-7.scope)... Apr 28 00:28:03.409444 systemd[1]: Reloading... 
Apr 28 00:28:28.524764 update_engine[1448]: I20260428 00:28:28.521831 1448 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 28 00:28:28.540388 update_engine[1448]: I20260428 00:28:28.531387 1448 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 28 00:28:28.837720 update_engine[1448]: I20260428 00:28:28.824365 1448 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 28 00:28:29.132047 update_engine[1448]: I20260428 00:28:29.067615 1448 omaha_request_params.cc:62] Current group set to lts Apr 28 00:28:29.278866 update_engine[1448]: I20260428 00:28:29.193440 1448 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 28 00:28:29.278866 update_engine[1448]: I20260428 00:28:29.269018 1448 update_attempter.cc:643] Scheduling an action processor start. Apr 28 00:28:29.390307 update_engine[1448]: I20260428 00:28:29.386214 1448 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 28 00:28:29.391074 locksmithd[1480]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 28 00:28:29.598924 update_engine[1448]: I20260428 00:28:29.461887 1448 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 28 00:28:29.598924 update_engine[1448]: I20260428 00:28:29.497783 1448 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 28 00:28:29.598924 update_engine[1448]: I20260428 00:28:29.497967 1448 omaha_request_action.cc:272] Request: Apr 28 00:28:29.598924 update_engine[1448]: Apr 28 00:28:29.598924 update_engine[1448]: Apr 28 00:28:29.598924 update_engine[1448]: Apr 28 00:28:29.598924 update_engine[1448]: Apr 28 00:28:29.598924 update_engine[1448]: Apr 28 00:28:29.598924 update_engine[1448]: Apr 28 00:28:29.598924 update_engine[1448]: Apr 28 00:28:29.598924 update_engine[1448]: Apr 28 00:28:29.598924 update_engine[1448]: I20260428 
00:28:29.497978 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 00:28:29.896497 update_engine[1448]: I20260428 00:28:29.894461 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 00:28:30.091438 update_engine[1448]: I20260428 00:28:30.073073 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 28 00:28:30.125597 update_engine[1448]: E20260428 00:28:30.119300 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 00:28:30.175986 update_engine[1448]: I20260428 00:28:30.172716 1448 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 28 00:28:31.084576 zram_generator::config[2422]: No configuration found. Apr 28 00:28:40.406402 update_engine[1448]: I20260428 00:28:40.401404 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 00:28:40.472922 update_engine[1448]: I20260428 00:28:40.407021 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 00:28:40.472922 update_engine[1448]: I20260428 00:28:40.407522 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 28 00:28:40.518175 update_engine[1448]: E20260428 00:28:40.489264 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 00:28:40.523144 update_engine[1448]: I20260428 00:28:40.522931 1448 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 28 00:28:50.423519 update_engine[1448]: I20260428 00:28:50.408970 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 00:28:50.446328 update_engine[1448]: I20260428 00:28:50.445940 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 00:28:50.453150 update_engine[1448]: I20260428 00:28:50.451592 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 28 00:28:50.539166 update_engine[1448]: E20260428 00:28:50.460502 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 00:28:50.546276 update_engine[1448]: I20260428 00:28:50.542237 1448 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 28 00:29:00.499107 update_engine[1448]: I20260428 00:29:00.490619 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 00:29:00.548887 update_engine[1448]: I20260428 00:29:00.523260 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 00:29:00.548887 update_engine[1448]: I20260428 00:29:00.531878 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 28 00:29:00.570695 update_engine[1448]: E20260428 00:29:00.569505 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 00:29:00.572081 update_engine[1448]: I20260428 00:29:00.571368 1448 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 28 00:29:00.572081 update_engine[1448]: I20260428 00:29:00.571465 1448 omaha_request_action.cc:617] Omaha request response: Apr 28 00:29:00.572526 update_engine[1448]: E20260428 00:29:00.572438 1448 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 28 00:29:00.572821 update_engine[1448]: I20260428 00:29:00.572768 1448 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 28 00:29:00.572821 update_engine[1448]: I20260428 00:29:00.572788 1448 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 28 00:29:00.572821 update_engine[1448]: I20260428 00:29:00.572795 1448 update_attempter.cc:306] Processing Done. Apr 28 00:29:00.572936 update_engine[1448]: E20260428 00:29:00.572883 1448 update_attempter.cc:619] Update failed. 
Apr 28 00:29:00.572936 update_engine[1448]: I20260428 00:29:00.572891 1448 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 28 00:29:00.572936 update_engine[1448]: I20260428 00:29:00.572898 1448 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 28 00:29:00.572936 update_engine[1448]: I20260428 00:29:00.572906 1448 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Apr 28 00:29:00.603319 update_engine[1448]: I20260428 00:29:00.586232 1448 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 28 00:29:00.672358 update_engine[1448]: I20260428 00:29:00.610301 1448 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 28 00:29:00.672358 update_engine[1448]: I20260428 00:29:00.670262 1448 omaha_request_action.cc:272] Request: Apr 28 00:29:00.672358 update_engine[1448]: Apr 28 00:29:00.672358 update_engine[1448]: Apr 28 00:29:00.672358 update_engine[1448]: Apr 28 00:29:00.672358 update_engine[1448]: Apr 28 00:29:00.672358 update_engine[1448]: Apr 28 00:29:00.672358 update_engine[1448]: Apr 28 00:29:00.739509 update_engine[1448]: I20260428 00:29:00.672485 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 00:29:00.745358 locksmithd[1480]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 28 00:29:00.752200 update_engine[1448]: I20260428 00:29:00.740159 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 00:29:00.752200 update_engine[1448]: I20260428 00:29:00.748907 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 28 00:29:00.876354 update_engine[1448]: E20260428 00:29:00.867293 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 00:29:00.938389 update_engine[1448]: I20260428 00:29:00.921314 1448 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 28 00:29:00.938389 update_engine[1448]: I20260428 00:29:00.938292 1448 omaha_request_action.cc:617] Omaha request response: Apr 28 00:29:00.957950 update_engine[1448]: I20260428 00:29:00.943447 1448 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 28 00:29:00.957950 update_engine[1448]: I20260428 00:29:00.954706 1448 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 28 00:29:00.957950 update_engine[1448]: I20260428 00:29:00.955508 1448 update_attempter.cc:306] Processing Done. Apr 28 00:29:00.957950 update_engine[1448]: I20260428 00:29:00.956983 1448 update_attempter.cc:310] Error event sent. Apr 28 00:29:00.958458 update_engine[1448]: I20260428 00:29:00.958021 1448 update_check_scheduler.cc:74] Next update check in 43m16s Apr 28 00:29:01.102649 locksmithd[1480]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 28 00:29:29.331251 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 00:29:48.560002 systemd[1]: Reloading finished in 105147 ms. Apr 28 00:29:57.521441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:29:58.022030 (kubelet)[2464]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 28 00:29:58.584355 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 28 00:29:58.598714 systemd[1]: kubelet.service: Deactivated successfully.
Apr 28 00:29:58.599648 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:29:58.599926 systemd[1]: kubelet.service: Consumed 10.784s CPU time, 51.4M memory peak, 0B memory swap peak.
Apr 28 00:29:59.509365 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:30:11.305528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:30:11.542362 (kubelet)[2481]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 28 00:30:16.736453 kubelet[2481]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 28 00:30:16.736453 kubelet[2481]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 28 00:30:16.736453 kubelet[2481]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 28 00:30:16.740454 kubelet[2481]: I0428 00:30:16.740103 2481 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 28 00:30:19.432309 kubelet[2481]: I0428 00:30:19.431746 2481 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 28 00:30:19.432309 kubelet[2481]: I0428 00:30:19.432097 2481 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 28 00:30:19.440861 kubelet[2481]: I0428 00:30:19.440821 2481 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 28 00:30:20.109626 kubelet[2481]: E0428 00:30:20.107740 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:30:20.166598 kubelet[2481]: I0428 00:30:20.166324 2481 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 28 00:30:21.303840 kubelet[2481]: E0428 00:30:21.303268 2481 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 28 00:30:21.303840 kubelet[2481]: I0428 00:30:21.303774 2481 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 28 00:30:21.440817 kubelet[2481]: I0428 00:30:21.428618 2481 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 28 00:30:21.493957 kubelet[2481]: I0428 00:30:21.490093 2481 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 28 00:30:21.643430 kubelet[2481]: I0428 00:30:21.580503 2481 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 28 00:30:21.648457 kubelet[2481]: I0428 00:30:21.643292 2481 topology_manager.go:138] "Creating topology manager with none policy"
Apr 28 00:30:21.648457 kubelet[2481]: I0428 00:30:21.645472 2481 container_manager_linux.go:303] "Creating device plugin manager"
Apr 28 00:30:21.664521 kubelet[2481]: I0428 00:30:21.659973 2481 state_mem.go:36] "Initialized new in-memory state store"
Apr 28 00:30:21.877041 kubelet[2481]: I0428 00:30:21.874202 2481 kubelet.go:480] "Attempting to sync node with API server"
Apr 28 00:30:21.879212 kubelet[2481]: I0428 00:30:21.878729 2481 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 28 00:30:21.879350 kubelet[2481]: I0428 00:30:21.879235 2481 kubelet.go:386] "Adding apiserver pod source"
Apr 28 00:30:21.879432 kubelet[2481]: I0428 00:30:21.879404 2481 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 28 00:30:21.891246 kubelet[2481]: E0428 00:30:21.890979 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:30:21.891246 kubelet[2481]: E0428 00:30:21.891078 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 00:30:21.944859 kubelet[2481]: I0428 00:30:21.944454 2481 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 28 00:30:21.946344 kubelet[2481]: I0428 00:30:21.946277 2481 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 28 00:30:21.947927 kubelet[2481]: W0428 00:30:21.947873 2481 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 28 00:30:21.953580 kubelet[2481]: I0428 00:30:21.953529 2481 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 28 00:30:21.953759 kubelet[2481]: I0428 00:30:21.953699 2481 server.go:1289] "Started kubelet"
Apr 28 00:30:21.954730 kubelet[2481]: I0428 00:30:21.953942 2481 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 28 00:30:21.954730 kubelet[2481]: I0428 00:30:21.954488 2481 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 28 00:30:21.954730 kubelet[2481]: I0428 00:30:21.954543 2481 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 28 00:30:21.955142 kubelet[2481]: I0428 00:30:21.955124 2481 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 28 00:30:21.956411 kubelet[2481]: I0428 00:30:21.955645 2481 server.go:317] "Adding debug handlers to kubelet server"
Apr 28 00:30:21.956477 kubelet[2481]: I0428 00:30:21.956413 2481 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 28 00:30:21.960488 kubelet[2481]: E0428 00:30:21.960472 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:21.960643 kubelet[2481]: I0428 00:30:21.960633 2481 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 28 00:30:21.962575 kubelet[2481]: E0428 00:30:21.959728 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de08bfaa655 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,LastTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:30:22.096951 kubelet[2481]: I0428 00:30:21.960952 2481 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 28 00:30:22.135992 kubelet[2481]: I0428 00:30:22.134248 2481 reconciler.go:26] "Reconciler: start to sync state"
Apr 28 00:30:22.148284 kubelet[2481]: E0428 00:30:22.143066 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:22.148284 kubelet[2481]: E0428 00:30:22.143575 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="200ms"
Apr 28 00:30:22.336678 kubelet[2481]: E0428 00:30:22.327264 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:22.358093 kubelet[2481]: I0428 00:30:22.343755 2481 factory.go:223] Registration of the systemd container factory successfully
Apr 28 00:30:22.413522 kubelet[2481]: I0428 00:30:22.400461 2481 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 28 00:30:22.413522 kubelet[2481]: E0428 00:30:22.409610 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 00:30:22.559738 kubelet[2481]: E0428 00:30:22.498181 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:22.580120 kubelet[2481]: I0428 00:30:22.579533 2481 factory.go:223] Registration of the containerd container factory successfully
Apr 28 00:30:22.692649 kubelet[2481]: E0428 00:30:22.560095 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:30:22.698985 kubelet[2481]: E0428 00:30:22.698866 2481 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 28 00:30:22.703350 kubelet[2481]: E0428 00:30:22.699162 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:22.794955 kubelet[2481]: E0428 00:30:22.788961 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="400ms"
Apr 28 00:30:22.880776 kubelet[2481]: E0428 00:30:22.873963 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:22.944155 kubelet[2481]: I0428 00:30:22.941024 2481 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 28 00:30:22.953189 kubelet[2481]: I0428 00:30:22.952719 2481 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 28 00:30:22.953189 kubelet[2481]: I0428 00:30:22.953152 2481 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 28 00:30:22.958349 kubelet[2481]: I0428 00:30:22.953621 2481 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 28 00:30:22.958349 kubelet[2481]: I0428 00:30:22.953797 2481 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 28 00:30:22.958349 kubelet[2481]: E0428 00:30:22.953946 2481 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 28 00:30:22.988651 kubelet[2481]: E0428 00:30:22.987328 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:23.093512 kubelet[2481]: E0428 00:30:23.092047 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:23.094943 kubelet[2481]: E0428 00:30:23.092976 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 00:30:23.094943 kubelet[2481]: E0428 00:30:23.094327 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 28 00:30:23.098058 kubelet[2481]: E0428 00:30:23.098006 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 00:30:23.217597 kubelet[2481]: E0428 00:30:23.196217 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:23.388955 kubelet[2481]: E0428 00:30:23.387929 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:23.388955 kubelet[2481]: E0428 00:30:23.388960 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 28 00:30:23.490326 kubelet[2481]: E0428 00:30:23.389899 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:30:23.586324 kubelet[2481]: E0428 00:30:23.475750 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="800ms"
Apr 28 00:30:23.681426 kubelet[2481]: E0428 00:30:23.587594 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:23.770398 kubelet[2481]: E0428 00:30:23.754264 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:23.864305 kubelet[2481]: E0428 00:30:23.847511 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 28 00:30:23.976456 kubelet[2481]: E0428 00:30:23.974291 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:24.106307 kubelet[2481]: E0428 00:30:24.097716 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:24.249939 kubelet[2481]: E0428 00:30:24.247047 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:24.452407 kubelet[2481]: E0428 00:30:24.445102 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 00:30:24.544134 kubelet[2481]: I0428 00:30:24.472629 2481 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 28 00:30:24.544134 kubelet[2481]: I0428 00:30:24.473423 2481 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 28 00:30:24.544134 kubelet[2481]: I0428 00:30:24.489022 2481 state_mem.go:36] "Initialized new in-memory state store"
Apr 28 00:30:24.544134 kubelet[2481]: E0428 00:30:24.512064 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:24.660846 kubelet[2481]: E0428 00:30:24.660605 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="1.6s"
Apr 28 00:30:24.684306 kubelet[2481]: E0428 00:30:24.660840 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:24.804057 kubelet[2481]: E0428 00:30:24.682313 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 28 00:30:24.887790 kubelet[2481]: I0428 00:30:24.860633 2481 policy_none.go:49] "None policy: Start"
Apr 28 00:30:25.065551 kubelet[2481]: I0428 00:30:25.013154 2481 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 28 00:30:25.065551 kubelet[2481]: I0428 00:30:25.045490 2481 state_mem.go:35] "Initializing new in-memory state store"
Apr 28 00:30:25.214307 kubelet[2481]: E0428 00:30:25.070875 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:25.354618 kubelet[2481]: E0428 00:30:25.347717 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 00:30:25.443533 kubelet[2481]: E0428 00:30:25.437854 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:25.584459 kubelet[2481]: E0428 00:30:25.577451 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:25.792904 kubelet[2481]: E0428 00:30:25.703460 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:25.867130 kubelet[2481]: E0428 00:30:25.847541 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:26.002436 kubelet[2481]: E0428 00:30:25.991516 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:26.204918 kubelet[2481]: E0428 00:30:26.190081 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:26.244184 kubelet[2481]: E0428 00:30:26.243627 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:30:26.244184 kubelet[2481]: E0428 00:30:26.243708 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 00:30:26.410968 kubelet[2481]: E0428 00:30:26.408733 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 28 00:30:26.410968 kubelet[2481]: E0428 00:30:26.387093 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:26.582964 kubelet[2481]: E0428 00:30:26.464540 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="3.2s"
Apr 28 00:30:26.582964 kubelet[2481]: E0428 00:30:26.573833 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 00:30:26.605909 kubelet[2481]: E0428 00:30:26.584058 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:26.709467 kubelet[2481]: E0428 00:30:26.700963 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:27.001450 kubelet[2481]: E0428 00:30:26.960328 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:27.172282 kubelet[2481]: E0428 00:30:27.084508 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:27.188369 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 28 00:30:27.500883 kubelet[2481]: E0428 00:30:27.498486 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:27.781083 kubelet[2481]: E0428 00:30:27.774773 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:27.982872 kubelet[2481]: E0428 00:30:27.981430 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:28.099404 kubelet[2481]: E0428 00:30:27.995925 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:30:28.103326 kubelet[2481]: E0428 00:30:28.103002 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:28.205113 kubelet[2481]: E0428 00:30:28.204705 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:28.206421 kubelet[2481]: E0428 00:30:28.206315 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 00:30:28.337683 kubelet[2481]: E0428 00:30:28.336629 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:28.467530 kubelet[2481]: E0428 00:30:28.462427 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:28.701632 kubelet[2481]: E0428 00:30:28.692589 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:28.919293 kubelet[2481]: E0428 00:30:28.891038 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:29.091023 kubelet[2481]: E0428 00:30:29.077138 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:29.295471 kubelet[2481]: E0428 00:30:29.274169 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:29.498237 kubelet[2481]: E0428 00:30:29.496143 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:29.521267 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 28 00:30:29.667621 kubelet[2481]: E0428 00:30:29.662283 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:29.718876 kubelet[2481]: E0428 00:30:29.691708 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 28 00:30:29.896253 kubelet[2481]: E0428 00:30:29.890289 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:30.068458 kubelet[2481]: E0428 00:30:30.049264 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:30.166730 kubelet[2481]: E0428 00:30:30.166346 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:30.396287 kubelet[2481]: E0428 00:30:30.291027 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:30.535914 kubelet[2481]: E0428 00:30:30.524939 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="6.4s"
Apr 28 00:30:30.746345 kubelet[2481]: E0428 00:30:30.745913 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:30:30.798036 kubelet[2481]: E0428 00:30:30.783108 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:31.331285 kubelet[2481]: E0428 00:30:31.281816 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:31.692302 kubelet[2481]: E0428 00:30:31.581452 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:31.849059 kubelet[2481]: E0428 00:30:31.792330 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:32.077224 kubelet[2481]: E0428 00:30:32.031227 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:32.351328 kubelet[2481]: E0428 00:30:32.308489 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:32.847205 kubelet[2481]: E0428 00:30:32.797026 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:32.982074 kubelet[2481]: E0428 00:30:32.961192 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:32.982074 kubelet[2481]: E0428 00:30:32.734335 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de08bfaa655 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,LastTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:30:33.209172 kubelet[2481]: E0428 00:30:33.150392 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:33.580126 kubelet[2481]: E0428 00:30:33.460148 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:34.089559 kubelet[2481]: E0428 00:30:33.389326 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 00:30:34.154334 kubelet[2481]: E0428 00:30:34.001516 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:34.694218 kubelet[2481]: E0428 00:30:34.684487 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:35.079037 kubelet[2481]: E0428 00:30:34.760528 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 28 00:30:35.350486 kubelet[2481]: E0428 00:30:34.781356 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 00:30:35.789158 kubelet[2481]: E0428 00:30:35.571463 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:36.159789 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 28 00:30:37.055277 kubelet[2481]: E0428 00:30:37.053475 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:37.689383 kubelet[2481]: E0428 00:30:37.688408 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:38.193476 kubelet[2481]: E0428 00:30:38.187278 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 00:30:38.265540 kubelet[2481]: E0428 00:30:38.202584 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:38.376437 kubelet[2481]: E0428 00:30:38.179388 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:30:38.799565 kubelet[2481]: E0428 00:30:38.786077 2481 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 28 00:30:39.092088 kubelet[2481]: I0428 00:30:39.076327 2481 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 28 00:30:39.092088 kubelet[2481]: E0428 00:30:38.871472 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:39.249445 kubelet[2481]: E0428 00:30:39.246421 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:39.280470 kubelet[2481]: I0428 00:30:39.206361 2481 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 28 00:30:39.762704 kubelet[2481]: E0428 00:30:39.554521 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:39.969003 kubelet[2481]: E0428 00:30:39.861142 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:30:40.036946 kubelet[2481]: I0428 00:30:39.989645 2481 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 28 00:30:40.036946 kubelet[2481]: E0428 00:30:40.030521 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:30:41.154158 kubelet[2481]: E0428 00:30:41.148467 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:30:42.371481 kubelet[2481]: E0428 00:30:42.368177 2481 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring."
err="no imagefs label for configured runtime" Apr 28 00:30:42.954377 kubelet[2481]: I0428 00:30:42.939781 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/661aacf61b27dbeb7414ee44841cd3ce-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"661aacf61b27dbeb7414ee44841cd3ce\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:30:43.586934 kubelet[2481]: E0428 00:30:43.303112 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:30:45.173151 kubelet[2481]: I0428 00:30:45.156260 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/661aacf61b27dbeb7414ee44841cd3ce-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"661aacf61b27dbeb7414ee44841cd3ce\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:30:50.315398 kubelet[2481]: E0428 00:30:50.306685 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de08bfaa655 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,LastTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:30:52.980596 kubelet[2481]: I0428 00:30:52.963537 2481 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/661aacf61b27dbeb7414ee44841cd3ce-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"661aacf61b27dbeb7414ee44841cd3ce\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:30:54.430888 kubelet[2481]: E0428 00:30:53.683317 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:30:54.802221 kubelet[2481]: I0428 00:30:54.780215 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/661aacf61b27dbeb7414ee44841cd3ce-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"661aacf61b27dbeb7414ee44841cd3ce\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:30:55.804216 kubelet[2481]: I0428 00:30:55.499546 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/661aacf61b27dbeb7414ee44841cd3ce-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"661aacf61b27dbeb7414ee44841cd3ce\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:30:56.714863 kubelet[2481]: I0428 00:30:55.980576 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:30:58.427366 kubelet[2481]: E0428 00:30:58.426901 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:30:58.534868 kubelet[2481]: E0428 00:30:58.426892 2481 reflector.go:200] "Failed to 
watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:30:58.534868 kubelet[2481]: E0428 00:30:58.240477 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:30:58.706427 kubelet[2481]: E0428 00:30:58.574387 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:30:58.744682 kubelet[2481]: E0428 00:30:58.744574 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:30:58.811925 kubelet[2481]: E0428 00:30:58.810903 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:30:58.868290 kubelet[2481]: E0428 00:30:58.845531 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:30:58.969044 kubelet[2481]: E0428 00:30:58.868510 2481 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:31:00.709281 kubelet[2481]: I0428 00:31:00.681363 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae88e85786a13701eebaf6993fb55ff4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ae88e85786a13701eebaf6993fb55ff4\") " pod="kube-system/kube-scheduler-localhost" Apr 28 00:31:00.794643 kubelet[2481]: I0428 00:31:00.793395 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:31:00.822133 kubelet[2481]: E0428 00:31:00.821449 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:31:01.962490 kubelet[2481]: I0428 00:31:01.942495 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/304f8fe43d8dae9fa1e91eba54f25a22-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"304f8fe43d8dae9fa1e91eba54f25a22\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:31:02.152228 kubelet[2481]: I0428 00:31:02.151752 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/304f8fe43d8dae9fa1e91eba54f25a22-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"304f8fe43d8dae9fa1e91eba54f25a22\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:31:02.268043 kubelet[2481]: I0428 00:31:02.256161 2481 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/304f8fe43d8dae9fa1e91eba54f25a22-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"304f8fe43d8dae9fa1e91eba54f25a22\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:31:04.605563 kubelet[2481]: E0428 00:31:04.275440 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de08bfaa655 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,LastTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:31:04.740983 kubelet[2481]: E0428 00:31:04.703282 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:31:05.065067 kubelet[2481]: I0428 00:31:05.063933 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:31:05.302956 kubelet[2481]: E0428 00:31:05.284581 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:31:05.649534 systemd[1]: Created slice kubepods-burstable-pod661aacf61b27dbeb7414ee44841cd3ce.slice - libcontainer container kubepods-burstable-pod661aacf61b27dbeb7414ee44841cd3ce.slice. 
Apr 28 00:31:05.788331 kubelet[2481]: E0428 00:31:05.788290 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:31:08.161185 kubelet[2481]: E0428 00:31:08.158608 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:31:08.635941 kubelet[2481]: E0428 00:31:08.633227 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:31:08.635941 kubelet[2481]: I0428 00:31:08.633624 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:31:09.158274 kubelet[2481]: E0428 00:31:09.157836 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:31:09.298409 containerd[1454]: time="2026-04-28T00:31:09.297652807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:661aacf61b27dbeb7414ee44841cd3ce,Namespace:kube-system,Attempt:0,}"
Apr 28 00:31:09.994234 systemd[1]: Created slice kubepods-burstable-podae88e85786a13701eebaf6993fb55ff4.slice - libcontainer container kubepods-burstable-podae88e85786a13701eebaf6993fb55ff4.slice.
Apr 28 00:31:10.569531 kubelet[2481]: E0428 00:31:10.557473 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:31:10.784070 kubelet[2481]: E0428 00:31:10.783988 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:31:10.793130 containerd[1454]: time="2026-04-28T00:31:10.793086429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ae88e85786a13701eebaf6993fb55ff4,Namespace:kube-system,Attempt:0,}"
Apr 28 00:31:10.800314 kubelet[2481]: I0428 00:31:10.794156 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:31:11.203482 kubelet[2481]: E0428 00:31:11.201892 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:31:11.550722 systemd[1]: Created slice kubepods-burstable-pod304f8fe43d8dae9fa1e91eba54f25a22.slice - libcontainer container kubepods-burstable-pod304f8fe43d8dae9fa1e91eba54f25a22.slice.
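The dominant failure in the kubelet entries above is `dial tcp 10.0.0.14:6443: connect: connection refused` — node registration, lease renewal, CSR submission, and event writes all fail at plain TCP connect. A minimal sketch of that layer-4 check (the address and port are taken from the log lines and are an assumption about this cluster, not verified; the function name is illustrative):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds.

    This reproduces only the layer the kubelet is failing at
    ("connect: connection refused"); it says nothing about TLS or
    API server health.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, etc.
        return False

# Endpoint as it appears in the log above (assumed for this cluster):
# tcp_reachable("10.0.0.14", 6443)
```

A refused connect like this usually means nothing is listening on the port — consistent with the apiserver static pod not yet running while its sandbox is still being created.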
Apr 28 00:31:12.763694 kubelet[2481]: E0428 00:31:12.759552 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:31:13.249886 kubelet[2481]: E0428 00:31:13.248171 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:31:13.730776 kubelet[2481]: E0428 00:31:13.713491 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:31:16.425368 kubelet[2481]: E0428 00:31:16.267356 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:31:16.465172 containerd[1454]: time="2026-04-28T00:31:16.465025057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:304f8fe43d8dae9fa1e91eba54f25a22,Namespace:kube-system,Attempt:0,}" Apr 28 00:31:16.875444 kubelet[2481]: E0428 00:31:16.617243 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de08bfaa655 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,LastTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:31:18.087055 kubelet[2481]: E0428 00:31:16.808428 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:31:18.832272 kubelet[2481]: E0428 00:31:18.686540 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:31:18.844355 kubelet[2481]: E0428 00:31:18.834590 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:31:18.880289 kubelet[2481]: I0428 00:31:18.875504 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:31:20.652457 kubelet[2481]: E0428 00:31:20.651945 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:31:25.079785 kubelet[2481]: E0428 00:31:25.001562 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 
00:31:26.836501 kubelet[2481]: E0428 00:31:26.835797 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:31:32.598605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount553547036.mount: Deactivated successfully. Apr 28 00:31:32.796352 kubelet[2481]: E0428 00:31:28.773566 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de08bfaa655 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,LastTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:31:35.175050 containerd[1454]: time="2026-04-28T00:31:35.001270451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:31:35.251966 kubelet[2481]: E0428 00:31:35.250065 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:31:35.266778 containerd[1454]: time="2026-04-28T00:31:35.266529682Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 28 00:31:35.266974 kubelet[2481]: 
I0428 00:31:35.266910 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:31:35.273851 containerd[1454]: time="2026-04-28T00:31:35.273018761Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 28 00:31:35.299452 containerd[1454]: time="2026-04-28T00:31:35.292141996Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 28 00:31:36.349571 kubelet[2481]: E0428 00:31:36.275767 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:31:36.446940 kubelet[2481]: E0428 00:31:36.444009 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:31:36.504315 containerd[1454]: time="2026-04-28T00:31:36.476693672Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:31:37.266324 kubelet[2481]: E0428 00:31:37.251065 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:31:37.510409 kubelet[2481]: E0428 00:31:37.369416 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 
00:31:42.607526 containerd[1454]: time="2026-04-28T00:31:42.604570850Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:31:43.066379 kubelet[2481]: E0428 00:31:43.066070 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:31:43.263282 kubelet[2481]: E0428 00:31:43.262890 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de08bfaa655 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,LastTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:31:46.195190 containerd[1454]: time="2026-04-28T00:31:46.174056631Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 29.702442846s" Apr 28 00:31:46.387057 kubelet[2481]: I0428 00:31:46.273566 2481 kubelet_node_status.go:75] "Attempting to register node" 
node="localhost" Apr 28 00:31:46.396702 kubelet[2481]: E0428 00:31:46.389775 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:31:46.401224 containerd[1454]: time="2026-04-28T00:31:46.388759770Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:31:46.401896 containerd[1454]: time="2026-04-28T00:31:46.401735695Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 37.038265402s" Apr 28 00:31:46.402679 containerd[1454]: time="2026-04-28T00:31:46.402526044Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 35.609289451s" Apr 28 00:31:46.457213 containerd[1454]: time="2026-04-28T00:31:46.456291669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:31:48.859860 kubelet[2481]: E0428 00:31:48.834529 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:31:52.598407 kubelet[2481]: E0428 
00:31:52.598326 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:31:53.390061 kubelet[2481]: E0428 00:31:53.376397 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 00:31:53.706303 kubelet[2481]: E0428 00:31:53.706081 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de08bfaa655 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,LastTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:31:54.822039 kubelet[2481]: I0428 00:31:54.811521 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:31:55.541781 kubelet[2481]: E0428 00:31:55.468469 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:31:57.288018 containerd[1454]: time="2026-04-28T00:31:57.276043495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 00:31:57.451581 containerd[1454]: time="2026-04-28T00:31:57.294256071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 00:31:57.451581 containerd[1454]: time="2026-04-28T00:31:57.294996285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 00:31:57.556965 containerd[1454]: time="2026-04-28T00:31:57.536352730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 00:31:59.337690 kubelet[2481]: E0428 00:31:59.336326 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:31:59.659970 containerd[1454]: time="2026-04-28T00:31:59.297602120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 00:31:59.789531 containerd[1454]: time="2026-04-28T00:31:59.758457352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 00:32:00.145967 containerd[1454]: time="2026-04-28T00:32:00.091198015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 00:32:00.324966 containerd[1454]: time="2026-04-28T00:32:00.272630247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 00:32:00.577101 kubelet[2481]: E0428 00:32:00.574909 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:32:00.749334 containerd[1454]: time="2026-04-28T00:32:00.747737358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 00:32:00.752571 containerd[1454]: time="2026-04-28T00:32:00.749514466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 00:32:00.752571 containerd[1454]: time="2026-04-28T00:32:00.749532835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 00:32:00.757886 containerd[1454]: time="2026-04-28T00:32:00.756937347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 00:32:03.297319 kubelet[2481]: E0428 00:32:03.291224 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 00:32:03.948034 kubelet[2481]: E0428 00:32:03.947213 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de08bfaa655 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,LastTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:32:03.948834 systemd[1]: run-containerd-runc-k8s.io-112d6a952e8c7cd4fa8ae5b31482ed1db437f6a8d282211e4b68737cda330154-runc.Lb5yBV.mount: Deactivated successfully.
Apr 28 00:32:04.097585 kubelet[2481]: I0428 00:32:04.065597 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:32:04.207034 kubelet[2481]: E0428 00:32:04.198087 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:32:04.669964 kubelet[2481]: E0428 00:32:04.654413 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:32:05.308428 systemd[1]: Started cri-containerd-112d6a952e8c7cd4fa8ae5b31482ed1db437f6a8d282211e4b68737cda330154.scope - libcontainer container 112d6a952e8c7cd4fa8ae5b31482ed1db437f6a8d282211e4b68737cda330154.
Apr 28 00:32:05.387626 systemd[1]: Started cri-containerd-381230b87d737fec5a46eb0c7cde82d7080ef053bab0e008dee2b81d06220e4a.scope - libcontainer container 381230b87d737fec5a46eb0c7cde82d7080ef053bab0e008dee2b81d06220e4a.
Apr 28 00:32:07.149812 systemd[1]: run-containerd-runc-k8s.io-570be704db4e004b33e4730f50c45dcc9fe1e5bbd3be9e84df976ea4bdeb7998-runc.kTFgsV.mount: Deactivated successfully.
Apr 28 00:32:07.629131 kubelet[2481]: E0428 00:32:07.606913 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:32:08.448071 systemd[1]: Started cri-containerd-570be704db4e004b33e4730f50c45dcc9fe1e5bbd3be9e84df976ea4bdeb7998.scope - libcontainer container 570be704db4e004b33e4730f50c45dcc9fe1e5bbd3be9e84df976ea4bdeb7998.
Apr 28 00:32:09.649399 kubelet[2481]: E0428 00:32:09.646119 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:32:10.033588 kubelet[2481]: E0428 00:32:10.033303 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 00:32:10.648281 containerd[1454]: time="2026-04-28T00:32:10.647924022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ae88e85786a13701eebaf6993fb55ff4,Namespace:kube-system,Attempt:0,} returns sandbox id \"381230b87d737fec5a46eb0c7cde82d7080ef053bab0e008dee2b81d06220e4a\""
Apr 28 00:32:10.649284 containerd[1454]: time="2026-04-28T00:32:10.648396572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:304f8fe43d8dae9fa1e91eba54f25a22,Namespace:kube-system,Attempt:0,} returns sandbox id \"112d6a952e8c7cd4fa8ae5b31482ed1db437f6a8d282211e4b68737cda330154\""
Apr 28 00:32:11.042504 kubelet[2481]: E0428 00:32:11.042213 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:32:11.081433 kubelet[2481]: E0428 00:32:11.042304 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:32:11.083442 containerd[1454]: time="2026-04-28T00:32:11.073835867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:661aacf61b27dbeb7414ee44841cd3ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"570be704db4e004b33e4730f50c45dcc9fe1e5bbd3be9e84df976ea4bdeb7998\""
Apr 28 00:32:11.094637 kubelet[2481]: E0428 00:32:11.093637 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:32:11.262446 containerd[1454]: time="2026-04-28T00:32:11.262058783Z" level=info msg="CreateContainer within sandbox \"381230b87d737fec5a46eb0c7cde82d7080ef053bab0e008dee2b81d06220e4a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 28 00:32:11.393817 containerd[1454]: time="2026-04-28T00:32:11.316341398Z" level=info msg="CreateContainer within sandbox \"112d6a952e8c7cd4fa8ae5b31482ed1db437f6a8d282211e4b68737cda330154\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 28 00:32:11.523303 containerd[1454]: time="2026-04-28T00:32:11.523087451Z" level=info msg="CreateContainer within sandbox \"570be704db4e004b33e4730f50c45dcc9fe1e5bbd3be9e84df976ea4bdeb7998\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 28 00:32:11.579245 containerd[1454]: time="2026-04-28T00:32:11.578793015Z" level=info msg="CreateContainer within sandbox \"381230b87d737fec5a46eb0c7cde82d7080ef053bab0e008dee2b81d06220e4a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"19c94077115e4d0f64aa835525fea157fc86bc3e1318993bb5d813ef36372bc2\""
Apr 28 00:32:11.795086 containerd[1454]: time="2026-04-28T00:32:11.791789626Z" level=info msg="StartContainer for \"19c94077115e4d0f64aa835525fea157fc86bc3e1318993bb5d813ef36372bc2\""
Apr 28 00:32:12.100044 containerd[1454]: time="2026-04-28T00:32:12.068950372Z" level=info msg="CreateContainer within sandbox \"112d6a952e8c7cd4fa8ae5b31482ed1db437f6a8d282211e4b68737cda330154\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6f3206f8508a4e6516e40336a619e2d482bc28fb9ed73ac17f7ce0a2ba44e952\""
Apr 28 00:32:11.896635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2869155734.mount: Deactivated successfully.
Apr 28 00:32:12.203132 kubelet[2481]: I0428 00:32:12.200607 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:32:12.268610 containerd[1454]: time="2026-04-28T00:32:12.201125286Z" level=info msg="CreateContainer within sandbox \"570be704db4e004b33e4730f50c45dcc9fe1e5bbd3be9e84df976ea4bdeb7998\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"442a2de5357c90f1e4e0f1c5a89cd3393967e1d87efb903539ccb0847e4af4b1\""
Apr 28 00:32:12.268610 containerd[1454]: time="2026-04-28T00:32:12.201443625Z" level=info msg="StartContainer for \"6f3206f8508a4e6516e40336a619e2d482bc28fb9ed73ac17f7ce0a2ba44e952\""
Apr 28 00:32:12.251590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3705267249.mount: Deactivated successfully.
Apr 28 00:32:12.406388 containerd[1454]: time="2026-04-28T00:32:12.377310819Z" level=info msg="StartContainer for \"442a2de5357c90f1e4e0f1c5a89cd3393967e1d87efb903539ccb0847e4af4b1\""
Apr 28 00:32:12.463624 kubelet[2481]: E0428 00:32:12.385126 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:32:14.140093 kubelet[2481]: E0428 00:32:14.139506 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de08bfaa655 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,LastTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:32:14.759081 systemd[1]: run-containerd-runc-k8s.io-19c94077115e4d0f64aa835525fea157fc86bc3e1318993bb5d813ef36372bc2-runc.231LE5.mount: Deactivated successfully.
Apr 28 00:32:14.948426 kubelet[2481]: E0428 00:32:14.948145 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:32:16.645729 systemd[1]: Started cri-containerd-19c94077115e4d0f64aa835525fea157fc86bc3e1318993bb5d813ef36372bc2.scope - libcontainer container 19c94077115e4d0f64aa835525fea157fc86bc3e1318993bb5d813ef36372bc2.
Apr 28 00:32:16.809569 kubelet[2481]: E0428 00:32:16.808878 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:32:17.382610 systemd[1]: Started cri-containerd-442a2de5357c90f1e4e0f1c5a89cd3393967e1d87efb903539ccb0847e4af4b1.scope - libcontainer container 442a2de5357c90f1e4e0f1c5a89cd3393967e1d87efb903539ccb0847e4af4b1.
Apr 28 00:32:17.454863 systemd[1]: Started cri-containerd-6f3206f8508a4e6516e40336a619e2d482bc28fb9ed73ac17f7ce0a2ba44e952.scope - libcontainer container 6f3206f8508a4e6516e40336a619e2d482bc28fb9ed73ac17f7ce0a2ba44e952.
Apr 28 00:32:18.057716 containerd[1454]: time="2026-04-28T00:32:18.057080637Z" level=info msg="StartContainer for \"6f3206f8508a4e6516e40336a619e2d482bc28fb9ed73ac17f7ce0a2ba44e952\" returns successfully"
Apr 28 00:32:18.197392 containerd[1454]: time="2026-04-28T00:32:18.164726654Z" level=info msg="StartContainer for \"19c94077115e4d0f64aa835525fea157fc86bc3e1318993bb5d813ef36372bc2\" returns successfully"
Apr 28 00:32:18.454243 containerd[1454]: time="2026-04-28T00:32:18.452980389Z" level=info msg="StartContainer for \"442a2de5357c90f1e4e0f1c5a89cd3393967e1d87efb903539ccb0847e4af4b1\" returns successfully"
Apr 28 00:32:18.805075 kubelet[2481]: E0428 00:32:18.804756 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:32:18.819856 kubelet[2481]: E0428 00:32:18.819792 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:32:18.833194 kubelet[2481]: E0428 00:32:18.820329 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:32:18.833429 kubelet[2481]: E0428 00:32:18.833418 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:32:18.877755 kubelet[2481]: E0428 00:32:18.877520 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:32:18.878475 kubelet[2481]: E0428 00:32:18.877980 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:32:19.656638 kubelet[2481]: E0428 00:32:19.653470 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:32:19.663125 kubelet[2481]: I0428 00:32:19.653456 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:32:19.695442 kubelet[2481]: E0428 00:32:19.692353 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:32:20.604569 kubelet[2481]: E0428 00:32:20.604133 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:32:20.615392 kubelet[2481]: E0428 00:32:20.604836 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:32:20.615392 kubelet[2481]: E0428 00:32:20.606299 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:32:20.615392 kubelet[2481]: E0428 00:32:20.606509 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:32:20.615392 kubelet[2481]: E0428 00:32:20.602625 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:32:20.615392 kubelet[2481]: E0428 00:32:20.606871 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:32:24.294685 kubelet[2481]: E0428 00:32:24.294062 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:32:24.367622 kubelet[2481]: E0428 00:32:24.352231 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:32:24.395855 kubelet[2481]: E0428 00:32:24.304585 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:32:24.456763 kubelet[2481]: E0428 00:32:24.454932 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:32:26.962337 kubelet[2481]: E0428 00:32:26.950507 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:32:27.023753 kubelet[2481]: E0428 00:32:27.022333 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:32:27.039868 kubelet[2481]: E0428 00:32:27.031160 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:32:27.465049 kubelet[2481]: E0428 00:32:27.461119 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:32:28.375385 kubelet[2481]: I0428 00:32:28.369432 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:32:29.846382 kubelet[2481]: E0428 00:32:29.846096 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:32:32.164497 kubelet[2481]: E0428 00:32:32.095344 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:32:32.346396 kubelet[2481]: E0428 00:32:32.260762 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 28 00:32:32.346396 kubelet[2481]: E0428 00:32:32.262173 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:32:34.363394 kubelet[2481]: E0428 00:32:34.361338 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de08bfaa655 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,LastTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:32:35.417396 kubelet[2481]: E0428 00:32:35.416146 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:32:35.565022 kubelet[2481]: E0428 00:32:35.563761 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:32:36.017185 kubelet[2481]: E0428 00:32:36.015816 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 00:32:39.442036 kubelet[2481]: E0428 00:32:39.430080 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 28 00:32:39.916611 kubelet[2481]: E0428 00:32:39.909478 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:32:45.491618 kubelet[2481]: E0428 00:32:45.459485 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:32:46.694781 kubelet[2481]: I0428 00:32:46.681954 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:32:49.505105 kubelet[2481]: E0428 00:32:49.499636 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 28 00:32:50.190707 kubelet[2481]: E0428 00:32:50.157624 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:32:56.447445 kubelet[2481]: E0428 00:32:55.792978 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de08bfaa655 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,LastTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:32:57.181222 kubelet[2481]: E0428 00:32:56.643548 2481 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{localhost.18aa5de08bfaa655 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,LastTimestamp:2026-04-28 00:30:21.953599061 +0000 UTC m=+9.697134894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:32:57.505851 kubelet[2481]: E0428 00:32:57.474882 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 28 00:32:58.531327 kubelet[2481]: E0428 00:32:58.530508 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:33:02.388021 kubelet[2481]: E0428 00:33:02.170507 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:33:04.968683 kubelet[2481]: E0428 00:33:04.846817 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 00:33:06.265490 kubelet[2481]: I0428 00:33:06.265209 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:33:07.527357 kubelet[2481]: E0428 00:33:07.154825 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 28 00:33:11.373640 kubelet[2481]: E0428 00:33:10.874880 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de0b85fed84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,LastTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:33:12.404634 kubelet[2481]: E0428 00:33:12.396605 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:33:13.141397 kubelet[2481]: E0428 00:33:13.066255 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:33:13.239232 kubelet[2481]: E0428 00:33:13.149096 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:33:19.729954 kubelet[2481]: E0428 00:33:19.728531 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:33:20.830449 kubelet[2481]: E0428 00:33:20.595563 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 28 00:33:24.454715 kubelet[2481]: E0428 00:33:24.450622 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 00:33:24.763584 kubelet[2481]: E0428 00:33:24.762347 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de0b85fed84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,LastTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:33:25.086780 kubelet[2481]: E0428 00:33:24.763913 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:33:25.157151 kubelet[2481]: E0428 00:33:25.156948 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 28 00:33:37.757355 kubelet[2481]: E0428 00:33:37.753187 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:34:01.806140 kubelet[2481]: E0428 00:33:58.986652 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 28 00:34:05.663435 kubelet[2481]: E0428 00:34:00.793315 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:34:16.693363 kubelet[2481]: E0428 00:34:02.033377 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de0b85fed84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,LastTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:34:19.401144 kubelet[2481]: E0428 00:34:19.376467 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:34:21.002714 kubelet[2481]: E0428 00:34:20.498359 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:34:23.502147 kubelet[2481]: E0428 00:34:21.260497 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:34:27.170053 kubelet[2481]: E0428 00:34:27.152615 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:34:28.162593 kubelet[2481]: E0428 00:34:28.162013 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 00:34:28.164123 kubelet[2481]: E0428 00:34:28.163489 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 28 00:34:28.164123 kubelet[2481]: E0428 00:34:28.163629 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:34:28.164331 kubelet[2481]: I0428 00:34:28.164300 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:34:28.660242 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories...
Apr 28 00:34:47.808568 systemd-tmpfiles[2792]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 28 00:34:48.161337 systemd-tmpfiles[2792]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 28 00:34:48.219511 kubelet[2481]: E0428 00:34:48.162750 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 00:34:48.219511 kubelet[2481]: E0428 00:34:45.264550 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 28 00:34:48.163505 systemd-tmpfiles[2792]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 28 00:34:48.172249 systemd-tmpfiles[2792]: ACLs are not supported, ignoring. Apr 28 00:34:48.172360 systemd-tmpfiles[2792]: ACLs are not supported, ignoring. Apr 28 00:34:49.034419 systemd-tmpfiles[2792]: Detected autofs mount point /boot during canonicalization of boot. Apr 28 00:34:49.034478 systemd-tmpfiles[2792]: Skipping /boot Apr 28 00:34:49.311809 kubelet[2481]: E0428 00:34:49.296603 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:34:49.312246 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Apr 28 00:34:49.365349 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Apr 28 00:34:49.417337 systemd[1]: systemd-tmpfiles-clean.service: Consumed 6.384s CPU time. 
Apr 28 00:34:49.456602 kubelet[2481]: E0428 00:34:49.454596 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:34:50.291148 kubelet[2481]: E0428 00:34:50.285263 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:34:50.338525 kubelet[2481]: E0428 00:34:50.338291 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:34:50.338525 kubelet[2481]: E0428 00:34:50.338486 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:34:50.338525 kubelet[2481]: E0428 00:34:50.339628 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:34:50.524891 kubelet[2481]: E0428 00:34:49.434141 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de0b85fed84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,LastTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:34:51.098371 kubelet[2481]: E0428 00:34:51.098091 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:34:51.227192 kubelet[2481]: E0428 00:34:51.109380 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:34:51.875801 kubelet[2481]: E0428 00:34:51.873979 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 28 00:34:56.738442 kubelet[2481]: E0428 00:34:56.729185 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:35:01.331342 kubelet[2481]: I0428 00:35:01.270640 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:35:12.982116 kubelet[2481]: E0428 00:35:10.149624 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 28 00:35:19.689807 kubelet[2481]: E0428 00:35:17.537957 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:35:25.385425 kubelet[2481]: E0428 00:35:25.262564 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:35:37.758433 kubelet[2481]: E0428 00:35:37.741284 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:35:46.571489 kubelet[2481]: E0428 00:35:46.570433 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:35:49.524580 kubelet[2481]: E0428 00:35:49.485587 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:36:02.104354 kubelet[2481]: E0428 00:36:02.104196 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 28 00:36:03.501309 kubelet[2481]: E0428 00:36:02.104333 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 00:36:07.290622 kubelet[2481]: E0428 00:36:02.104481 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:36:13.798355 kubelet[2481]: E0428 00:36:09.935289 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:36:40.454161 kubelet[2481]: E0428 00:36:20.960010 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de0b85fed84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,LastTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:36:42.276490 kubelet[2481]: E0428 00:36:42.264171 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:36:47.394920 kubelet[2481]: E0428 00:36:47.338646 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:36:52.093073 kubelet[2481]: E0428 00:36:47.883987 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:36:54.860425 kubelet[2481]: E0428 00:36:54.855092 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:36:59.103468 kubelet[2481]: E0428 00:36:59.097201 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:37:10.764600 kubelet[2481]: E0428 00:37:04.477649 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:37:10.764600 kubelet[2481]: E0428 00:37:01.390402 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 28 00:37:12.379548 kubelet[2481]: E0428 00:37:04.888253 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:37:15.138600 kubelet[2481]: E0428 00:37:07.062529 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 00:37:22.774476 kubelet[2481]: E0428 00:37:22.749150 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 00:37:24.307326 kubelet[2481]: E0428 00:37:12.794497 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 28 00:37:31.435185 kubelet[2481]: E0428 00:37:31.434912 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:37:31.496777 systemd[1]: cri-containerd-6f3206f8508a4e6516e40336a619e2d482bc28fb9ed73ac17f7ce0a2ba44e952.scope: Deactivated successfully.
Apr 28 00:37:31.500487 systemd[1]: cri-containerd-6f3206f8508a4e6516e40336a619e2d482bc28fb9ed73ac17f7ce0a2ba44e952.scope: Consumed 4min 3.640s CPU time.
Apr 28 00:37:37.908410 kubelet[2481]: E0428 00:37:37.864345 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:37:44.536520 kubelet[2481]: E0428 00:37:44.529288 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:37:44.536520 kubelet[2481]: E0428 00:37:44.529453 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:37:45.263774 kubelet[2481]: E0428 00:37:45.262592 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:37:45.375537 kubelet[2481]: E0428 00:37:45.361762 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused - error from a previous attempt: write tcp 10.0.0.14:57620->10.0.0.14:6443: write: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 00:37:45.431460 kubelet[2481]: E0428 00:37:45.014595 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:37:45.454714 kubelet[2481]: I0428 00:37:45.453610 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:37:45.458046 kubelet[2481]: E0428 00:37:45.456880 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:37:45.458046 kubelet[2481]: E0428 00:37:45.457008 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:37:45.458046 kubelet[2481]: E0428 00:37:45.457210 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:37:45.458046 kubelet[2481]: E0428 00:37:44.808391 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:37:45.458046 kubelet[2481]: E0428 00:37:45.457265 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": write tcp 10.0.0.14:57608->10.0.0.14:6443: write: connection reset by peer" event="&Event{ObjectMeta:{localhost.18aa5de0b85fed84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,LastTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:37:45.458284 kubelet[2481]: E0428 00:37:45.457565 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused - error from a previous attempt: dial tcp 10.0.0.14:6443: connect: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:37:46.479802 kubelet[2481]: E0428 00:37:46.394650 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 28 00:37:46.841187 kubelet[2481]: E0428 00:37:46.702854 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused - error from a previous attempt: dial tcp 10.0.0.14:6443: connect: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 00:37:47.099381 containerd[1454]: time="2026-04-28T00:37:47.093447434Z" level=error msg="failed to handle container TaskExit event container_id:\"6f3206f8508a4e6516e40336a619e2d482bc28fb9ed73ac17f7ce0a2ba44e952\" id:\"6f3206f8508a4e6516e40336a619e2d482bc28fb9ed73ac17f7ce0a2ba44e952\" pid:2728 exit_status:1 exited_at:{seconds:1777336656 nanos:791359540}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown"
Apr 28 00:37:47.178452 kubelet[2481]: E0428 00:37:47.175210 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:37:47.201137 kubelet[2481]: E0428 00:37:47.178220 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 00:37:47.229207 kubelet[2481]: E0428 00:37:47.227300 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:37:47.229207 kubelet[2481]: E0428 00:37:47.227876 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:37:48.266465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f3206f8508a4e6516e40336a619e2d482bc28fb9ed73ac17f7ce0a2ba44e952-rootfs.mount: Deactivated successfully.
Apr 28 00:37:48.299214 containerd[1454]: time="2026-04-28T00:37:48.283050242Z" level=error msg="ttrpc: received message on inactive stream" stream=25
Apr 28 00:37:48.935357 containerd[1454]: time="2026-04-28T00:37:48.935212802Z" level=info msg="TaskExit event container_id:\"6f3206f8508a4e6516e40336a619e2d482bc28fb9ed73ac17f7ce0a2ba44e952\" id:\"6f3206f8508a4e6516e40336a619e2d482bc28fb9ed73ac17f7ce0a2ba44e952\" pid:2728 exit_status:1 exited_at:{seconds:1777336656 nanos:791359540}"
Apr 28 00:37:51.096147 containerd[1454]: time="2026-04-28T00:37:51.000253098Z" level=info msg="shim disconnected" id=6f3206f8508a4e6516e40336a619e2d482bc28fb9ed73ac17f7ce0a2ba44e952 namespace=k8s.io
Apr 28 00:37:51.151846 containerd[1454]: time="2026-04-28T00:37:51.116574475Z" level=warning msg="cleaning up after shim disconnected" id=6f3206f8508a4e6516e40336a619e2d482bc28fb9ed73ac17f7ce0a2ba44e952 namespace=k8s.io
Apr 28 00:37:51.151846 containerd[1454]: time="2026-04-28T00:37:51.122506308Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 00:37:51.279734 kubelet[2481]: E0428 00:37:51.275458 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 00:37:51.661790 kubelet[2481]: E0428 00:37:51.659773 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 00:37:53.278145 kubelet[2481]: E0428 00:37:52.256489 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:37:54.736246 kubelet[2481]: E0428 00:37:54.730761 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:37:55.443648 kubelet[2481]: E0428 00:37:54.874851 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:37:56.246394 containerd[1454]: time="2026-04-28T00:37:56.239822321Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id 6f3206f8508a4e6516e40336a619e2d482bc28fb9ed73ac17f7ce0a2ba44e952 -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/6f3206f8508a4e6516e40336a619e2d482bc28fb9ed73ac17f7ce0a2ba44e952 delete" error="signal: killed" namespace=k8s.io
Apr 28 00:37:56.852823 containerd[1454]: time="2026-04-28T00:37:56.342702872Z" level=warning msg="failed to clean up after shim disconnected" error=": signal: killed" id=6f3206f8508a4e6516e40336a619e2d482bc28fb9ed73ac17f7ce0a2ba44e952 namespace=k8s.io
Apr 28 00:37:57.423238 kubelet[2481]: E0428 00:37:57.352400 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 00:37:57.423238 kubelet[2481]: E0428 00:37:57.352589 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0b85fed84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,LastTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:37:57.499379 kubelet[2481]: E0428 00:37:57.498921 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 00:37:57.499379 kubelet[2481]: E0428 00:37:57.499064 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:37:57.504226 kubelet[2481]: I0428 00:37:57.500505 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:37:57.788083 containerd[1454]: time="2026-04-28T00:37:57.503548694Z" level=info msg="Ensure that container 6f3206f8508a4e6516e40336a619e2d482bc28fb9ed73ac17f7ce0a2ba44e952 in task-service has been cleanup successfully"
Apr 28 00:37:59.428584 kubelet[2481]: E0428 00:37:59.423617 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:38:03.143786 kubelet[2481]: E0428 00:38:03.142767 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:38:03.143786 kubelet[2481]: E0428 00:38:03.143689 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:38:03.152515 kubelet[2481]: E0428 00:38:03.151465 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:38:03.155179 kubelet[2481]: I0428 00:38:03.155098 2481 scope.go:117] "RemoveContainer" containerID="6f3206f8508a4e6516e40336a619e2d482bc28fb9ed73ac17f7ce0a2ba44e952"
Apr 28 00:38:03.155763 kubelet[2481]: E0428 00:38:03.155424 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:38:03.562297 containerd[1454]: time="2026-04-28T00:38:03.562085558Z" level=info msg="CreateContainer within sandbox \"112d6a952e8c7cd4fa8ae5b31482ed1db437f6a8d282211e4b68737cda330154\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:1,}"
Apr 28 00:38:05.749305 kubelet[2481]: E0428 00:38:05.741977 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:38:07.295882 kubelet[2481]: E0428 00:38:07.295431 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 00:38:08.485077 containerd[1454]: time="2026-04-28T00:38:08.482639218Z" level=info msg="CreateContainer within sandbox \"112d6a952e8c7cd4fa8ae5b31482ed1db437f6a8d282211e4b68737cda330154\" for &ContainerMetadata{Name:kube-apiserver,Attempt:1,} returns container id \"46d18b76487cedfa7716fa72e69ebc86562f9b257fb3de22af511f76c358b4e6\""
Apr 28 00:38:09.823434 containerd[1454]: time="2026-04-28T00:38:09.815404868Z" level=info msg="StartContainer for \"46d18b76487cedfa7716fa72e69ebc86562f9b257fb3de22af511f76c358b4e6\""
Apr 28 00:38:11.388082 kubelet[2481]: E0428 00:38:09.172918 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0b85fed84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,LastTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:38:12.735893 kubelet[2481]: E0428 00:38:12.735249 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 00:38:14.394791 kubelet[2481]: E0428 00:38:14.386394 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:38:15.057768 kubelet[2481]: E0428 00:38:15.050188 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:38:18.654257 kubelet[2481]: I0428 00:38:17.876599 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:38:21.586381 kubelet[2481]: E0428 00:38:21.546843 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:38:22.364384 kubelet[2481]: E0428 00:38:22.364234 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:38:22.815374 kubelet[2481]: E0428 00:38:22.809182 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:38:22.850359 kubelet[2481]: E0428 00:38:22.812311 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 00:38:25.248479 kubelet[2481]: E0428 00:38:22.655580 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0b85fed84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,LastTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:38:34.659459 kubelet[2481]: E0428 00:38:34.651477 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 00:38:37.288342 kubelet[2481]: E0428 00:38:36.154639 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:38:37.501360 kubelet[2481]: E0428 00:38:37.492143 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:38:55.439635 kubelet[2481]: I0428 00:38:55.425857 2481 request.go:752] "Waited before sending request" logger="kubernetes.io/kube-apiserver-client-kubelet" delay="1.621571023s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests"
Apr 28 00:39:01.961941 kubelet[2481]: E0428 00:39:01.859742 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:39:07.200087 kubelet[2481]: E0428 00:39:07.197653 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 28 00:39:07.200087 kubelet[2481]: E0428 00:39:07.191932 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 00:39:08.027894 kubelet[2481]: E0428 00:39:08.025163 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:39:16.263654 systemd[1]: Started cri-containerd-46d18b76487cedfa7716fa72e69ebc86562f9b257fb3de22af511f76c358b4e6.scope - libcontainer container 46d18b76487cedfa7716fa72e69ebc86562f9b257fb3de22af511f76c358b4e6.
Apr 28 00:39:17.203494 kubelet[2481]: E0428 00:39:16.455915 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:39:20.856028 kubelet[2481]: E0428 00:39:16.134693 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0b85fed84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,LastTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:39:22.029329 kubelet[2481]: I0428 00:39:18.756215 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:39:22.763909 kubelet[2481]: I0428 00:39:22.760077 2481 scope.go:117] "RemoveContainer" containerID="6f3206f8508a4e6516e40336a619e2d482bc28fb9ed73ac17f7ce0a2ba44e952" Apr 28 00:39:25.261609 kubelet[2481]: E0428 00:39:24.404441 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:39:34.783639 kubelet[2481]: E0428 00:39:34.764370 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:39:37.271861 kubelet[2481]: E0428 00:39:36.925546 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:39:39.686537 kubelet[2481]: E0428 00:39:39.111528 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:39:39.686537 kubelet[2481]: E0428 00:39:39.690894 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:39:40.253802 kubelet[2481]: E0428 00:39:40.251213 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:39:45.771202 kubelet[2481]: E0428 00:39:45.765366 2481 reflector.go:200] 
"Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:39:47.286501 kubelet[2481]: E0428 00:39:46.306440 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:39:47.788992 kubelet[2481]: E0428 00:39:42.392638 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0b85fed84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,LastTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:39:53.044103 kubelet[2481]: E0428 00:39:53.044034 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:39:55.090909 kubelet[2481]: E0428 00:39:53.054953 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:39:58.261927 kubelet[2481]: E0428 00:39:57.359585 2481 kubelet.go:3305] "No need to create a mirror 
pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:40:09.982525 containerd[1454]: time="2026-04-28T00:40:09.942946458Z" level=error msg="Failed to pipe stdout of container \"46d18b76487cedfa7716fa72e69ebc86562f9b257fb3de22af511f76c358b4e6\"" error="read /proc/self/fd/37: file already closed" Apr 28 00:40:10.810085 containerd[1454]: time="2026-04-28T00:40:09.971226637Z" level=error msg="Failed to pipe stderr of container \"46d18b76487cedfa7716fa72e69ebc86562f9b257fb3de22af511f76c358b4e6\"" error="read /proc/self/fd/39: file already closed" Apr 28 00:40:11.677204 containerd[1454]: time="2026-04-28T00:40:11.659777153Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 28 00:40:12.871695 kubelet[2481]: E0428 00:40:10.257943 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:40:14.207799 kubelet[2481]: E0428 00:40:12.950022 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:40:17.344295 kubelet[2481]: E0428 00:40:12.633199 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:40:17.629611 kubelet[2481]: E0428 00:40:15.161955 2481 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="46d18b76487cedfa7716fa72e69ebc86562f9b257fb3de22af511f76c358b4e6" Apr 28 00:40:17.751431 kubelet[2481]: E0428 00:40:16.128925 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:40:19.795654 containerd[1454]: time="2026-04-28T00:40:19.735530206Z" level=error msg="get state for 46d18b76487cedfa7716fa72e69ebc86562f9b257fb3de22af511f76c358b4e6" error="context deadline exceeded: unknown" Apr 28 00:40:20.276024 containerd[1454]: time="2026-04-28T00:40:19.952680878Z" level=warning msg="unknown status" status=0 Apr 28 00:40:21.773614 systemd[1]: cri-containerd-46d18b76487cedfa7716fa72e69ebc86562f9b257fb3de22af511f76c358b4e6.scope: Deactivated successfully. Apr 28 00:40:22.129840 systemd[1]: cri-containerd-46d18b76487cedfa7716fa72e69ebc86562f9b257fb3de22af511f76c358b4e6.scope: Consumed 9.348s CPU time. Apr 28 00:40:23.654953 containerd[1454]: time="2026-04-28T00:40:23.559430790Z" level=error msg="ttrpc: received message on inactive stream" stream=17 Apr 28 00:40:24.162905 kubelet[2481]: E0428 00:40:23.787752 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:40:29.430649 kubelet[2481]: E0428 00:40:28.439214 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:40:30.484143 containerd[1454]: time="2026-04-28T00:40:29.568102403Z" level=info msg="RemoveContainer for \"6f3206f8508a4e6516e40336a619e2d482bc28fb9ed73ac17f7ce0a2ba44e952\"" Apr 28 00:40:31.569615 kubelet[2481]: E0428 00:40:20.081828 2481 
event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0b85fed84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,LastTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:40:34.192282 kubelet[2481]: E0428 00:40:29.090291 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:40:36.459609 containerd[1454]: time="2026-04-28T00:40:36.421649260Z" level=info msg="RemoveContainer for \"6f3206f8508a4e6516e40336a619e2d482bc28fb9ed73ac17f7ce0a2ba44e952\" returns successfully" Apr 28 00:40:37.076768 kubelet[2481]: E0428 00:40:37.040372 2481 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-apiserver,Image:registry.k8s.io/kube-apiserver:v1.33.8,Command:[kube-apiserver --advertise-address=10.0.0.14 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-servers=http://10.0.0.12:2379 
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{250 -3} {} 250m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/etc/kubernetes/pki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 6443 
},Host:10.0.0.14,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 6443 },Host:10.0.0.14,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:15,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 6443 },Host:10.0.0.14,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:180,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-apiserver-localhost_kube-system(304f8fe43d8dae9fa1e91eba54f25a22): RunContainerError: context deadline exceeded" logger="UnhandledError" Apr 28 00:40:38.568295 kubelet[2481]: E0428 00:40:33.189527 2481 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{localhost.18aa5de0b85fed84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,LastTimestamp:2026-04-28 00:30:22.698433924 +0000 UTC m=+10.441969758,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:40:45.005244 kubelet[2481]: E0428 00:40:38.270187 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with RunContainerError: \"context deadline exceeded\"" pod="kube-system/kube-apiserver-localhost" podUID="304f8fe43d8dae9fa1e91eba54f25a22" Apr 28 00:40:46.463454 kubelet[2481]: I0428 00:40:46.377254 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:40:48.264949 kubelet[2481]: E0428 00:40:46.356474 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:40:49.791599 kubelet[2481]: E0428 00:40:48.590612 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:40:56.401714 kubelet[2481]: E0428 00:40:56.401359 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:40:59.269022 kubelet[2481]: E0428 00:40:49.676813 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC 
m=+11.175777275,LastTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:41:00.299426 kubelet[2481]: E0428 00:41:00.298688 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:41:00.303390 kubelet[2481]: E0428 00:41:00.299618 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:41:00.373876 kubelet[2481]: E0428 00:41:00.364865 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:41:10.415001 containerd[1454]: time="2026-04-28T00:41:10.170494367Z" level=error msg="Failed to delete containerd task \"46d18b76487cedfa7716fa72e69ebc86562f9b257fb3de22af511f76c358b4e6\"" error="failed to delete task: context deadline exceeded: unknown" Apr 28 00:41:11.334039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46d18b76487cedfa7716fa72e69ebc86562f9b257fb3de22af511f76c358b4e6-rootfs.mount: Deactivated successfully. 
Apr 28 00:41:12.254181 containerd[1454]: time="2026-04-28T00:41:11.803532874Z" level=error msg="StartContainer for \"46d18b76487cedfa7716fa72e69ebc86562f9b257fb3de22af511f76c358b4e6\" failed" error="failed to start containerd task \"46d18b76487cedfa7716fa72e69ebc86562f9b257fb3de22af511f76c358b4e6\": context deadline exceeded: unknown" Apr 28 00:41:12.599478 containerd[1454]: time="2026-04-28T00:41:12.086020067Z" level=error msg="ttrpc: received message on inactive stream" stream=23 Apr 28 00:41:12.656798 kubelet[2481]: E0428 00:41:12.655041 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:41:12.664521 kubelet[2481]: E0428 00:41:12.659096 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:41:12.664521 kubelet[2481]: E0428 00:41:12.660113 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:41:12.664521 kubelet[2481]: E0428 00:41:12.661338 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:41:16.460554 kubelet[2481]: E0428 00:41:15.486447 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:41:17.805004 kubelet[2481]: E0428 00:41:15.957966 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:41:18.800646 kubelet[2481]: E0428 00:41:18.671256 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:41:26.808524 kubelet[2481]: E0428 00:41:24.957354 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:41:31.033802 kubelet[2481]: E0428 00:41:31.026801 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:41:31.895570 containerd[1454]: time="2026-04-28T00:41:31.779944608Z" level=error msg="get state for 46d18b76487cedfa7716fa72e69ebc86562f9b257fb3de22af511f76c358b4e6" error="context deadline exceeded: unknown" Apr 28 00:41:32.989290 containerd[1454]: time="2026-04-28T00:41:31.780697279Z" level=warning msg="unknown status" status=0 Apr 28 00:41:32.989290 containerd[1454]: time="2026-04-28T00:41:32.183530863Z" level=error msg="ttrpc: received message on inactive stream" stream=25 Apr 28 00:41:41.644939 kubelet[2481]: E0428 00:41:38.262318 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:41:54.410597 kubelet[2481]: E0428 00:41:52.510077 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:41:56.107636 kubelet[2481]: E0428 00:41:56.099646 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:41:57.266858 kubelet[2481]: E0428 
00:41:57.258094 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:41:59.784441 containerd[1454]: time="2026-04-28T00:41:59.660176589Z" level=error msg="get state for 46d18b76487cedfa7716fa72e69ebc86562f9b257fb3de22af511f76c358b4e6" error="context deadline exceeded: unknown" Apr 28 00:42:00.548948 kubelet[2481]: E0428 00:41:59.379600 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:42:02.066415 containerd[1454]: time="2026-04-28T00:42:01.778364331Z" level=warning msg="unknown status" status=0 Apr 28 00:42:02.066415 containerd[1454]: time="2026-04-28T00:42:02.019609286Z" level=error msg="ttrpc: received message on inactive stream" stream=29 Apr 28 00:42:02.742017 kubelet[2481]: E0428 00:42:02.038066 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:42:03.814735 kubelet[2481]: E0428 00:42:02.737872 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:42:11.848052 kubelet[2481]: E0428 00:42:05.679457 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:42:16.608683 kubelet[2481]: E0428 00:42:16.584944 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:42:20.208096 kubelet[2481]: I0428 00:42:20.204384 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 
00:42:22.012637 kubelet[2481]: E0428 00:42:15.468565 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:42:24.358620 kubelet[2481]: E0428 00:42:22.960073 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 00:42:26.154364 kubelet[2481]: E0428 00:42:24.199426 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 28 00:42:26.158580 kubelet[2481]: E0428 00:42:26.157536 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:42:32.149100 kubelet[2481]: E0428 00:42:32.144696 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:42:36.902001 kubelet[2481]: E0428 00:42:33.433200 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:42:39.564501 kubelet[2481]: E0428 00:42:38.322555 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:42:41.238750 kubelet[2481]: E0428 00:42:39.746634 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:42:42.586728 kubelet[2481]: E0428 00:42:39.147413 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:42:42.586728 kubelet[2481]: E0428 00:42:41.765364 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:42:42.586728 kubelet[2481]: E0428 00:42:42.157973 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 00:42:45.808446 containerd[1454]: time="2026-04-28T00:42:45.452544276Z" level=error msg="get state for 46d18b76487cedfa7716fa72e69ebc86562f9b257fb3de22af511f76c358b4e6" error="context deadline exceeded: unknown"
Apr 28 00:42:47.529540 containerd[1454]: time="2026-04-28T00:42:46.031115649Z" level=warning msg="unknown status" status=0
Apr 28 00:42:48.377411 containerd[1454]: time="2026-04-28T00:42:47.850862246Z" level=error msg="ttrpc: received message on inactive stream" stream=33
Apr 28 00:42:50.618174 kubelet[2481]: E0428 00:42:49.416000 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:42:54.463523 kubelet[2481]: E0428 00:42:54.460427 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:42:56.396887 kubelet[2481]: E0428 00:42:55.473057 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:42:59.359456 kubelet[2481]: E0428 00:42:54.560182 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 28 00:43:05.839788 kubelet[2481]: E0428 00:43:05.838758 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:43:05.842990 kubelet[2481]: E0428 00:43:05.842962 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:43:06.046429 kubelet[2481]: E0428 00:43:06.046309 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 00:43:06.048995 kubelet[2481]: E0428 00:43:06.048962 2481 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod304f8fe43d8dae9fa1e91eba54f25a22.slice/cri-containerd-46d18b76487cedfa7716fa72e69ebc86562f9b257fb3de22af511f76c358b4e6.scope: container not created: not found
Apr 28 00:43:06.049684 kubelet[2481]: I0428 00:43:06.049632 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:43:06.107171 kubelet[2481]: E0428 00:43:06.094602 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:43:09.605277 kubelet[2481]: E0428 00:43:09.604628 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:43:10.750273 kubelet[2481]: E0428 00:43:10.064237 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:43:11.477999 kubelet[2481]: E0428 00:43:11.302824 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:43:11.720043 kubelet[2481]: E0428 00:43:11.715487 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:43:12.164237 kubelet[2481]: I0428 00:43:11.847117 2481 scope.go:117] "RemoveContainer" containerID="46d18b76487cedfa7716fa72e69ebc86562f9b257fb3de22af511f76c358b4e6"
Apr 28 00:43:12.874591 kubelet[2481]: E0428 00:43:12.818601 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:43:17.061229 kubelet[2481]: E0428 00:43:17.060699 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:43:17.064574 kubelet[2481]: E0428 00:43:17.061598 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:43:17.064919 kubelet[2481]: E0428 00:43:17.061830 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 00:43:17.605599 containerd[1454]: time="2026-04-28T00:43:17.454215685Z" level=info msg="StopContainer for \"19c94077115e4d0f64aa835525fea157fc86bc3e1318993bb5d813ef36372bc2\" with timeout 30 (s)"
Apr 28 00:43:18.415642 kubelet[2481]: I0428 00:43:17.064566 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:43:19.427739 containerd[1454]: time="2026-04-28T00:43:19.401355444Z" level=info msg="Stop container \"19c94077115e4d0f64aa835525fea157fc86bc3e1318993bb5d813ef36372bc2\" with signal terminated"
Apr 28 00:43:20.432172 kubelet[2481]: E0428 00:43:20.431918 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:43:20.451634 kubelet[2481]: E0428 00:43:20.433237 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:43:20.451634 kubelet[2481]: E0428 00:43:20.433412 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:43:20.451634 kubelet[2481]: E0428 00:43:20.433646 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:43:20.974888 containerd[1454]: time="2026-04-28T00:43:20.433599628Z" level=info msg="CreateContainer within sandbox \"112d6a952e8c7cd4fa8ae5b31482ed1db437f6a8d282211e4b68737cda330154\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:2,}"
Apr 28 00:43:24.196180 containerd[1454]: time="2026-04-28T00:43:24.156613848Z" level=error msg="get state for 112d6a952e8c7cd4fa8ae5b31482ed1db437f6a8d282211e4b68737cda330154" error="context deadline exceeded: unknown"
Apr 28 00:43:25.494348 containerd[1454]: time="2026-04-28T00:43:24.322948593Z" level=warning msg="unknown status" status=0
Apr 28 00:43:26.266156 containerd[1454]: time="2026-04-28T00:43:26.252636444Z" level=error msg="ttrpc: received message on inactive stream" stream=19
Apr 28 00:43:46.410818 kubelet[2481]: E0428 00:43:46.335385 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:43:48.887075 containerd[1454]: time="2026-04-28T00:43:48.873310355Z" level=info msg="StopContainer for \"442a2de5357c90f1e4e0f1c5a89cd3393967e1d87efb903539ccb0847e4af4b1\" with timeout 30 (s)"
Apr 28 00:43:54.839519 containerd[1454]: time="2026-04-28T00:43:54.189308574Z" level=error msg="get state for 442a2de5357c90f1e4e0f1c5a89cd3393967e1d87efb903539ccb0847e4af4b1" error="context deadline exceeded: unknown"
Apr 28 00:43:54.893019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount501188902.mount: Deactivated successfully.
Apr 28 00:43:55.452129 containerd[1454]: time="2026-04-28T00:43:55.204355922Z" level=warning msg="unknown status" status=0
Apr 28 00:43:57.578559 containerd[1454]: time="2026-04-28T00:43:56.880415536Z" level=info msg="Stop container \"442a2de5357c90f1e4e0f1c5a89cd3393967e1d87efb903539ccb0847e4af4b1\" with signal terminated"
Apr 28 00:44:04.578202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2824548677.mount: Deactivated successfully.
Apr 28 00:44:07.872695 containerd[1454]: time="2026-04-28T00:44:07.299814171Z" level=error msg="ttrpc: received message on inactive stream" stream=15
Apr 28 00:44:09.611229 kubelet[2481]: E0428 00:44:09.592730 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:44:11.784807 containerd[1454]: time="2026-04-28T00:44:11.773199331Z" level=info msg="CreateContainer within sandbox \"112d6a952e8c7cd4fa8ae5b31482ed1db437f6a8d282211e4b68737cda330154\" for &ContainerMetadata{Name:kube-apiserver,Attempt:2,} returns container id \"6065e3297b0ab80a1d1176c387defdd6b1b9967eec7843945c649cd32855fa88\""
Apr 28 00:44:16.890922 kubelet[2481]: E0428 00:44:15.292917 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:44:23.702059 kubelet[2481]: E0428 00:44:10.951218 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:44:24.653568 kubelet[2481]: E0428 00:44:23.716431 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 28 00:44:25.866083 kubelet[2481]: E0428 00:44:25.730519 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:44:26.596489 systemd[1]: cri-containerd-19c94077115e4d0f64aa835525fea157fc86bc3e1318993bb5d813ef36372bc2.scope: Deactivated successfully.
Apr 28 00:44:26.598288 systemd[1]: cri-containerd-19c94077115e4d0f64aa835525fea157fc86bc3e1318993bb5d813ef36372bc2.scope: Consumed 4min 3.175s CPU time.
Apr 28 00:44:27.302167 containerd[1454]: time="2026-04-28T00:44:27.287800120Z" level=info msg="StartContainer for \"6065e3297b0ab80a1d1176c387defdd6b1b9967eec7843945c649cd32855fa88\""
Apr 28 00:44:29.659235 kubelet[2481]: E0428 00:44:29.651232 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 00:44:33.934556 kubelet[2481]: I0428 00:44:33.198461 2481 scope.go:117] "RemoveContainer" containerID="46d18b76487cedfa7716fa72e69ebc86562f9b257fb3de22af511f76c358b4e6"
Apr 28 00:44:35.341717 kubelet[2481]: E0428 00:44:35.165640 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:44:35.919142 kubelet[2481]: E0428 00:44:35.908434 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 00:44:36.456370 kubelet[2481]: E0428 00:44:36.456055 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:44:36.697600 systemd[1]: cri-containerd-442a2de5357c90f1e4e0f1c5a89cd3393967e1d87efb903539ccb0847e4af4b1.scope: Deactivated successfully.
Apr 28 00:44:36.936438 systemd[1]: cri-containerd-442a2de5357c90f1e4e0f1c5a89cd3393967e1d87efb903539ccb0847e4af4b1.scope: Consumed 2min 46.324s CPU time.
Apr 28 00:44:37.224953 kubelet[2481]: I0428 00:44:36.456941 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:44:37.836273 kubelet[2481]: E0428 00:44:37.375215 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 00:44:38.263374 kubelet[2481]: E0428 00:44:37.835353 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:44:39.762355 kubelet[2481]: E0428 00:44:37.833137 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:44:40.811917 kubelet[2481]: E0428 00:44:39.507980 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:44:41.750263 kubelet[2481]: E0428 00:44:41.744445 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:44:45.228745 kubelet[2481]: E0428 00:44:45.228647 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:44:45.242428 containerd[1454]: time="2026-04-28T00:44:45.241992978Z" level=info msg="RemoveContainer for \"46d18b76487cedfa7716fa72e69ebc86562f9b257fb3de22af511f76c358b4e6\""
Apr 28 00:44:46.066218 containerd[1454]: time="2026-04-28T00:44:46.060382149Z" level=info msg="RemoveContainer for \"46d18b76487cedfa7716fa72e69ebc86562f9b257fb3de22af511f76c358b4e6\" returns successfully"
Apr 28 00:44:49.806109 kubelet[2481]: E0428 00:44:49.801908 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:44:49.850702 kubelet[2481]: I0428 00:44:49.849589 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:44:49.909079 kubelet[2481]: E0428 00:44:49.908054 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:44:50.238493 containerd[1454]: time="2026-04-28T00:44:50.237924073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 00:44:50.238493 containerd[1454]: time="2026-04-28T00:44:50.238435356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 00:44:50.265012 containerd[1454]: time="2026-04-28T00:44:50.238451623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 00:44:50.265012 containerd[1454]: time="2026-04-28T00:44:50.246990537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 00:44:51.562652 kubelet[2481]: E0428 00:44:51.276332 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:44:52.248950 kubelet[2481]: E0428 00:44:52.247228 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:44:52.254747 kubelet[2481]: E0428 00:44:52.254607 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:44:52.647152 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19c94077115e4d0f64aa835525fea157fc86bc3e1318993bb5d813ef36372bc2-rootfs.mount: Deactivated successfully.
Apr 28 00:44:52.960812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-442a2de5357c90f1e4e0f1c5a89cd3393967e1d87efb903539ccb0847e4af4b1-rootfs.mount: Deactivated successfully.
Apr 28 00:44:53.348969 containerd[1454]: time="2026-04-28T00:44:53.348124191Z" level=info msg="shim disconnected" id=442a2de5357c90f1e4e0f1c5a89cd3393967e1d87efb903539ccb0847e4af4b1 namespace=k8s.io
Apr 28 00:44:53.351043 containerd[1454]: time="2026-04-28T00:44:53.349563858Z" level=warning msg="cleaning up after shim disconnected" id=442a2de5357c90f1e4e0f1c5a89cd3393967e1d87efb903539ccb0847e4af4b1 namespace=k8s.io
Apr 28 00:44:53.351043 containerd[1454]: time="2026-04-28T00:44:53.349584638Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 00:44:53.351043 containerd[1454]: time="2026-04-28T00:44:53.349878514Z" level=info msg="shim disconnected" id=19c94077115e4d0f64aa835525fea157fc86bc3e1318993bb5d813ef36372bc2 namespace=k8s.io
Apr 28 00:44:53.351043 containerd[1454]: time="2026-04-28T00:44:53.350101581Z" level=warning msg="cleaning up after shim disconnected" id=19c94077115e4d0f64aa835525fea157fc86bc3e1318993bb5d813ef36372bc2 namespace=k8s.io
Apr 28 00:44:53.351043 containerd[1454]: time="2026-04-28T00:44:53.350112357Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 00:44:53.394313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6065e3297b0ab80a1d1176c387defdd6b1b9967eec7843945c649cd32855fa88-rootfs.mount: Deactivated successfully.
Apr 28 00:44:53.564938 containerd[1454]: time="2026-04-28T00:44:53.564382866Z" level=info msg="shim disconnected" id=6065e3297b0ab80a1d1176c387defdd6b1b9967eec7843945c649cd32855fa88 namespace=k8s.io
Apr 28 00:44:53.564938 containerd[1454]: time="2026-04-28T00:44:53.564830401Z" level=warning msg="cleaning up after shim disconnected" id=6065e3297b0ab80a1d1176c387defdd6b1b9967eec7843945c649cd32855fa88 namespace=k8s.io
Apr 28 00:44:53.564938 containerd[1454]: time="2026-04-28T00:44:53.564839634Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 00:44:53.764342 containerd[1454]: time="2026-04-28T00:44:53.763231162Z" level=warning msg="cleanup warnings time=\"2026-04-28T00:44:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 28 00:44:53.779152 containerd[1454]: time="2026-04-28T00:44:53.778735885Z" level=info msg="StopContainer for \"442a2de5357c90f1e4e0f1c5a89cd3393967e1d87efb903539ccb0847e4af4b1\" returns successfully"
Apr 28 00:44:53.836328 kubelet[2481]: E0428 00:44:53.819035 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:44:53.846074 containerd[1454]: time="2026-04-28T00:44:53.845967398Z" level=info msg="StopContainer for \"19c94077115e4d0f64aa835525fea157fc86bc3e1318993bb5d813ef36372bc2\" returns successfully"
Apr 28 00:44:54.112039 kubelet[2481]: E0428 00:44:54.076003 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:44:54.248464 containerd[1454]: time="2026-04-28T00:44:54.248245979Z" level=info msg="CreateContainer within sandbox \"570be704db4e004b33e4730f50c45dcc9fe1e5bbd3be9e84df976ea4bdeb7998\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 28 00:44:54.249336 containerd[1454]: time="2026-04-28T00:44:54.249037368Z" level=info msg="CreateContainer within sandbox \"381230b87d737fec5a46eb0c7cde82d7080ef053bab0e008dee2b81d06220e4a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 28 00:44:54.475442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount644823877.mount: Deactivated successfully.
Apr 28 00:44:54.833045 containerd[1454]: time="2026-04-28T00:44:54.818814432Z" level=info msg="CreateContainer within sandbox \"570be704db4e004b33e4730f50c45dcc9fe1e5bbd3be9e84df976ea4bdeb7998\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0885d2330882632655e364777c51fdd3de5b5c68e731e8a43ba72830be001384\""
Apr 28 00:44:55.043267 containerd[1454]: time="2026-04-28T00:44:55.042744337Z" level=info msg="StartContainer for \"0885d2330882632655e364777c51fdd3de5b5c68e731e8a43ba72830be001384\""
Apr 28 00:44:55.226350 containerd[1454]: time="2026-04-28T00:44:55.226097348Z" level=info msg="CreateContainer within sandbox \"381230b87d737fec5a46eb0c7cde82d7080ef053bab0e008dee2b81d06220e4a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"7c03c0e72bfe53da15d27665803af53f42ca496c46b116412265f07416c2e9d9\""
Apr 28 00:44:55.247736 containerd[1454]: time="2026-04-28T00:44:55.244869665Z" level=info msg="StartContainer for \"7c03c0e72bfe53da15d27665803af53f42ca496c46b116412265f07416c2e9d9\""
Apr 28 00:44:56.474020 containerd[1454]: time="2026-04-28T00:44:56.472823455Z" level=warning msg="cleanup warnings time=\"2026-04-28T00:44:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\ntime=\"2026-04-28T00:44:56Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6065e3297b0ab80a1d1176c387defdd6b1b9967eec7843945c649cd32855fa88/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 28 00:44:56.496407 containerd[1454]: time="2026-04-28T00:44:56.495002021Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed" namespace=k8s.io
Apr 28 00:44:56.552248 containerd[1454]: time="2026-04-28T00:44:56.516258150Z" level=error msg="Failed to pipe stderr of container \"6065e3297b0ab80a1d1176c387defdd6b1b9967eec7843945c649cd32855fa88\"" error="reading from a closed fifo"
Apr 28 00:44:56.604232 containerd[1454]: time="2026-04-28T00:44:56.572485753Z" level=error msg="Failed to pipe stdout of container \"6065e3297b0ab80a1d1176c387defdd6b1b9967eec7843945c649cd32855fa88\"" error="reading from a closed fifo"
Apr 28 00:44:56.744820 containerd[1454]: time="2026-04-28T00:44:56.739616431Z" level=error msg="StartContainer for \"6065e3297b0ab80a1d1176c387defdd6b1b9967eec7843945c649cd32855fa88\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to create new parent process: namespace path: lstat /proc/0/ns/ipc: no such file or directory: unknown"
Apr 28 00:44:56.745398 kubelet[2481]: E0428 00:44:56.744993 2481 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to create new parent process: namespace path: lstat /proc/0/ns/ipc: no such file or directory: unknown" containerID="6065e3297b0ab80a1d1176c387defdd6b1b9967eec7843945c649cd32855fa88"
Apr 28 00:44:56.752987 kubelet[2481]: E0428 00:44:56.752058 2481 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-apiserver,Image:registry.k8s.io/kube-apiserver:v1.33.8,Command:[kube-apiserver --advertise-address=10.0.0.14 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-servers=http://10.0.0.12:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{250 -3} {} 250m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/etc/kubernetes/pki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 6443 },Host:10.0.0.14,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 6443 },Host:10.0.0.14,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:15,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 6443 },Host:10.0.0.14,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:180,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-apiserver-localhost_kube-system(304f8fe43d8dae9fa1e91eba54f25a22): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to create new parent process: namespace path: lstat /proc/0/ns/ipc: no such file or directory: unknown" logger="UnhandledError"
Apr 28 00:44:56.769128 kubelet[2481]: E0428 00:44:56.764396 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to create new parent process: namespace path: lstat /proc/0/ns/ipc: no such file or directory: unknown\"" pod="kube-system/kube-apiserver-localhost" podUID="304f8fe43d8dae9fa1e91eba54f25a22"
Apr 28 00:44:57.378941 kubelet[2481]: I0428 00:44:57.376329 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:44:57.577243 kubelet[2481]: E0428 00:44:57.568099 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:44:59.697183 systemd[1]: run-containerd-runc-k8s.io-0885d2330882632655e364777c51fdd3de5b5c68e731e8a43ba72830be001384-runc.bZtyfn.mount: Deactivated successfully.
Apr 28 00:44:59.701138 kubelet[2481]: E0428 00:44:59.700960 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:44:59.720425 kubelet[2481]: E0428 00:44:59.720110 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:44:59.724969 kubelet[2481]: I0428 00:44:59.724142 2481 scope.go:117] "RemoveContainer" containerID="6065e3297b0ab80a1d1176c387defdd6b1b9967eec7843945c649cd32855fa88"
Apr 28 00:44:59.729856 kubelet[2481]: E0428 00:44:59.728122 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:44:59.729856 kubelet[2481]: E0428 00:44:59.729642 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-localhost_kube-system(304f8fe43d8dae9fa1e91eba54f25a22)\"" pod="kube-system/kube-apiserver-localhost" podUID="304f8fe43d8dae9fa1e91eba54f25a22"
Apr 28 00:44:59.741116 systemd[1]: Started cri-containerd-7c03c0e72bfe53da15d27665803af53f42ca496c46b116412265f07416c2e9d9.scope - libcontainer container 7c03c0e72bfe53da15d27665803af53f42ca496c46b116412265f07416c2e9d9.
Apr 28 00:44:59.744900 systemd[1]: Started cri-containerd-0885d2330882632655e364777c51fdd3de5b5c68e731e8a43ba72830be001384.scope - libcontainer container 0885d2330882632655e364777c51fdd3de5b5c68e731e8a43ba72830be001384.
Apr 28 00:44:59.886975 kubelet[2481]: E0428 00:44:59.817195 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:45:00.072673 containerd[1454]: time="2026-04-28T00:45:00.072502895Z" level=info msg="StartContainer for \"7c03c0e72bfe53da15d27665803af53f42ca496c46b116412265f07416c2e9d9\" returns successfully"
Apr 28 00:45:00.177426 containerd[1454]: time="2026-04-28T00:45:00.174105713Z" level=info msg="StartContainer for \"0885d2330882632655e364777c51fdd3de5b5c68e731e8a43ba72830be001384\" returns successfully"
Apr 28 00:45:02.840080 kubelet[2481]: E0428 00:45:02.698084 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:45:03.370147 kubelet[2481]: E0428
00:45:03.281647 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:45:03.638270 kubelet[2481]: E0428 00:45:03.594780 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:45:13.163461 kubelet[2481]: E0428 00:45:13.162546 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:45:14.736172 kubelet[2481]: E0428 00:45:13.593100 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:45:19.748989 kubelet[2481]: E0428 00:45:19.735422 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:45:21.080543 kubelet[2481]: E0428 00:45:21.065893 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:45:22.464368 kubelet[2481]: E0428 00:45:22.460361 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:45:41.457019 kubelet[2481]: E0428 00:45:37.395545 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:45:45.688240 kubelet[2481]: E0428 00:45:44.774014 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:45:47.664367 kubelet[2481]: I0428 00:45:47.662901 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:46:02.515476 kubelet[2481]: E0428 00:46:02.399278 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:46:24.086080 kubelet[2481]: I0428 00:46:24.074647 2481 scope.go:117] "RemoveContainer" containerID="6065e3297b0ab80a1d1176c387defdd6b1b9967eec7843945c649cd32855fa88" Apr 28 00:46:35.019346 kubelet[2481]: E0428 00:46:30.306238 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:46:40.268196 kubelet[2481]: E0428 00:46:01.744021 2481 event.go:368] "Unable to write event (may retry after 
sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:46:42.250979 kubelet[2481]: E0428 00:46:42.244767 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:46:51.193410 kubelet[2481]: E0428 00:46:51.192164 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:46:51.840288 kubelet[2481]: E0428 00:46:51.840169 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:46:51.840507 kubelet[2481]: E0428 00:46:50.250302 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:46:52.352972 kubelet[2481]: E0428 00:46:52.341160 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:46:52.796625 kubelet[2481]: E0428 
00:46:51.810618 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:46:52.798332 kubelet[2481]: E0428 00:46:51.839589 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: i/o timeout" interval="7s" Apr 28 00:46:53.164844 kubelet[2481]: E0428 00:46:52.631271 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:46:53.993324 kubelet[2481]: E0428 00:46:53.990697 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:46:54.373485 kubelet[2481]: E0428 00:46:48.164384 2481 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:46:55.107624 kubelet[2481]: E0428 00:46:54.457113 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:46:56.283453 kubelet[2481]: E0428 00:46:55.107923 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:46:57.221276 kubelet[2481]: E0428 00:46:57.197941 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:46:57.221276 kubelet[2481]: E0428 00:46:57.198561 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:46:57.769171 kubelet[2481]: E0428 00:46:55.456196 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:46:59.659124 kubelet[2481]: E0428 00:46:59.653330 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:47:04.156216 kubelet[2481]: E0428 00:47:04.152844 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:47:04.235104 kubelet[2481]: E0428 00:47:04.234854 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:47:04.796537 kubelet[2481]: E0428 00:47:03.700160 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:47:06.750595 kubelet[2481]: E0428 00:47:06.748455 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:47:09.381194 kubelet[2481]: E0428 00:47:09.376913 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:47:10.287301 kubelet[2481]: E0428 00:47:10.286605 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:47:10.358645 kubelet[2481]: E0428 00:47:10.315937 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:47:11.556818 kubelet[2481]: E0428 00:47:11.309147 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:47:11.775271 kubelet[2481]: E0428 00:47:11.556847 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection 
refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:47:12.643057 containerd[1454]: time="2026-04-28T00:47:12.642080101Z" level=info msg="CreateContainer within sandbox \"112d6a952e8c7cd4fa8ae5b31482ed1db437f6a8d282211e4b68737cda330154\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:3,}" Apr 28 00:47:13.257843 kubelet[2481]: E0428 00:47:13.211508 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:47:15.981298 kubelet[2481]: E0428 00:47:15.972898 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:47:18.964072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1211293262.mount: Deactivated successfully. Apr 28 00:47:21.592966 containerd[1454]: time="2026-04-28T00:47:21.591988123Z" level=info msg="CreateContainer within sandbox \"112d6a952e8c7cd4fa8ae5b31482ed1db437f6a8d282211e4b68737cda330154\" for &ContainerMetadata{Name:kube-apiserver,Attempt:3,} returns container id \"5dd13ed0e530ac093167888ef687ce7bba25e6868f06dc92ba98dce0f27e7d8c\"" Apr 28 00:47:34.460931 kubelet[2481]: E0428 00:47:34.460465 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:47:34.503605 kubelet[2481]: I0428 00:47:34.467289 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:47:41.263348 kubelet[2481]: E0428 00:47:38.092317 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 
00:47:44.137815 kubelet[2481]: E0428 00:47:44.127009 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:47:49.956343 kubelet[2481]: E0428 00:47:46.684089 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:47:55.466521 kubelet[2481]: E0428 00:47:52.760236 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:48:01.571393 containerd[1454]: time="2026-04-28T00:48:01.066232345Z" level=info msg="StartContainer for \"5dd13ed0e530ac093167888ef687ce7bba25e6868f06dc92ba98dce0f27e7d8c\"" Apr 28 00:48:03.252366 kubelet[2481]: E0428 00:48:03.234066 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:48:14.382407 kubelet[2481]: E0428 00:48:14.369275 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:48:16.131112 kubelet[2481]: E0428 00:48:06.689417 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:48:18.148359 kubelet[2481]: E0428 00:48:18.135529 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:48:19.312123 kubelet[2481]: E0428 00:48:19.306943 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:48:19.789385 kubelet[2481]: E0428 00:48:19.602059 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:48:24.071497 kubelet[2481]: E0428 00:48:21.854551 2481 reflector.go:200] 
"Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:48:35.198357 kubelet[2481]: E0428 00:48:33.281278 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:48:38.745533 kubelet[2481]: E0428 00:48:38.742326 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:48:43.727067 kubelet[2481]: E0428 00:48:42.106218 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:48:47.738323 kubelet[2481]: E0428 00:48:47.637196 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:48:49.678424 kubelet[2481]: E0428 00:48:49.666214 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:48:50.857692 kubelet[2481]: E0428 00:48:50.841291 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:48:51.330444 kubelet[2481]: E0428 00:48:45.947367 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:48:51.551031 kubelet[2481]: E0428 00:48:51.538533 2481 
reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:48:53.642341 kubelet[2481]: I0428 00:48:52.161558 2481 scope.go:117] "RemoveContainer" containerID="6065e3297b0ab80a1d1176c387defdd6b1b9967eec7843945c649cd32855fa88" Apr 28 00:48:58.760790 kubelet[2481]: E0428 00:48:54.810910 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:48:59.263285 kubelet[2481]: E0428 00:48:57.961862 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:49:00.312492 kubelet[2481]: E0428 00:48:58.887369 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:49:15.744539 kubelet[2481]: E0428 00:49:15.741829 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:49:31.294357 kubelet[2481]: E0428 00:49:28.713603 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:49:33.431492 kubelet[2481]: E0428 00:49:33.414355 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:49:35.808348 kubelet[2481]: E0428 00:49:35.791539 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:49:36.484331 kubelet[2481]: E0428 00:49:33.436643 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:49:40.675605 kubelet[2481]: E0428 00:49:36.100331 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:49:44.393424 kubelet[2481]: E0428 00:49:44.358602 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:49:45.976651 kubelet[2481]: E0428 00:49:45.973055 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:49:47.782513 kubelet[2481]: E0428 
00:49:47.760522 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:49:49.308758 kubelet[2481]: I0428 00:49:49.266182 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:49:51.653299 kubelet[2481]: E0428 00:49:44.311228 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:49:53.797498 kubelet[2481]: E0428 00:49:51.210462 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:49:55.313575 kubelet[2481]: E0428 00:49:51.336721 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:49:57.468457 kubelet[2481]: E0428 00:49:57.401552 2481 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="5dd13ed0e530ac093167888ef687ce7bba25e6868f06dc92ba98dce0f27e7d8c" Apr 28 00:49:58.885300 
kubelet[2481]: E0428 00:49:56.017591 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:50:00.109639 kubelet[2481]: E0428 00:50:00.097578 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:50:00.568363 kubelet[2481]: E0428 00:50:00.517063 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:50:00.728626 containerd[1454]: time="2026-04-28T00:50:00.089291785Z" level=info msg="shim disconnected" id=5dd13ed0e530ac093167888ef687ce7bba25e6868f06dc92ba98dce0f27e7d8c namespace=k8s.io Apr 28 00:50:01.392514 containerd[1454]: time="2026-04-28T00:50:00.336151079Z" level=error msg="Failed to pipe stderr of container \"5dd13ed0e530ac093167888ef687ce7bba25e6868f06dc92ba98dce0f27e7d8c\"" error="reading from a closed fifo" Apr 28 00:50:02.035933 containerd[1454]: time="2026-04-28T00:50:00.442616381Z" level=error msg="Failed to pipe stdout of container \"5dd13ed0e530ac093167888ef687ce7bba25e6868f06dc92ba98dce0f27e7d8c\"" error="reading from a closed fifo" Apr 28 00:50:02.660259 containerd[1454]: time="2026-04-28T00:50:00.854531001Z" level=warning msg="cleaning up after shim disconnected" id=5dd13ed0e530ac093167888ef687ce7bba25e6868f06dc92ba98dce0f27e7d8c namespace=k8s.io Apr 28 00:50:02.934348 containerd[1454]: time="2026-04-28T00:50:02.682365885Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:50:04.078096 containerd[1454]: time="2026-04-28T00:50:03.981340987Z" level=error msg="StartContainer for 
\"5dd13ed0e530ac093167888ef687ce7bba25e6868f06dc92ba98dce0f27e7d8c\" failed" error="failed to create containerd task: failed to create shim task: context canceled: unknown" Apr 28 00:50:07.500223 containerd[1454]: time="2026-04-28T00:50:07.074007349Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id 5dd13ed0e530ac093167888ef687ce7bba25e6868f06dc92ba98dce0f27e7d8c -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/5dd13ed0e530ac093167888ef687ce7bba25e6868f06dc92ba98dce0f27e7d8c delete" error="signal: killed" namespace=k8s.io Apr 28 00:50:07.500223 containerd[1454]: time="2026-04-28T00:50:07.477231364Z" level=warning msg="failed to clean up after shim disconnected" error=": signal: killed" id=5dd13ed0e530ac093167888ef687ce7bba25e6868f06dc92ba98dce0f27e7d8c namespace=k8s.io Apr 28 00:50:10.626161 kubelet[2481]: E0428 00:50:08.348237 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:50:14.389762 kubelet[2481]: E0428 00:50:13.246467 2481 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-apiserver,Image:registry.k8s.io/kube-apiserver:v1.33.8,Command:[kube-apiserver --advertise-address=10.0.0.14 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-servers=http://10.0.0.12:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key 
--requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{250 -3} {} 250m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/etc/kubernetes/pki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 6443 },Host:10.0.0.14,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 6443 
},Host:10.0.0.14,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:15,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 6443 },Host:10.0.0.14,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:180,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-apiserver-localhost_kube-system(304f8fe43d8dae9fa1e91eba54f25a22): RunContainerError: context deadline exceeded" logger="UnhandledError" Apr 28 00:50:16.214366 kubelet[2481]: E0428 00:50:13.722487 2481 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 3m1.789208523s ago; threshold is 3m0s" Apr 28 00:50:20.765179 kubelet[2481]: E0428 00:50:19.797068 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with RunContainerError: \"context deadline exceeded\"" pod="kube-system/kube-apiserver-localhost" podUID="304f8fe43d8dae9fa1e91eba54f25a22" Apr 28 00:50:21.977053 kubelet[2481]: E0428 00:50:21.940134 2481 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 3m7.877582568s ago; threshold is 3m0s" Apr 28 00:50:22.910230 containerd[1454]: time="2026-04-28T00:50:22.856579132Z" level=info msg="RemoveContainer for \"6065e3297b0ab80a1d1176c387defdd6b1b9967eec7843945c649cd32855fa88\"" Apr 28 00:50:27.389807 kubelet[2481]: E0428 00:50:23.467570 2481 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:50:29.203035 kubelet[2481]: E0428 00:50:29.181924 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:50:30.399514 containerd[1454]: time="2026-04-28T00:50:30.388634785Z" level=info msg="RemoveContainer for \"6065e3297b0ab80a1d1176c387defdd6b1b9967eec7843945c649cd32855fa88\" returns successfully" Apr 28 00:50:31.041189 kubelet[2481]: E0428 00:50:30.904540 2481 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 3m13.49209291s ago; threshold is 3m0s" Apr 28 00:50:41.706347 kubelet[2481]: E0428 00:50:31.022254 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:50:44.283399 kubelet[2481]: E0428 00:50:41.551398 2481 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m26.798599446s ago; threshold is 3m0s]" Apr 28 
00:50:49.093460 kubelet[2481]: E0428 00:50:49.038141 2481 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m35.294005539s ago; threshold is 3m0s]" Apr 28 00:50:49.745363 kubelet[2481]: E0428 00:50:49.687182 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:50:50.198155 kubelet[2481]: E0428 00:50:50.110142 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:50:51.222833 kubelet[2481]: E0428 00:50:51.216370 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:50:53.746532 kubelet[2481]: E0428 00:50:53.728029 2481 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 3m40.526125803s ago; threshold is 3m0s" Apr 28 00:50:57.664520 kubelet[2481]: E0428 00:50:57.649255 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:50:58.961362 kubelet[2481]: E0428 00:50:58.751299 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:50:59.017261 kubelet[2481]: E0428 00:50:59.013861 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:50:59.017261 kubelet[2481]: E0428 00:50:59.012435 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:50:59.185288 kubelet[2481]: E0428 00:50:59.184341 2481 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 3m47.323807504s ago; threshold is 3m0s" Apr 28 00:50:59.356429 kubelet[2481]: E0428 00:50:59.353446 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: 
connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:50:59.393536 kubelet[2481]: I0428 00:50:59.363391 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:50:59.626749 kubelet[2481]: E0428 00:50:59.619132 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:50:59.633406 kubelet[2481]: E0428 00:50:59.628392 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:50:59.775185 kubelet[2481]: E0428 00:50:59.770859 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:50:59.775185 kubelet[2481]: E0428 00:50:59.773613 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:50:59.801127 kubelet[2481]: E0428 00:50:59.800994 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:51:00.431380 kubelet[2481]: E0428 00:51:00.417011 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:51:00.566020 kubelet[2481]: E0428 00:51:00.464438 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:51:00.566020 kubelet[2481]: I0428 00:51:00.464740 2481 scope.go:117] "RemoveContainer" 
containerID="5dd13ed0e530ac093167888ef687ce7bba25e6868f06dc92ba98dce0f27e7d8c" Apr 28 00:51:00.566020 kubelet[2481]: E0428 00:51:00.499222 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:51:00.624504 kubelet[2481]: E0428 00:51:00.579168 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:51:00.847954 containerd[1454]: time="2026-04-28T00:51:00.840307430Z" level=info msg="CreateContainer within sandbox \"112d6a952e8c7cd4fa8ae5b31482ed1db437f6a8d282211e4b68737cda330154\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:4,}" Apr 28 00:51:01.383267 containerd[1454]: time="2026-04-28T00:51:01.382924616Z" level=info msg="CreateContainer within sandbox \"112d6a952e8c7cd4fa8ae5b31482ed1db437f6a8d282211e4b68737cda330154\" for &ContainerMetadata{Name:kube-apiserver,Attempt:4,} returns container id \"c6605e28fb8e3460225ca52b2f3be6017ae2c00d16b2c43405d3476176a93eaa\"" Apr 28 00:51:01.580116 containerd[1454]: time="2026-04-28T00:51:01.577913409Z" level=info msg="StartContainer for \"c6605e28fb8e3460225ca52b2f3be6017ae2c00d16b2c43405d3476176a93eaa\"" Apr 28 00:51:02.029821 kubelet[2481]: E0428 00:51:02.028890 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:51:02.034089 kubelet[2481]: E0428 00:51:02.028905 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:51:02.034089 kubelet[2481]: E0428 00:51:02.033157 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Apr 28 00:51:02.034089 kubelet[2481]: E0428 00:51:02.033256 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:51:04.428835 kubelet[2481]: E0428 00:51:04.428043 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:51:04.447349 kubelet[2481]: E0428 00:51:04.428872 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:51:06.726199 kubelet[2481]: E0428 00:51:06.725463 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:51:06.989598 kubelet[2481]: I0428 00:51:06.982315 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:51:07.134636 kubelet[2481]: E0428 00:51:07.129527 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:51:10.020550 kubelet[2481]: E0428 00:51:10.018574 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:51:10.032017 kubelet[2481]: E0428 00:51:10.021558 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:51:13.848627 kubelet[2481]: E0428 00:51:13.760032 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:51:14.381559 kubelet[2481]: I0428 00:51:14.318123 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:51:14.415937 kubelet[2481]: E0428 00:51:14.415079 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:51:16.115941 kubelet[2481]: E0428 00:51:16.114510 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:51:20.093244 kubelet[2481]: E0428 00:51:20.091888 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:51:20.261115 kubelet[2481]: E0428 00:51:20.245409 2481 event.go:368] "Unable to write event (may 
retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:51:22.100084 kubelet[2481]: E0428 00:51:22.096302 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:51:22.634570 kubelet[2481]: I0428 00:51:22.632364 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:51:22.780192 kubelet[2481]: E0428 00:51:22.778213 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:51:29.457458 kubelet[2481]: E0428 00:51:29.456743 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:51:30.675356 kubelet[2481]: E0428 00:51:30.675192 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed 
to get node info: node \"localhost\" not found" Apr 28 00:51:30.713616 kubelet[2481]: E0428 00:51:30.713221 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:51:30.715444 kubelet[2481]: I0428 00:51:30.715347 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:51:30.762695 kubelet[2481]: E0428 00:51:30.761798 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:51:36.588573 kubelet[2481]: E0428 00:51:36.586231 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:51:38.266845 kubelet[2481]: I0428 00:51:38.265873 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:51:38.518771 kubelet[2481]: E0428 00:51:38.394513 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:51:40.698078 kubelet[2481]: E0428 00:51:40.682633 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:51:40.813455 kubelet[2481]: E0428 00:51:40.765162 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:51:40.898127 kubelet[2481]: E0428 00:51:40.867747 2481 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:51:40.936404 kubelet[2481]: E0428 00:51:40.927306 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e692047b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,LastTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:51:43.671583 kubelet[2481]: E0428 00:51:43.666311 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:51:44.515313 kubelet[2481]: E0428 00:51:44.514571 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:51:45.462775 kubelet[2481]: I0428 00:51:45.462221 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:51:45.466112 kubelet[2481]: E0428 00:51:45.464888 2481 kubelet_node_status.go:107] "Unable to register node with API 
server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:51:45.819189 kubelet[2481]: E0428 00:51:45.811203 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:51:47.188993 kubelet[2481]: E0428 00:51:47.188553 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:51:50.272057 kubelet[2481]: E0428 00:51:50.240184 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e692047b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,LastTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:51:50.690210 kubelet[2481]: E0428 00:51:50.689852 2481 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:51:50.691822 kubelet[2481]: E0428 00:51:50.690426 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:51:50.711955 kubelet[2481]: E0428 00:51:50.709459 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:51:52.944373 kubelet[2481]: I0428 00:51:52.943513 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:51:53.147091 kubelet[2481]: E0428 00:51:53.145784 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:51:57.808467 kubelet[2481]: E0428 00:51:57.807804 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:51:58.490559 kubelet[2481]: E0428 00:51:58.474626 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:51:59.665260 kubelet[2481]: I0428 00:51:59.662254 2481 scope.go:117] "RemoveContainer" 
containerID="5dd13ed0e530ac093167888ef687ce7bba25e6868f06dc92ba98dce0f27e7d8c" Apr 28 00:52:00.953580 containerd[1454]: time="2026-04-28T00:52:00.913787863Z" level=info msg="RemoveContainer for \"5dd13ed0e530ac093167888ef687ce7bba25e6868f06dc92ba98dce0f27e7d8c\"" Apr 28 00:52:01.317213 kubelet[2481]: E0428 00:52:01.117475 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:52:01.802326 kubelet[2481]: I0428 00:52:00.968606 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:52:01.802326 kubelet[2481]: E0428 00:52:01.708224 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e692047b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,LastTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:52:01.802326 kubelet[2481]: E0428 00:52:01.709620 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:52:03.747641 containerd[1454]: time="2026-04-28T00:52:03.746949561Z" level=info msg="RemoveContainer for \"5dd13ed0e530ac093167888ef687ce7bba25e6868f06dc92ba98dce0f27e7d8c\" returns successfully" Apr 28 
00:52:04.959774 kubelet[2481]: E0428 00:52:04.954490 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:52:08.963334 kubelet[2481]: I0428 00:52:08.962817 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:52:08.983448 kubelet[2481]: E0428 00:52:08.981696 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:52:11.057064 kubelet[2481]: E0428 00:52:11.034717 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:52:11.290717 kubelet[2481]: E0428 00:52:11.243992 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:11.710540 kubelet[2481]: E0428 00:52:11.708793 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:52:11.851472 kubelet[2481]: E0428 00:52:11.846231 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e692047b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: 
NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,LastTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:52:12.000850 kubelet[2481]: E0428 00:52:11.998616 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:52:16.255559 kubelet[2481]: I0428 00:52:16.254802 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:52:16.350462 kubelet[2481]: E0428 00:52:16.255968 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:52:16.350462 kubelet[2481]: E0428 00:52:16.271263 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:52:16.350462 kubelet[2481]: E0428 00:52:16.288399 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:19.070848 kubelet[2481]: E0428 00:52:19.069406 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:52:19.217591 kubelet[2481]: E0428 00:52:19.214246 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the 
control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:52:21.838017 kubelet[2481]: E0428 00:52:21.808471 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:52:21.914487 kubelet[2481]: E0428 00:52:21.910164 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e692047b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,LastTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:52:23.528824 kubelet[2481]: I0428 00:52:23.528477 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:52:23.550739 kubelet[2481]: E0428 00:52:23.545205 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:52:24.250310 kubelet[2481]: E0428 00:52:24.218379 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:52:26.211091 kubelet[2481]: E0428 00:52:26.206635 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:52:30.950091 kubelet[2481]: I0428 00:52:30.949938 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:52:30.951581 kubelet[2481]: E0428 00:52:30.950256 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:52:31.847282 kubelet[2481]: E0428 00:52:31.846594 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:52:31.988260 kubelet[2481]: E0428 00:52:31.984448 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e692047b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,LastTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:52:33.469540 kubelet[2481]: E0428 00:52:33.458180 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:52:38.159028 kubelet[2481]: E0428 00:52:38.143860 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:52:38.467779 kubelet[2481]: I0428 00:52:38.421586 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:52:38.618266 kubelet[2481]: E0428 00:52:38.613018 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:52:38.727454 kubelet[2481]: E0428 00:52:38.715504 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:52:40.653295 kubelet[2481]: E0428 00:52:40.652617 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:52:41.903251 kubelet[2481]: E0428 00:52:41.902392 2481 eviction_manager.go:292] "Eviction 
manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:52:42.063199 kubelet[2481]: E0428 00:52:42.032590 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e692047b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,LastTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:52:45.520583 kubelet[2481]: E0428 00:52:45.517576 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:52:46.168098 kubelet[2481]: I0428 00:52:46.160040 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:52:46.313134 kubelet[2481]: E0428 00:52:46.310289 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:52:47.925116 kubelet[2481]: E0428 00:52:47.893008 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:52:51.298347 kubelet[2481]: E0428 00:52:51.295500 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:52:51.944538 kubelet[2481]: E0428 00:52:51.944073 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:52:52.092109 kubelet[2481]: E0428 00:52:52.091259 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e692047b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,LastTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:52:54.157363 kubelet[2481]: I0428 00:52:54.153003 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:52:54.207596 kubelet[2481]: E0428 00:52:54.206889 2481 kubelet_node_status.go:107] "Unable to register node with API server" 
err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:52:55.469004 kubelet[2481]: E0428 00:52:55.465185 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:53:02.079422 kubelet[2481]: E0428 00:53:01.889075 2481 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c6605e28fb8e3460225ca52b2f3be6017ae2c00d16b2c43405d3476176a93eaa" Apr 28 00:53:02.257200 kubelet[2481]: E0428 00:53:02.256217 2481 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-apiserver,Image:registry.k8s.io/kube-apiserver:v1.33.8,Command:[kube-apiserver --advertise-address=10.0.0.14 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-servers=http://10.0.0.12:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub 
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{250 -3} {} 250m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/etc/kubernetes/pki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 6443 },Host:10.0.0.14,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 6443 },Host:10.0.0.14,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:15,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 6443 
},Host:10.0.0.14,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:180,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-apiserver-localhost_kube-system(304f8fe43d8dae9fa1e91eba54f25a22): RunContainerError: context deadline exceeded" logger="UnhandledError" Apr 28 00:53:02.562978 kubelet[2481]: E0428 00:53:02.466321 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with RunContainerError: \"context deadline exceeded\"" pod="kube-system/kube-apiserver-localhost" podUID="304f8fe43d8dae9fa1e91eba54f25a22" Apr 28 00:53:02.607589 kubelet[2481]: E0428 00:53:02.603903 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:53:02.639355 kubelet[2481]: I0428 00:53:02.638207 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:53:02.768629 kubelet[2481]: E0428 00:53:02.621108 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e692047b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,LastTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:53:03.196174 kubelet[2481]: E0428 00:53:03.194705 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:53:03.211173 kubelet[2481]: E0428 00:53:03.206141 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:53:04.927386 kubelet[2481]: E0428 00:53:04.926650 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:53:04.927386 kubelet[2481]: I0428 00:53:04.927773 2481 scope.go:117] "RemoveContainer" containerID="c6605e28fb8e3460225ca52b2f3be6017ae2c00d16b2c43405d3476176a93eaa" Apr 28 00:53:04.947931 kubelet[2481]: E0428 00:53:04.940003 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:05.414348 containerd[1454]: time="2026-04-28T00:53:05.411692407Z" level=info msg="CreateContainer within sandbox \"112d6a952e8c7cd4fa8ae5b31482ed1db437f6a8d282211e4b68737cda330154\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:5,}" Apr 28 00:53:06.402248 containerd[1454]: time="2026-04-28T00:53:06.400324112Z" level=info msg="CreateContainer within sandbox \"112d6a952e8c7cd4fa8ae5b31482ed1db437f6a8d282211e4b68737cda330154\" for &ContainerMetadata{Name:kube-apiserver,Attempt:5,} returns container id \"df271efac18ae2ceedb93fcbdba5d3e4bf02fbea90f55594cecc6cb37d263f34\"" Apr 28 00:53:06.465104 containerd[1454]: time="2026-04-28T00:53:06.463761868Z" level=info 
msg="StartContainer for \"df271efac18ae2ceedb93fcbdba5d3e4bf02fbea90f55594cecc6cb37d263f34\"" Apr 28 00:53:06.822566 containerd[1454]: time="2026-04-28T00:53:06.821727316Z" level=error msg="Failed to pipe stderr of container \"c6605e28fb8e3460225ca52b2f3be6017ae2c00d16b2c43405d3476176a93eaa\"" error="reading from a closed fifo" Apr 28 00:53:06.822566 containerd[1454]: time="2026-04-28T00:53:06.822276495Z" level=error msg="Failed to pipe stdout of container \"c6605e28fb8e3460225ca52b2f3be6017ae2c00d16b2c43405d3476176a93eaa\"" error="reading from a closed fifo" Apr 28 00:53:06.830117 containerd[1454]: time="2026-04-28T00:53:06.822843991Z" level=info msg="shim disconnected" id=c6605e28fb8e3460225ca52b2f3be6017ae2c00d16b2c43405d3476176a93eaa namespace=k8s.io Apr 28 00:53:06.830117 containerd[1454]: time="2026-04-28T00:53:06.823022439Z" level=warning msg="cleaning up after shim disconnected" id=c6605e28fb8e3460225ca52b2f3be6017ae2c00d16b2c43405d3476176a93eaa namespace=k8s.io Apr 28 00:53:06.830117 containerd[1454]: time="2026-04-28T00:53:06.823031975Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:53:06.838808 containerd[1454]: time="2026-04-28T00:53:06.838238689Z" level=error msg="StartContainer for \"c6605e28fb8e3460225ca52b2f3be6017ae2c00d16b2c43405d3476176a93eaa\" failed" error="failed to create containerd task: failed to create shim task: context deadline exceeded: unknown" Apr 28 00:53:07.330316 containerd[1454]: time="2026-04-28T00:53:07.325373702Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id c6605e28fb8e3460225ca52b2f3be6017ae2c00d16b2c43405d3476176a93eaa -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/c6605e28fb8e3460225ca52b2f3be6017ae2c00d16b2c43405d3476176a93eaa delete" error="exit status 1" namespace=k8s.io Apr 28 00:53:07.340537 containerd[1454]: time="2026-04-28T00:53:07.337243126Z" level=warning 
msg="failed to clean up after shim disconnected" error="io.containerd.runc.v2: open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c6605e28fb8e3460225ca52b2f3be6017ae2c00d16b2c43405d3476176a93eaa/runtime: no such file or directory: exit status 1" id=c6605e28fb8e3460225ca52b2f3be6017ae2c00d16b2c43405d3476176a93eaa namespace=k8s.io Apr 28 00:53:09.263414 kubelet[2481]: E0428 00:53:09.257756 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:53:10.504150 kubelet[2481]: E0428 00:53:10.503314 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:53:10.507425 kubelet[2481]: E0428 00:53:10.504752 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:53:10.507425 kubelet[2481]: I0428 00:53:10.505447 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:53:10.507425 kubelet[2481]: E0428 00:53:10.506492 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:53:12.659718 kubelet[2481]: E0428 00:53:12.656358 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node 
\"localhost\" not found" Apr 28 00:53:13.095125 kubelet[2481]: E0428 00:53:13.085555 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e692047b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,LastTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:53:17.760274 kubelet[2481]: E0428 00:53:17.749045 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:53:18.351460 kubelet[2481]: I0428 00:53:18.350047 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:53:18.399838 kubelet[2481]: E0428 00:53:18.392451 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:53:22.770019 kubelet[2481]: E0428 00:53:22.768394 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:53:23.316381 kubelet[2481]: E0428 00:53:23.310244 2481 event.go:368] "Unable to write event (may retry after sleeping)" 
err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e692047b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,LastTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:53:23.656911 kubelet[2481]: E0428 00:53:23.628605 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:53:25.175442 kubelet[2481]: E0428 00:53:25.167581 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:53:26.173488 kubelet[2481]: E0428 00:53:26.171620 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:53:26.419468 kubelet[2481]: E0428 00:53:26.415811 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 
00:53:26.444288 kubelet[2481]: I0428 00:53:26.436149 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:53:26.455456 kubelet[2481]: E0428 00:53:26.447830 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:53:29.188777 kubelet[2481]: E0428 00:53:29.179611 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:53:32.262294 kubelet[2481]: E0428 00:53:32.261806 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:53:32.798492 kubelet[2481]: E0428 00:53:32.794738 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:53:33.814896 kubelet[2481]: E0428 00:53:33.761650 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e692047b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC 
m=+11.217004401,LastTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:53:33.882522 kubelet[2481]: E0428 00:53:33.816332 2481 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{localhost.18aa5de0e692047b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,LastTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:53:33.930626 kubelet[2481]: I0428 00:53:33.881644 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:53:33.938159 kubelet[2481]: E0428 00:53:33.936944 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:53:33.943594 kubelet[2481]: E0428 00:53:33.936913 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de4b6b78ab3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across 
pods,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,LastTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:53:34.340603 kubelet[2481]: E0428 00:53:34.340204 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:53:34.348528 kubelet[2481]: E0428 00:53:34.348426 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:35.956205 kubelet[2481]: E0428 00:53:35.945806 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de4b6b78ab3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,LastTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:53:39.316243 kubelet[2481]: E0428 00:53:39.314491 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:53:39.446654 kubelet[2481]: E0428 00:53:39.347740 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:53:41.597987 kubelet[2481]: I0428 00:53:41.597505 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:53:41.857306 kubelet[2481]: E0428 00:53:41.850326 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:53:42.800568 kubelet[2481]: E0428 00:53:42.799371 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:53:46.040536 kubelet[2481]: E0428 00:53:46.000353 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de4b6b78ab3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,LastTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:53:46.500189 kubelet[2481]: 
E0428 00:53:46.499233 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:53:47.331552 kubelet[2481]: E0428 00:53:47.329383 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:53:47.521452 kubelet[2481]: E0428 00:53:47.519306 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:53:49.615350 kubelet[2481]: I0428 00:53:49.607911 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:53:50.166302 kubelet[2481]: E0428 00:53:50.164162 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:53:52.860601 kubelet[2481]: E0428 00:53:52.859529 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:53:53.749841 kubelet[2481]: E0428 00:53:53.747448 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:53:55.349298 
kubelet[2481]: E0428 00:53:55.348240 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:53:56.159276 kubelet[2481]: E0428 00:53:56.153749 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de4b6b78ab3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,LastTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:53:57.361042 kubelet[2481]: I0428 00:53:57.360077 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:53:57.373135 kubelet[2481]: E0428 00:53:57.364255 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:54:00.791993 kubelet[2481]: E0428 00:54:00.791126 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: 
connection refused" interval="7s" Apr 28 00:54:02.972343 kubelet[2481]: E0428 00:54:02.962610 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:54:04.719653 kubelet[2481]: I0428 00:54:04.668279 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:54:04.771982 kubelet[2481]: E0428 00:54:04.760561 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:54:04.858729 kubelet[2481]: I0428 00:54:04.858583 2481 scope.go:117] "RemoveContainer" containerID="c6605e28fb8e3460225ca52b2f3be6017ae2c00d16b2c43405d3476176a93eaa" Apr 28 00:54:05.250393 containerd[1454]: time="2026-04-28T00:54:05.248719430Z" level=info msg="RemoveContainer for \"c6605e28fb8e3460225ca52b2f3be6017ae2c00d16b2c43405d3476176a93eaa\"" Apr 28 00:54:05.683296 containerd[1454]: time="2026-04-28T00:54:05.622081619Z" level=info msg="RemoveContainer for \"c6605e28fb8e3460225ca52b2f3be6017ae2c00d16b2c43405d3476176a93eaa\" returns successfully" Apr 28 00:54:06.191478 kubelet[2481]: E0428 00:54:06.163426 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de4b6b78ab3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,LastTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC 
m=+27.594026385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:54:08.030269 kubelet[2481]: E0428 00:54:07.997972 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:54:11.119504 kubelet[2481]: E0428 00:54:11.117299 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:54:12.098158 kubelet[2481]: I0428 00:54:12.097881 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:54:12.124024 kubelet[2481]: E0428 00:54:12.098630 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:54:13.007409 kubelet[2481]: E0428 00:54:13.006438 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:54:15.260768 kubelet[2481]: E0428 00:54:15.248222 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:54:16.465239 kubelet[2481]: E0428 00:54:16.319389 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial 
tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de4b6b78ab3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,LastTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:54:19.812068 kubelet[2481]: I0428 00:54:19.803620 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:54:19.851923 kubelet[2481]: E0428 00:54:19.836298 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:54:22.416557 kubelet[2481]: E0428 00:54:22.389146 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:54:22.547031 kubelet[2481]: E0428 00:54:22.543080 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:54:23.047587 kubelet[2481]: E0428 00:54:23.031566 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node 
\"localhost\" not found" Apr 28 00:54:26.718270 kubelet[2481]: E0428 00:54:26.709135 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de4b6b78ab3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,LastTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:54:27.267375 kubelet[2481]: I0428 00:54:27.266366 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:54:27.267375 kubelet[2481]: E0428 00:54:27.267041 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:54:27.359037 kubelet[2481]: E0428 00:54:27.268021 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:54:28.356611 kubelet[2481]: E0428 00:54:28.356054 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:54:29.714362 kubelet[2481]: E0428 00:54:29.712626 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:54:33.115359 kubelet[2481]: E0428 00:54:33.112869 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:54:34.664183 kubelet[2481]: I0428 00:54:34.663429 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:54:34.709582 kubelet[2481]: E0428 00:54:34.708724 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:54:36.765871 kubelet[2481]: E0428 00:54:36.765222 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:54:36.765871 kubelet[2481]: E0428 00:54:36.765086 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de4b6b78ab3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,LastTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:54:41.596743 kubelet[2481]: E0428 00:54:41.596066 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:54:42.093244 kubelet[2481]: I0428 00:54:42.092273 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:54:42.315613 kubelet[2481]: E0428 00:54:42.281490 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:54:43.181407 kubelet[2481]: E0428 00:54:43.179037 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:54:44.006875 kubelet[2481]: E0428 00:54:43.990036 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:54:46.979532 kubelet[2481]: E0428 00:54:46.970726 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de4b6b78ab3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,LastTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:54:49.019218 kubelet[2481]: E0428 00:54:49.017380 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:54:49.266286 kubelet[2481]: E0428 00:54:49.145194 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:54:50.049465 kubelet[2481]: I0428 00:54:50.048135 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:54:50.402163 kubelet[2481]: E0428 00:54:50.361630 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:54:51.866623 kubelet[2481]: E0428 00:54:51.682238 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:54:53.259234 kubelet[2481]: E0428 
00:54:53.253884 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:54:55.677417 kubelet[2481]: E0428 00:54:55.673505 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:54:56.529739 kubelet[2481]: E0428 00:54:56.504491 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:54:57.013544 kubelet[2481]: E0428 00:54:56.968609 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:54:57.231319 kubelet[2481]: E0428 00:54:57.016602 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de4b6b78ab3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,LastTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:54:58.031457 kubelet[2481]: I0428 00:54:58.013648 2481 
kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:54:58.067991 kubelet[2481]: E0428 00:54:58.062690 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:54:59.028536 kubelet[2481]: E0428 00:54:59.022758 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:54:59.540835 kubelet[2481]: E0428 00:54:59.537571 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:55:02.096535 kubelet[2481]: E0428 00:55:02.093365 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:55:03.397836 kubelet[2481]: E0428 00:55:03.390364 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:55:05.745525 kubelet[2481]: I0428 00:55:05.744437 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:55:06.383514 kubelet[2481]: E0428 00:55:06.359386 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial 
tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:55:06.811953 kubelet[2481]: E0428 00:55:06.698121 2481 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="df271efac18ae2ceedb93fcbdba5d3e4bf02fbea90f55594cecc6cb37d263f34" Apr 28 00:55:07.225392 kubelet[2481]: E0428 00:55:07.063155 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:55:07.658343 kubelet[2481]: E0428 00:55:07.493611 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de4b6b78ab3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,LastTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:55:07.879922 kubelet[2481]: E0428 00:55:07.657044 2481 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-apiserver,Image:registry.k8s.io/kube-apiserver:v1.33.8,Command:[kube-apiserver --advertise-address=10.0.0.14 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction 
--enable-bootstrap-token-auth=true --etcd-servers=http://10.0.0.12:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{250 -3} {} 250m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/etc/kubernetes/pki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 6443 
},Host:10.0.0.14,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 6443 },Host:10.0.0.14,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:15,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 6443 },Host:10.0.0.14,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:180,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-apiserver-localhost_kube-system(304f8fe43d8dae9fa1e91eba54f25a22): RunContainerError: context deadline exceeded" logger="UnhandledError" Apr 28 00:55:07.879922 kubelet[2481]: E0428 00:55:07.757212 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with RunContainerError: \"context deadline exceeded\"" pod="kube-system/kube-apiserver-localhost" podUID="304f8fe43d8dae9fa1e91eba54f25a22" Apr 28 00:55:09.320595 kubelet[2481]: E0428 00:55:09.318129 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Node" Apr 28 00:55:10.115375 kubelet[2481]: E0428 00:55:10.107125 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:55:10.245452 kubelet[2481]: I0428 00:55:10.242784 2481 scope.go:117] "RemoveContainer" containerID="df271efac18ae2ceedb93fcbdba5d3e4bf02fbea90f55594cecc6cb37d263f34" Apr 28 00:55:10.317606 kubelet[2481]: E0428 00:55:10.308066 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:55:11.587806 containerd[1454]: time="2026-04-28T00:55:11.564595834Z" level=info msg="CreateContainer within sandbox \"112d6a952e8c7cd4fa8ae5b31482ed1db437f6a8d282211e4b68737cda330154\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:6,}" Apr 28 00:55:11.715270 containerd[1454]: time="2026-04-28T00:55:11.710493803Z" level=info msg="shim disconnected" id=df271efac18ae2ceedb93fcbdba5d3e4bf02fbea90f55594cecc6cb37d263f34 namespace=k8s.io Apr 28 00:55:11.715270 containerd[1454]: time="2026-04-28T00:55:11.711387364Z" level=warning msg="cleaning up after shim disconnected" id=df271efac18ae2ceedb93fcbdba5d3e4bf02fbea90f55594cecc6cb37d263f34 namespace=k8s.io Apr 28 00:55:11.715270 containerd[1454]: time="2026-04-28T00:55:11.711405763Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:55:11.771272 containerd[1454]: time="2026-04-28T00:55:11.751593800Z" level=error msg="Failed to pipe stdout of container \"df271efac18ae2ceedb93fcbdba5d3e4bf02fbea90f55594cecc6cb37d263f34\"" error="reading from a closed fifo" Apr 28 00:55:11.810502 containerd[1454]: time="2026-04-28T00:55:11.767382109Z" level=error msg="Failed to pipe stderr of container \"df271efac18ae2ceedb93fcbdba5d3e4bf02fbea90f55594cecc6cb37d263f34\"" error="reading from a closed fifo" Apr 28 00:55:12.274575 containerd[1454]: 
time="2026-04-28T00:55:12.273470219Z" level=error msg="StartContainer for \"df271efac18ae2ceedb93fcbdba5d3e4bf02fbea90f55594cecc6cb37d263f34\" failed" error="failed to create containerd task: failed to create shim task: context deadline exceeded: unknown" Apr 28 00:55:13.564570 kubelet[2481]: E0428 00:55:13.560392 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:55:14.687826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3809168349.mount: Deactivated successfully. Apr 28 00:55:15.279695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2392163033.mount: Deactivated successfully. Apr 28 00:55:15.639304 containerd[1454]: time="2026-04-28T00:55:15.634895207Z" level=info msg="CreateContainer within sandbox \"112d6a952e8c7cd4fa8ae5b31482ed1db437f6a8d282211e4b68737cda330154\" for &ContainerMetadata{Name:kube-apiserver,Attempt:6,} returns container id \"64c57b33903cffeac2f72b1516b744e22d067007e85781edc8046a6d5452eee4\"" Apr 28 00:55:15.929307 kubelet[2481]: E0428 00:55:15.819187 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:55:15.965818 kubelet[2481]: I0428 00:55:15.951502 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:55:16.454622 containerd[1454]: time="2026-04-28T00:55:16.440215631Z" level=info msg="StartContainer for \"64c57b33903cffeac2f72b1516b744e22d067007e85781edc8046a6d5452eee4\"" Apr 28 00:55:16.574472 kubelet[2481]: E0428 00:55:16.559529 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:55:16.809633 containerd[1454]: 
time="2026-04-28T00:55:16.757318519Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id df271efac18ae2ceedb93fcbdba5d3e4bf02fbea90f55594cecc6cb37d263f34 -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/df271efac18ae2ceedb93fcbdba5d3e4bf02fbea90f55594cecc6cb37d263f34 delete" error="signal: killed" namespace=k8s.io Apr 28 00:55:16.809633 containerd[1454]: time="2026-04-28T00:55:16.800597572Z" level=warning msg="failed to clean up after shim disconnected" error=": signal: killed" id=df271efac18ae2ceedb93fcbdba5d3e4bf02fbea90f55594cecc6cb37d263f34 namespace=k8s.io Apr 28 00:55:18.064494 kubelet[2481]: E0428 00:55:18.056128 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de4b6b78ab3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,LastTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:55:18.161066 kubelet[2481]: E0428 00:55:18.158648 2481 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{localhost.18aa5de4b6b78ab3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,LastTimestamp:2026-04-28 00:30:39.850490547 +0000 UTC m=+27.594026385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:55:18.892342 kubelet[2481]: E0428 00:55:18.816390 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e41cf118\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:53.155425783 +0000 UTC m=+40.898961686,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:55:23.017710 kubelet[2481]: E0428 00:55:23.015963 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:55:23.762989 kubelet[2481]: E0428 00:55:23.759545 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: 
node \"localhost\" not found" Apr 28 00:55:23.956740 kubelet[2481]: I0428 00:55:23.956023 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:55:23.993472 kubelet[2481]: E0428 00:55:23.991872 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:55:27.452419 kubelet[2481]: E0428 00:55:27.376636 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e41cf118\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:53.155425783 +0000 UTC m=+40.898961686,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:55:30.175712 kubelet[2481]: E0428 00:55:30.174057 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:55:31.201161 kubelet[2481]: I0428 00:55:31.199478 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:55:31.246184 kubelet[2481]: E0428 00:55:31.244159 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:55:31.281905 kubelet[2481]: E0428 00:55:31.280818 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:55:33.813358 kubelet[2481]: E0428 00:55:33.812021 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:55:36.516876 kubelet[2481]: E0428 00:55:36.483222 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:55:37.210931 kubelet[2481]: E0428 00:55:37.206291 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:55:37.574431 kubelet[2481]: E0428 00:55:37.469021 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e41cf118\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 
localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:53.155425783 +0000 UTC m=+40.898961686,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:55:38.815582 kubelet[2481]: I0428 00:55:38.804192 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:55:39.120212 kubelet[2481]: E0428 00:55:38.998316 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:55:40.276513 kubelet[2481]: E0428 00:55:40.258105 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:55:43.888309 kubelet[2481]: E0428 00:55:43.887339 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:55:44.297289 kubelet[2481]: E0428 00:55:44.296007 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:55:46.252376 kubelet[2481]: I0428 00:55:46.216019 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:55:46.334617 kubelet[2481]: E0428 00:55:46.333392 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:55:47.492424 kubelet[2481]: E0428 00:55:47.491082 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e41cf118\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:53.155425783 +0000 UTC m=+40.898961686,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:55:49.921288 kubelet[2481]: E0428 00:55:49.920099 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:55:51.399521 kubelet[2481]: E0428 00:55:51.394770 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:55:53.960016 kubelet[2481]: I0428 00:55:53.959280 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:55:54.116501 kubelet[2481]: E0428 00:55:53.959434 2481 
eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:55:54.116501 kubelet[2481]: E0428 00:55:53.961624 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:55:57.555255 kubelet[2481]: E0428 00:55:57.529454 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e41cf118\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:53.155425783 +0000 UTC m=+40.898961686,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:55:58.596100 kubelet[2481]: E0428 00:55:58.593745 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:56:00.720329 kubelet[2481]: E0428 00:56:00.716929 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:56:01.751033 kubelet[2481]: I0428 00:56:01.732270 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:56:02.034289 kubelet[2481]: E0428 00:56:02.014609 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:56:03.362446 kubelet[2481]: E0428 00:56:03.354405 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:56:03.380067 kubelet[2481]: E0428 00:56:03.368296 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:56:03.509684 kubelet[2481]: E0428 00:56:03.499096 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:56:04.165339 kubelet[2481]: E0428 00:56:04.159801 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:56:05.692766 kubelet[2481]: E0428 00:56:05.680929 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:56:07.595229 kubelet[2481]: E0428 00:56:07.594399 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch 
\"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e41cf118\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:53.155425783 +0000 UTC m=+40.898961686,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:56:08.164281 kubelet[2481]: I0428 00:56:08.163130 2481 scope.go:117] "RemoveContainer" containerID="df271efac18ae2ceedb93fcbdba5d3e4bf02fbea90f55594cecc6cb37d263f34" Apr 28 00:56:08.726174 containerd[1454]: time="2026-04-28T00:56:08.725615745Z" level=info msg="RemoveContainer for \"df271efac18ae2ceedb93fcbdba5d3e4bf02fbea90f55594cecc6cb37d263f34\"" Apr 28 00:56:08.886100 containerd[1454]: time="2026-04-28T00:56:08.879259153Z" level=info msg="RemoveContainer for \"df271efac18ae2ceedb93fcbdba5d3e4bf02fbea90f55594cecc6cb37d263f34\" returns successfully" Apr 28 00:56:09.208250 kubelet[2481]: I0428 00:56:09.198585 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:56:09.339097 kubelet[2481]: E0428 00:56:09.209132 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:56:09.339097 kubelet[2481]: E0428 00:56:09.333284 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 
28 00:56:09.395070 kubelet[2481]: E0428 00:56:09.394306 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:56:12.730525 kubelet[2481]: E0428 00:56:12.727002 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:56:14.173438 kubelet[2481]: E0428 00:56:14.172525 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:56:16.654078 kubelet[2481]: I0428 00:56:16.653622 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:56:16.667355 kubelet[2481]: E0428 00:56:16.659985 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:56:17.690275 kubelet[2481]: E0428 00:56:17.650468 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e41cf118\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:53.155425783 +0000 UTC m=+40.898961686,Count:2,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:56:19.754581 kubelet[2481]: E0428 00:56:19.748077 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:56:22.566189 kubelet[2481]: E0428 00:56:22.565181 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:56:24.466902 kubelet[2481]: E0428 00:56:24.430221 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:56:24.597045 kubelet[2481]: I0428 00:56:24.596227 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:56:24.665360 kubelet[2481]: E0428 00:56:24.663971 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:56:27.369866 kubelet[2481]: E0428 00:56:27.355306 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:56:27.692243 kubelet[2481]: E0428 00:56:27.663260 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:56:28.412955 kubelet[2481]: E0428 00:56:28.412121 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e41cf118\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:53.155425783 +0000 UTC m=+40.898961686,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:56:33.414440 kubelet[2481]: E0428 00:56:33.413954 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:56:33.772899 kubelet[2481]: I0428 00:56:33.728106 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:56:34.762066 kubelet[2481]: E0428 00:56:34.463847 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:56:35.295474 kubelet[2481]: E0428 00:56:35.260502 2481 eviction_manager.go:292] "Eviction manager: failed to get 
summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:56:36.320040 kubelet[2481]: E0428 00:56:36.277195 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:56:36.843190 kubelet[2481]: E0428 00:56:36.840198 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:56:39.666067 kubelet[2481]: E0428 00:56:39.080104 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e41cf118\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:53.155425783 +0000 UTC m=+40.898961686,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:56:42.727781 kubelet[2481]: E0428 00:56:42.726309 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:56:44.137408 kubelet[2481]: E0428 00:56:44.134404 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:56:44.859704 kubelet[2481]: I0428 00:56:44.858408 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:56:45.291511 kubelet[2481]: E0428 00:56:45.286619 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:56:45.502042 kubelet[2481]: E0428 00:56:45.491252 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:56:50.209471 kubelet[2481]: E0428 00:56:50.198778 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e41cf118\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:53.155425783 +0000 UTC 
m=+40.898961686,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:56:51.786275 kubelet[2481]: E0428 00:56:51.694479 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:56:55.659364 kubelet[2481]: E0428 00:56:55.651249 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:56:56.268634 kubelet[2481]: I0428 00:56:55.664353 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:56:57.343187 kubelet[2481]: E0428 00:56:57.334941 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:56:59.664593 kubelet[2481]: E0428 00:56:59.660814 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:57:01.749329 kubelet[2481]: E0428 00:57:01.244066 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e41cf118\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:53.155425783 +0000 UTC m=+40.898961686,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:57:02.204801 kubelet[2481]: E0428 00:57:01.754613 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:57:02.204801 kubelet[2481]: E0428 00:57:01.754810 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:57:06.005133 kubelet[2481]: E0428 00:57:05.863045 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:57:06.309285 kubelet[2481]: I0428 00:57:06.299944 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:57:06.470324 kubelet[2481]: E0428 00:57:06.461452 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:57:06.962248 kubelet[2481]: E0428 00:57:06.959951 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial 
tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:57:08.087108 kubelet[2481]: E0428 00:57:08.081405 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:57:17.620868 kubelet[2481]: E0428 00:57:17.268464 2481 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="64c57b33903cffeac2f72b1516b744e22d067007e85781edc8046a6d5452eee4" Apr 28 00:57:18.834073 kubelet[2481]: E0428 00:57:18.434068 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e41cf118\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e41cf118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.432241432 +0000 UTC m=+11.175777275,LastTimestamp:2026-04-28 00:30:53.155425783 +0000 UTC m=+40.898961686,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:57:19.959369 kubelet[2481]: E0428 00:57:19.420008 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:57:20.867607 kubelet[2481]: E0428 00:57:20.571729 2481 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:57:21.664823 kubelet[2481]: E0428 00:57:19.292303 2481 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{localhost.18aa5de7cfc0bdf7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:53.155425783 +0000 UTC m=+40.898961686,LastTimestamp:2026-04-28 00:30:53.155425783 +0000 UTC m=+40.898961686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:57:22.087164 containerd[1454]: time="2026-04-28T00:57:21.662586588Z" level=info msg="shim disconnected" id=64c57b33903cffeac2f72b1516b744e22d067007e85781edc8046a6d5452eee4 namespace=k8s.io Apr 28 00:57:22.087164 containerd[1454]: time="2026-04-28T00:57:21.682294286Z" level=warning msg="cleaning up after shim disconnected" id=64c57b33903cffeac2f72b1516b744e22d067007e85781edc8046a6d5452eee4 namespace=k8s.io Apr 28 00:57:22.087164 containerd[1454]: time="2026-04-28T00:57:21.697528982Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:57:22.862488 containerd[1454]: time="2026-04-28T00:57:22.250143284Z" level=error msg="Failed to pipe stderr of container \"64c57b33903cffeac2f72b1516b744e22d067007e85781edc8046a6d5452eee4\"" error="reading from a closed fifo" Apr 28 00:57:22.863480 containerd[1454]: time="2026-04-28T00:57:22.828192445Z" level=error msg="Failed to pipe stdout of container 
\"64c57b33903cffeac2f72b1516b744e22d067007e85781edc8046a6d5452eee4\"" error="reading from a closed fifo" Apr 28 00:57:23.392029 containerd[1454]: time="2026-04-28T00:57:23.352065620Z" level=error msg="StartContainer for \"64c57b33903cffeac2f72b1516b744e22d067007e85781edc8046a6d5452eee4\" failed" error="failed to create containerd task: failed to create shim task: context deadline exceeded: unknown" Apr 28 00:57:25.461259 kubelet[2481]: E0428 00:57:25.460499 2481 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-apiserver,Image:registry.k8s.io/kube-apiserver:v1.33.8,Command:[kube-apiserver --advertise-address=10.0.0.14 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-servers=http://10.0.0.12:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{250 -3} {} 250m 
DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/etc/kubernetes/pki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 6443 },Host:10.0.0.14,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 6443 },Host:10.0.0.14,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:15,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 6443 },Host:10.0.0.14,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:180,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-apiserver-localhost_kube-system(304f8fe43d8dae9fa1e91eba54f25a22): RunContainerError: context deadline exceeded" logger="UnhandledError" Apr 28 00:57:26.962272 containerd[1454]: time="2026-04-28T00:57:26.882336581Z" 
level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id 64c57b33903cffeac2f72b1516b744e22d067007e85781edc8046a6d5452eee4 -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/64c57b33903cffeac2f72b1516b744e22d067007e85781edc8046a6d5452eee4 delete" error="signal: killed" namespace=k8s.io Apr 28 00:57:26.962272 containerd[1454]: time="2026-04-28T00:57:26.961835578Z" level=warning msg="failed to clean up after shim disconnected" error=": signal: killed" id=64c57b33903cffeac2f72b1516b744e22d067007e85781edc8046a6d5452eee4 namespace=k8s.io Apr 28 00:57:31.095476 kubelet[2481]: E0428 00:57:29.460402 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with RunContainerError: \"context deadline exceeded\"" pod="kube-system/kube-apiserver-localhost" podUID="304f8fe43d8dae9fa1e91eba54f25a22" Apr 28 00:57:32.431760 kubelet[2481]: I0428 00:57:32.426473 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:57:33.532161 kubelet[2481]: E0428 00:57:33.531087 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:57:34.038900 kubelet[2481]: E0428 00:57:33.850243 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:57:34.038900 kubelet[2481]: E0428 00:57:33.640391 2481 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:57:34.154412 kubelet[2481]: E0428 00:57:34.111174 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:57:34.257328 kubelet[2481]: E0428 00:57:34.154892 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:57:34.392244 kubelet[2481]: E0428 00:57:34.312810 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:57:34.392244 kubelet[2481]: E0428 00:57:34.313952 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:57:34.856428 kubelet[2481]: E0428 00:57:34.855537 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e4d7d82a\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:53.941001829 +0000 UTC m=+41.684537707,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:57:35.010613 kubelet[2481]: E0428 00:57:34.856790 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:57:35.063431 kubelet[2481]: E0428 00:57:35.062311 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:57:35.734897 kubelet[2481]: E0428 00:57:35.536141 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:57:36.269755 kubelet[2481]: E0428 00:57:36.269065 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:57:39.217190 kubelet[2481]: E0428 00:57:39.207362 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:57:39.297136 kubelet[2481]: I0428 00:57:39.226265 2481 scope.go:117] "RemoveContainer" containerID="64c57b33903cffeac2f72b1516b744e22d067007e85781edc8046a6d5452eee4" Apr 28 00:57:39.688248 kubelet[2481]: E0428 00:57:39.641410 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:57:42.720006 kubelet[2481]: E0428 00:57:42.668914 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-localhost_kube-system(304f8fe43d8dae9fa1e91eba54f25a22)\"" pod="kube-system/kube-apiserver-localhost" podUID="304f8fe43d8dae9fa1e91eba54f25a22" Apr 28 00:57:43.866719 kubelet[2481]: E0428 00:57:43.600870 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:57:44.354802 kubelet[2481]: E0428 00:57:44.350370 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:57:44.556385 kubelet[2481]: E0428 00:57:43.561625 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e4d7d82a\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:53.941001829 +0000 UTC m=+41.684537707,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:57:45.072646 kubelet[2481]: E0428 00:57:44.837597 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:57:46.017591 kubelet[2481]: I0428 00:57:46.014299 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:57:46.177422 kubelet[2481]: E0428 00:57:46.173978 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:57:46.720611 kubelet[2481]: E0428 00:57:46.716942 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:57:46.753889 kubelet[2481]: I0428 00:57:46.748954 2481 scope.go:117] "RemoveContainer" containerID="64c57b33903cffeac2f72b1516b744e22d067007e85781edc8046a6d5452eee4" Apr 28 00:57:46.777301 kubelet[2481]: E0428 00:57:46.770500 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:57:46.912508 kubelet[2481]: E0428 00:57:46.891161 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-localhost_kube-system(304f8fe43d8dae9fa1e91eba54f25a22)\"" pod="kube-system/kube-apiserver-localhost" podUID="304f8fe43d8dae9fa1e91eba54f25a22" Apr 28 00:57:48.019298 kubelet[2481]: E0428 00:57:48.018523 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info 
from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:57:48.019298 kubelet[2481]: I0428 00:57:48.019531 2481 scope.go:117] "RemoveContainer" containerID="64c57b33903cffeac2f72b1516b744e22d067007e85781edc8046a6d5452eee4" Apr 28 00:57:48.022379 kubelet[2481]: E0428 00:57:48.020163 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:57:48.022379 kubelet[2481]: E0428 00:57:48.020851 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-localhost_kube-system(304f8fe43d8dae9fa1e91eba54f25a22)\"" pod="kube-system/kube-apiserver-localhost" podUID="304f8fe43d8dae9fa1e91eba54f25a22" Apr 28 00:57:52.603465 kubelet[2481]: E0428 00:57:52.548223 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:57:53.847372 kubelet[2481]: I0428 00:57:53.844158 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:57:54.464537 kubelet[2481]: E0428 00:57:54.460396 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:57:54.633394 kubelet[2481]: E0428 00:57:54.622913 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:57:55.059216 kubelet[2481]: E0428 00:57:54.803455 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch 
\"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e4d7d82a\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:53.941001829 +0000 UTC m=+41.684537707,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:57:59.779420 kubelet[2481]: E0428 00:57:59.766152 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:58:03.220223 kubelet[2481]: I0428 00:58:03.119639 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:58:03.652243 kubelet[2481]: E0428 00:58:03.651039 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:58:04.743378 kubelet[2481]: E0428 00:58:04.738899 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:58:04.743378 kubelet[2481]: I0428 00:58:04.739699 2481 scope.go:117] "RemoveContainer" containerID="64c57b33903cffeac2f72b1516b744e22d067007e85781edc8046a6d5452eee4" Apr 28 00:58:04.743378 kubelet[2481]: E0428 00:58:04.740014 2481 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:58:04.743378 kubelet[2481]: E0428 00:58:04.740306 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-localhost_kube-system(304f8fe43d8dae9fa1e91eba54f25a22)\"" pod="kube-system/kube-apiserver-localhost" podUID="304f8fe43d8dae9fa1e91eba54f25a22" Apr 28 00:58:04.743378 kubelet[2481]: E0428 00:58:04.739243 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:58:05.722747 kubelet[2481]: E0428 00:58:05.685002 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e4d7d82a\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:53.941001829 +0000 UTC m=+41.684537707,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:58:06.912314 kubelet[2481]: E0428 00:58:06.911647 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:58:11.417969 kubelet[2481]: I0428 00:58:11.417081 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:58:11.617173 kubelet[2481]: E0428 00:58:11.418409 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:58:11.617173 kubelet[2481]: E0428 00:58:11.616401 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:58:14.072333 kubelet[2481]: E0428 00:58:14.061144 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:58:14.822850 kubelet[2481]: E0428 00:58:14.821472 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:58:15.968334 kubelet[2481]: E0428 00:58:15.910430 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e4d7d82a\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:53.941001829 +0000 UTC m=+41.684537707,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:58:18.305942 kubelet[2481]: E0428 00:58:18.300339 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:58:18.368270 kubelet[2481]: I0428 00:58:18.333968 2481 scope.go:117] "RemoveContainer" containerID="64c57b33903cffeac2f72b1516b744e22d067007e85781edc8046a6d5452eee4" Apr 28 00:58:18.368270 kubelet[2481]: E0428 00:58:18.358967 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:58:18.465351 kubelet[2481]: E0428 00:58:18.462495 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-localhost_kube-system(304f8fe43d8dae9fa1e91eba54f25a22)\"" pod="kube-system/kube-apiserver-localhost" podUID="304f8fe43d8dae9fa1e91eba54f25a22" Apr 28 00:58:19.147045 kubelet[2481]: I0428 00:58:19.144932 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:58:19.180633 kubelet[2481]: E0428 00:58:19.167924 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 28 00:58:21.234564 kubelet[2481]: E0428 00:58:21.233086 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 28 00:58:24.161439 kubelet[2481]: E0428 00:58:24.160038 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:58:24.480770 kubelet[2481]: E0428 00:58:24.479353 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:58:24.911421 kubelet[2481]: E0428 00:58:24.898095 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:58:26.095860 kubelet[2481]: E0428 00:58:26.050529 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e4d7d82a\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:53.941001829 +0000 UTC m=+41.684537707,Count:2,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:58:27.123910 kubelet[2481]: I0428 00:58:27.122568 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:58:28.245312 kubelet[2481]: E0428 00:58:27.664554 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:58:28.318577 kubelet[2481]: E0428 00:58:28.309560 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:58:28.329681 kubelet[2481]: E0428 00:58:28.321341 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:58:31.895278 kubelet[2481]: E0428 00:58:31.892644 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 00:58:33.291498 kubelet[2481]: E0428 00:58:33.289530 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:58:33.437042 kubelet[2481]: I0428 00:58:33.388514 2481 scope.go:117] "RemoveContainer" containerID="64c57b33903cffeac2f72b1516b744e22d067007e85781edc8046a6d5452eee4"
Apr 28 00:58:33.517548 kubelet[2481]: E0428 00:58:33.465269 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:58:33.583681 kubelet[2481]: E0428 00:58:33.527943 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-localhost_kube-system(304f8fe43d8dae9fa1e91eba54f25a22)\"" pod="kube-system/kube-apiserver-localhost" podUID="304f8fe43d8dae9fa1e91eba54f25a22"
Apr 28 00:58:34.974013 kubelet[2481]: E0428 00:58:34.960240 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:58:35.713096 kubelet[2481]: E0428 00:58:35.710252 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:58:36.253943 kubelet[2481]: I0428 00:58:36.253141 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:58:36.260477 kubelet[2481]: E0428 00:58:36.258493 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:58:36.260477 kubelet[2481]: E0428 00:58:36.258157 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e4d7d82a\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:53.941001829 +0000 UTC m=+41.684537707,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:58:40.653692 kubelet[2481]: E0428 00:58:40.653147 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:58:40.665281 kubelet[2481]: E0428 00:58:40.656437 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:58:42.923932 kubelet[2481]: E0428 00:58:42.919973 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:58:43.510647 kubelet[2481]: E0428 00:58:43.505641 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:58:43.697570 kubelet[2481]: I0428 00:58:43.696735 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:58:43.749548 kubelet[2481]: E0428 00:58:43.698970 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:58:45.011373 kubelet[2481]: E0428 00:58:44.998552 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:58:46.421156 kubelet[2481]: E0428 00:58:46.343029 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e4d7d82a\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:53.941001829 +0000 UTC m=+41.684537707,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:58:49.718319 kubelet[2481]: E0428 00:58:49.708589 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:58:49.839444 kubelet[2481]: I0428 00:58:49.817810 2481 scope.go:117] "RemoveContainer" containerID="64c57b33903cffeac2f72b1516b744e22d067007e85781edc8046a6d5452eee4"
Apr 28 00:58:49.842000 kubelet[2481]: E0428 00:58:49.841468 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:58:50.667614 kubelet[2481]: E0428 00:58:50.654393 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:58:51.619619 containerd[1454]: time="2026-04-28T00:58:51.619195832Z" level=info msg="CreateContainer within sandbox \"112d6a952e8c7cd4fa8ae5b31482ed1db437f6a8d282211e4b68737cda330154\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:7,}"
Apr 28 00:58:51.695025 kubelet[2481]: I0428 00:58:51.643495 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:58:52.301365 kubelet[2481]: E0428 00:58:52.161128 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:58:55.848769 kubelet[2481]: E0428 00:58:55.839022 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:58:55.850252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2393635987.mount: Deactivated successfully.
Apr 28 00:58:56.259856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3711565518.mount: Deactivated successfully.
Apr 28 00:58:57.744824 containerd[1454]: time="2026-04-28T00:58:57.741844137Z" level=info msg="CreateContainer within sandbox \"112d6a952e8c7cd4fa8ae5b31482ed1db437f6a8d282211e4b68737cda330154\" for &ContainerMetadata{Name:kube-apiserver,Attempt:7,} returns container id \"93359bf4e572e2f7f9887c95628634946110ca0cbc20cd162a8b9231fd51936b\""
Apr 28 00:58:58.049614 kubelet[2481]: E0428 00:58:57.763553 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:58:58.953803 kubelet[2481]: E0428 00:58:58.663288 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e4d7d82a\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:53.941001829 +0000 UTC m=+41.684537707,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:58:59.492537 containerd[1454]: time="2026-04-28T00:58:59.455928302Z" level=info msg="StartContainer for \"93359bf4e572e2f7f9887c95628634946110ca0cbc20cd162a8b9231fd51936b\""
Apr 28 00:59:00.401693 kubelet[2481]: I0428 00:59:00.401099 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:59:00.791137 kubelet[2481]: E0428 00:59:00.759603 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:59:03.645587 kubelet[2481]: E0428 00:59:03.644868 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 00:59:04.901347 kubelet[2481]: E0428 00:59:04.900844 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:59:05.650460 kubelet[2481]: E0428 00:59:05.649120 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:59:05.663494 kubelet[2481]: E0428 00:59:05.663085 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:59:05.890106 kubelet[2481]: E0428 00:59:05.889180 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:59:08.211755 kubelet[2481]: I0428 00:59:08.210194 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:59:08.229972 kubelet[2481]: E0428 00:59:08.226007 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:59:08.265097 kubelet[2481]: E0428 00:59:08.263692 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:59:08.979159 kubelet[2481]: E0428 00:59:08.963061 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e4d7d82a\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:53.941001829 +0000 UTC m=+41.684537707,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:59:11.962022 kubelet[2481]: E0428 00:59:11.954597 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:59:14.687930 kubelet[2481]: E0428 00:59:14.680062 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 00:59:15.468725 kubelet[2481]: E0428 00:59:15.468189 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:59:15.600087 kubelet[2481]: I0428 00:59:15.598895 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:59:15.609865 kubelet[2481]: E0428 00:59:15.601983 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:59:16.011073 kubelet[2481]: E0428 00:59:16.010373 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:59:19.057577 kubelet[2481]: E0428 00:59:19.051297 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:59:19.116628 kubelet[2481]: E0428 00:59:19.058035 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e4d7d82a\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:53.941001829 +0000 UTC m=+41.684537707,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:59:23.450470 kubelet[2481]: I0428 00:59:23.450144 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:59:23.484724 kubelet[2481]: E0428 00:59:23.450821 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:59:23.484724 kubelet[2481]: E0428 00:59:23.450863 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 00:59:23.822849 kubelet[2481]: I0428 00:59:23.801017 2481 scope.go:117] "RemoveContainer" containerID="64c57b33903cffeac2f72b1516b744e22d067007e85781edc8046a6d5452eee4"
Apr 28 00:59:24.239356 containerd[1454]: time="2026-04-28T00:59:24.235363721Z" level=info msg="RemoveContainer for \"64c57b33903cffeac2f72b1516b744e22d067007e85781edc8046a6d5452eee4\""
Apr 28 00:59:24.602935 containerd[1454]: time="2026-04-28T00:59:24.588953443Z" level=info msg="RemoveContainer for \"64c57b33903cffeac2f72b1516b744e22d067007e85781edc8046a6d5452eee4\" returns successfully"
Apr 28 00:59:26.095653 kubelet[2481]: E0428 00:59:26.094148 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:59:26.095653 kubelet[2481]: E0428 00:59:26.095323 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:59:29.148270 kubelet[2481]: E0428 00:59:29.146084 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e4d7d82a\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e4d7d82a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.444490282 +0000 UTC m=+11.188026121,LastTimestamp:2026-04-28 00:30:53.941001829 +0000 UTC m=+41.684537707,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:59:29.167972 kubelet[2481]: E0428 00:59:29.148168 2481 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{localhost.18aa5de7fe93ae65 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:53.941001829 +0000 UTC m=+41.684537707,LastTimestamp:2026-04-28 00:30:53.941001829 +0000 UTC m=+41.684537707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:59:29.167972 kubelet[2481]: E0428 00:59:29.149421 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e692047b\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e692047b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,LastTimestamp:2026-04-28 00:30:54.876570224 +0000 UTC m=+42.620106072,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:59:31.942429 kubelet[2481]: E0428 00:59:31.855030 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e692047b\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e692047b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,LastTimestamp:2026-04-28 00:30:54.876570224 +0000 UTC m=+42.620106072,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:59:32.246400 kubelet[2481]: I0428 00:59:31.943251 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:59:32.314636 kubelet[2481]: E0428 00:59:32.259182 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:59:33.143175 kubelet[2481]: E0428 00:59:33.142187 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:59:36.212592 kubelet[2481]: E0428 00:59:36.211217 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:59:40.312344 kubelet[2481]: E0428 00:59:40.311455 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 00:59:40.495462 kubelet[2481]: E0428 00:59:40.312076 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:59:40.495462 kubelet[2481]: I0428 00:59:40.312932 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:59:40.497515 kubelet[2481]: E0428 00:59:40.495737 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:59:42.024986 kubelet[2481]: E0428 00:59:42.019282 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e692047b\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e692047b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,LastTimestamp:2026-04-28 00:30:54.876570224 +0000 UTC m=+42.620106072,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:59:42.220177 kubelet[2481]: E0428 00:59:42.219603 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:59:42.254961 kubelet[2481]: E0428 00:59:42.220602 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:59:46.275174 kubelet[2481]: E0428 00:59:46.272805 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:59:47.267518 kubelet[2481]: E0428 00:59:47.262175 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:59:47.611611 kubelet[2481]: E0428 00:59:47.599866 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:59:47.800618 kubelet[2481]: I0428 00:59:47.799997 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:59:47.838605 kubelet[2481]: E0428 00:59:47.829591 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:59:48.862143 kubelet[2481]: E0428 00:59:48.861264 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 00:59:52.122445 kubelet[2481]: E0428 00:59:52.113391 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e692047b\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e692047b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,LastTimestamp:2026-04-28 00:30:54.876570224 +0000 UTC m=+42.620106072,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:59:54.773985 kubelet[2481]: E0428 00:59:54.765516 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 00:59:55.208429 kubelet[2481]: I0428 00:59:55.207085 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:59:55.221154 kubelet[2481]: E0428 00:59:55.218016 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 00:59:56.308619 kubelet[2481]: E0428 00:59:56.287142 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 01:00:03.298432 kubelet[2481]: E0428 01:00:03.185492 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e692047b\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e692047b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,LastTimestamp:2026-04-28 00:30:54.876570224 +0000 UTC m=+42.620106072,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 01:00:03.860515 kubelet[2481]: E0428 01:00:03.792295 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 01:00:04.897558 kubelet[2481]: E0428 01:00:04.878528 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 01:00:05.369287 kubelet[2481]: I0428 01:00:05.368708 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 01:00:05.467500 kubelet[2481]: E0428 01:00:05.459215 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 01:00:06.372624 kubelet[2481]: E0428 01:00:06.341413 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 01:00:12.764215 kubelet[2481]: E0428 01:00:12.406121 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 01:00:14.368573 kubelet[2481]: E0428 01:00:13.996486 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/default/events/localhost.18aa5de0e692047b\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de0e692047b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:23.473468539 +0000 UTC m=+11.217004401,LastTimestamp:2026-04-28 00:30:54.876570224 +0000 UTC m=+42.620106072,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 01:00:17.356934 kubelet[2481]: E0428 01:00:17.351912 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 01:00:17.761091 kubelet[2481]: I0428 01:00:17.760144 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 01:00:17.960509 kubelet[2481]: E0428 01:00:17.918500 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Apr 28 01:00:19.721410 kubelet[2481]: E0428 01:00:19.720888 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 01:00:19.931277 kubelet[2481]: E0428 01:00:19.930997 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 28 01:00:19.931277 kubelet[2481]: E0428 01:00:19.930939 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 01:00:23.144175 sudo[1635]: pam_unix(sudo:session): session closed for user root
Apr 28 01:00:23.606297 sshd[1632]: pam_unix(sshd:session): session closed for user core
Apr 28 01:00:24.521904 systemd[1]: sshd@6-10.0.0.14:22-10.0.0.1:34538.service: Deactivated successfully.
Apr 28 01:00:24.907757 kubelet[2481]: E0428 01:00:24.463386 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"